DPDK patches and discussions
* [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD
@ 2016-03-10 13:45 Rasesh Mody
From: Rasesh Mody @ 2016-03-10 13:45 UTC
  To: dev; +Cc: sony.chacko

Submitting the v2 patch series for the QEDE PMD after incorporating review
comments.

Includes:
 - Rename the common module to base/ to be consistent with other PMDs
 - Split the driver/common module into several patches on a feature-by-feature
   basis
 - Fix checkpatch warnings using the latest checkpatch.pl and the correct
   checkpatch options
 - Move out newly added PCI ids from rte_pci_dev_ids.h and define those within
   the PMD
 - Rename common functions properly to avoid namespace clashes
 - Fix documentation to wrap it under 80 columns

Please Apply.

Thanks!
Rasesh

Harish Patil (1):
  qede: add maintainers

Rasesh Mody (9):
  qede: add documentation
  qede: Add license file
  qede: Add base driver
  qede: Add core driver
  qede: Add L2 support
  qede: Add SRIOV support
  qede: Add attention support
  qede: Add DCBX support
  qede: enable PMD build

 MAINTAINERS                                 |    7 +
 config/common_base                          |   14 +
 doc/guides/nics/index.rst                   |    1 +
 doc/guides/nics/qede.rst                    |  340 +
 drivers/net/Makefile                        |    1 +
 drivers/net/qede/LICENSE.qede_pmd           |   28 +
 drivers/net/qede/Makefile                   |   95 +
 drivers/net/qede/base/bcm_osal.c            |  178 +
 drivers/net/qede/base/bcm_osal.h            |  395 +
 drivers/net/qede/base/common_hsi.h          |  714 ++
 drivers/net/qede/base/ecore.h               |  746 ++
 drivers/net/qede/base/ecore_attn_values.h   |13287 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_chain.h         |  724 ++
 drivers/net/qede/base/ecore_cxt.c           | 1961 ++++
 drivers/net/qede/base/ecore_cxt.h           |  157 +
 drivers/net/qede/base/ecore_cxt_api.h       |   79 +
 drivers/net/qede/base/ecore_dcbx.c          |  887 ++
 drivers/net/qede/base/ecore_dcbx.h          |   55 +
 drivers/net/qede/base/ecore_dcbx_api.h      |  160 +
 drivers/net/qede/base/ecore_dev.c           | 3578 ++++++++
 drivers/net/qede/base/ecore_dev_api.h       |  497 +
 drivers/net/qede/base/ecore_gtt_reg_addr.h  |   42 +
 drivers/net/qede/base/ecore_gtt_values.h    |   33 +
 drivers/net/qede/base/ecore_hsi_common.h    | 1912 ++++
 drivers/net/qede/base/ecore_hsi_eth.h       | 1912 ++++
 drivers/net/qede/base/ecore_hsi_tools.h     | 1081 +++
 drivers/net/qede/base/ecore_hw.c            |  992 ++
 drivers/net/qede/base/ecore_hw.h            |  269 +
 drivers/net/qede/base/ecore_hw_defs.h       |   49 +
 drivers/net/qede/base/ecore_init_fw_funcs.c | 1275 +++
 drivers/net/qede/base/ecore_init_fw_funcs.h |  263 +
 drivers/net/qede/base/ecore_init_ops.c      |  599 ++
 drivers/net/qede/base/ecore_init_ops.h      |  103 +
 drivers/net/qede/base/ecore_int.c           | 2225 +++++
 drivers/net/qede/base/ecore_int.h           |  234 +
 drivers/net/qede/base/ecore_int_api.h       |  277 +
 drivers/net/qede/base/ecore_iov_api.h       |  933 ++
 drivers/net/qede/base/ecore_iro.h           |  115 +
 drivers/net/qede/base/ecore_iro_values.h    |   59 +
 drivers/net/qede/base/ecore_l2.c            | 1798 ++++
 drivers/net/qede/base/ecore_l2.h            |  151 +
 drivers/net/qede/base/ecore_l2_api.h        |  401 +
 drivers/net/qede/base/ecore_mcp.c           | 1928 ++++
 drivers/net/qede/base/ecore_mcp.h           |  304 +
 drivers/net/qede/base/ecore_mcp_api.h       |  611 ++
 drivers/net/qede/base/ecore_proto_if.h      |   28 +
 drivers/net/qede/base/ecore_rt_defs.h       |  446 +
 drivers/net/qede/base/ecore_sp_api.h        |   42 +
 drivers/net/qede/base/ecore_sp_commands.c   |  525 ++
 drivers/net/qede/base/ecore_sp_commands.h   |  137 +
 drivers/net/qede/base/ecore_spq.c           |  944 ++
 drivers/net/qede/base/ecore_spq.h           |  284 +
 drivers/net/qede/base/ecore_sriov.c         | 3422 +++++++
 drivers/net/qede/base/ecore_sriov.h         |  390 +
 drivers/net/qede/base/ecore_status.h        |   30 +
 drivers/net/qede/base/ecore_utils.h         |   31 +
 drivers/net/qede/base/ecore_vf.c            | 1322 +++
 drivers/net/qede/base/ecore_vf.h            |  415 +
 drivers/net/qede/base/ecore_vf_api.h        |  186 +
 drivers/net/qede/base/ecore_vfpf_if.h       |  590 ++
 drivers/net/qede/base/eth_common.h          |  526 ++
 drivers/net/qede/base/mcp_public.h          | 1195 +++
 drivers/net/qede/base/nvm_cfg.h             |  919 ++
 drivers/net/qede/base/reg_addr.h            | 1107 +++
 drivers/net/qede/qede_eth_if.c              |  456 +
 drivers/net/qede/qede_eth_if.h              |  176 +
 drivers/net/qede/qede_ethdev.c              |  986 ++
 drivers/net/qede/qede_ethdev.h              |  156 +
 drivers/net/qede/qede_if.h                  |  164 +
 drivers/net/qede/qede_logs.h                |   93 +
 drivers/net/qede/qede_main.c                |  601 ++
 drivers/net/qede/qede_rxtx.c                | 1364 +++
 drivers/net/qede/qede_rxtx.h                |  187 +
 drivers/net/qede/rte_pmd_qede_version.map   |    4 +
 mk/rte.app.mk                               |    2 +
 scripts/test-build.sh                       |    1 +
 76 files changed, 58199 insertions(+)
 create mode 100644 doc/guides/nics/qede.rst
 create mode 100644 drivers/net/qede/LICENSE.qede_pmd
 create mode 100644 drivers/net/qede/Makefile
 create mode 100644 drivers/net/qede/base/bcm_osal.c
 create mode 100644 drivers/net/qede/base/bcm_osal.h
 create mode 100644 drivers/net/qede/base/common_hsi.h
 create mode 100644 drivers/net/qede/base/ecore.h
 create mode 100644 drivers/net/qede/base/ecore_attn_values.h
 create mode 100644 drivers/net/qede/base/ecore_chain.h
 create mode 100644 drivers/net/qede/base/ecore_cxt.c
 create mode 100644 drivers/net/qede/base/ecore_cxt.h
 create mode 100644 drivers/net/qede/base/ecore_cxt_api.h
 create mode 100644 drivers/net/qede/base/ecore_dcbx.c
 create mode 100644 drivers/net/qede/base/ecore_dcbx.h
 create mode 100644 drivers/net/qede/base/ecore_dcbx_api.h
 create mode 100644 drivers/net/qede/base/ecore_dev.c
 create mode 100644 drivers/net/qede/base/ecore_dev_api.h
 create mode 100644 drivers/net/qede/base/ecore_gtt_reg_addr.h
 create mode 100644 drivers/net/qede/base/ecore_gtt_values.h
 create mode 100644 drivers/net/qede/base/ecore_hsi_common.h
 create mode 100644 drivers/net/qede/base/ecore_hsi_eth.h
 create mode 100644 drivers/net/qede/base/ecore_hsi_tools.h
 create mode 100644 drivers/net/qede/base/ecore_hw.c
 create mode 100644 drivers/net/qede/base/ecore_hw.h
 create mode 100644 drivers/net/qede/base/ecore_hw_defs.h
 create mode 100644 drivers/net/qede/base/ecore_init_fw_funcs.c
 create mode 100644 drivers/net/qede/base/ecore_init_fw_funcs.h
 create mode 100644 drivers/net/qede/base/ecore_init_ops.c
 create mode 100644 drivers/net/qede/base/ecore_init_ops.h
 create mode 100644 drivers/net/qede/base/ecore_int.c
 create mode 100644 drivers/net/qede/base/ecore_int.h
 create mode 100644 drivers/net/qede/base/ecore_int_api.h
 create mode 100644 drivers/net/qede/base/ecore_iov_api.h
 create mode 100644 drivers/net/qede/base/ecore_iro.h
 create mode 100644 drivers/net/qede/base/ecore_iro_values.h
 create mode 100644 drivers/net/qede/base/ecore_l2.c
 create mode 100644 drivers/net/qede/base/ecore_l2.h
 create mode 100644 drivers/net/qede/base/ecore_l2_api.h
 create mode 100644 drivers/net/qede/base/ecore_mcp.c
 create mode 100644 drivers/net/qede/base/ecore_mcp.h
 create mode 100644 drivers/net/qede/base/ecore_mcp_api.h
 create mode 100644 drivers/net/qede/base/ecore_proto_if.h
 create mode 100644 drivers/net/qede/base/ecore_rt_defs.h
 create mode 100644 drivers/net/qede/base/ecore_sp_api.h
 create mode 100644 drivers/net/qede/base/ecore_sp_commands.c
 create mode 100644 drivers/net/qede/base/ecore_sp_commands.h
 create mode 100644 drivers/net/qede/base/ecore_spq.c
 create mode 100644 drivers/net/qede/base/ecore_spq.h
 create mode 100644 drivers/net/qede/base/ecore_sriov.c
 create mode 100644 drivers/net/qede/base/ecore_sriov.h
 create mode 100644 drivers/net/qede/base/ecore_status.h
 create mode 100644 drivers/net/qede/base/ecore_utils.h
 create mode 100644 drivers/net/qede/base/ecore_vf.c
 create mode 100644 drivers/net/qede/base/ecore_vf.h
 create mode 100644 drivers/net/qede/base/ecore_vf_api.h
 create mode 100644 drivers/net/qede/base/ecore_vfpf_if.h
 create mode 100644 drivers/net/qede/base/eth_common.h
 create mode 100644 drivers/net/qede/base/mcp_public.h
 create mode 100644 drivers/net/qede/base/nvm_cfg.h
 create mode 100644 drivers/net/qede/base/reg_addr.h
 create mode 100644 drivers/net/qede/qede_eth_if.c
 create mode 100644 drivers/net/qede/qede_eth_if.h
 create mode 100644 drivers/net/qede/qede_ethdev.c
 create mode 100644 drivers/net/qede/qede_ethdev.h
 create mode 100644 drivers/net/qede/qede_if.h
 create mode 100644 drivers/net/qede/qede_logs.h
 create mode 100644 drivers/net/qede/qede_main.c
 create mode 100644 drivers/net/qede/qede_rxtx.c
 create mode 100644 drivers/net/qede/qede_rxtx.h
 create mode 100644 drivers/net/qede/rte_pmd_qede_version.map

-- 
1.7.10.3


* [dpdk-dev] [PATCH v2 01/10] qede: add maintainers
From: Rasesh Mody @ 2016-03-10 13:45 UTC
  To: dev; +Cc: sony.chacko

From: Harish Patil <harish.patil@qlogic.com>

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 MAINTAINERS |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 628bc05..1b27467 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -352,6 +352,13 @@ Null PMD
 M: Tetsuya Mukawa <mukawa@igel.co.jp>
 F: drivers/net/null/
 
+QLogic qede PMD
+M: Harish Patil <harish.patil@qlogic.com>
+M: Rasesh Mody <rasesh.mody@qlogic.com>
+M: Sony Chacko <sony.chacko@qlogic.com>
+F: drivers/net/qede/
+F: doc/guides/nics/qede.rst
+
 Intel AES-NI Multi-Buffer
 M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/aesni_mb/
-- 
1.7.10.3


* [dpdk-dev] [PATCH v2 02/10] qede: add documentation
From: Rasesh Mody @ 2016-03-10 13:45 UTC
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 doc/guides/nics/index.rst |    1 +
 doc/guides/nics/qede.rst  |  340 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 341 insertions(+)
 create mode 100644 doc/guides/nics/qede.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 8618114..f092f75 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     virtio
     vmxnet3
     pcap_ring
+    qede
 
 **Figures**
 
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
new file mode 100644
index 0000000..072f4a1
--- /dev/null
+++ b/doc/guides/nics/qede.rst
@@ -0,0 +1,340 @@
+..  BSD LICENSE
+    Copyright (c) 2016 QLogic Corporation
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of QLogic Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+QEDE Poll Mode Driver
+======================
+
+The QEDE poll mode driver library (**librte_pmd_qede**) implements support
+for the **QLogic FastLinQ QL4xxxx 25G/40G CNA** family of adapters, as well
+as their virtual functions (VF) in an SR-IOV context. It is supported on
+several standard Linux distributions such as RHEL 7.x and SLES 12.x, and is
+compile-tested under FreeBSD.
+
+More information can be found at `QLogic Corporation's Official Website
+<http://www.qlogic.com>`_.
+
+Supported Features
+------------------
+
+- Unicast/Multicast filtering
+- Promiscuous mode
+- Allmulti mode
+- Port hardware statistics
+- Jumbo frames (using single buffer)
+- VLAN offload - Filtering and stripping
+- Stateless checksum offloads (IPv4/TCP/UDP)
+- Multiple Rx/Tx queues (queue-pairs)
+- RSS (with default table/key)
+- TSS
+- Multiple MAC addresses
+- Default pause flow control
+- SR-IOV VF
+
+Non-supported Features
+----------------------
+
+- Scatter-Gather Rx/Tx frames
+- User configurable RETA table/key
+- Unequal number of Rx/Tx queues
+- MTU change (dynamic)
+- SR-IOV PF
+- Tunneling offloads
+- Ungraceful recovery
+
+Supported QLogic NICs
+---------------------
+
+- QLogic FastLinQ QL4xxxx 25G/40G CNAs
+
+Prerequisites
+-------------
+
+- Requires storm firmware version **8.7.x** and management firmware
+  version **8.7.x** or higher. The storm firmware may be available under
+  ``/usr/lib/firmware/qed/`` in certain newer Linux distributions
+  (e.g. ``qed_init_values_zipped-8.7.8.0.bin``).
+
+- If the required firmware files are not available, visit the
+  `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
+
+- This driver relies on the external zlib library (``-lz``) to uncompress
+  the firmware file.
+
+Performance note
+~~~~~~~~~~~~~~~~
+
+- For better performance, it is recommended to use 8K or 16K Rx/Tx rings,
+  as illustrated below.
+
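+  A minimal sketch (the ``--rxd``/``--txd`` flags mirror the sample
+  application notes later in this guide; the core mask and ring sizes shown
+  here are illustrative)::
+
+     testpmd -c 0xff1 -n 4 -- -i --rxd=8192 --txd=8192
+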
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``.config`` file. Please note that
+enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_PMD`` (default **y**)
+
+  Toggle compilation of the QEDE PMD.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO`` (default **n**)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_DEBUG_ECORE`` (default **n**)
+
+  Toggle display of ecore related messages.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_DEBUG_TX`` (default **n**)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_DEBUG_RX`` (default **n**)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_RX_COAL_US`` (default **24**)
+
+  Change Rx interrupt coalescing timer (in us).
+
+- ``CONFIG_RTE_LIBRTE_QEDE_TX_COAL_US`` (default **48**)
+
+  Change Tx interrupt coalescing timer (in us).
+
+- ``CONFIG_RTE_LIBRTE_QEDE_TX_SWITCHING`` (default **y**)
+
+  Toggle Tx switching.
+
+- ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **n**)
+
+  Path of the firmware file (overrides the default location).
+
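+For example, a build with generic debug messages enabled and an explicit
+firmware path could set the following (the firmware file name here is
+illustrative, not taken from a shipped release)::
+
+   CONFIG_RTE_LIBRTE_QEDE_PMD=y
+   CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO=y
+   CONFIG_RTE_LIBRTE_QEDE_FW="/lib/firmware/qed/qed_fw_file.bin"
+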
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile QEDE PMD for Linux x86_64 gcc target, run the following "make"
+command::
+
+   cd <DPDK-source-directory>
+   make config T=x86_64-native-linuxapp-gcc install
+
+To compile QEDE PMD for Linux x86_64 clang target, run the following "make"
+command::
+
+   cd <DPDK-source-directory>
+   make config T=x86_64-native-linuxapp-clang install
+
+To compile QEDE PMD for FreeBSD x86_64 clang target, run the following "gmake"
+command::
+
+   cd <DPDK-source-directory>
+   gmake config T=x86_64-native-bsdapp-clang install
+
+To compile QEDE PMD for FreeBSD x86_64 gcc target, run the following "gmake"
+command::
+
+   cd <DPDK-source-directory>
+   gmake config T=x86_64-native-bsdapp-gcc install -Wl,-rpath=\
+                                        /usr/local/lib/gcc48 CC=gcc48
+
+
+Sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with QLogic FastLinQ
+QL4xxxx devices managed by ``librte_pmd_qede`` on a Linux operating system.
+
+#. Request huge pages:
+
+   .. code-block:: console
+
+      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+#. Load ``igb_uio`` or ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+   or
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind the QLogic FastLinQ QL4xxxx adapters to ``igb_uio`` or ``vfio-pci``
+   loaded in the previous step::
+
+      ./tools/dpdk_nic_bind.py --bind igb_uio 0000:84:00.0 0000:84:00.1 \
+                                              0000:84:00.2 0000:84:00.3
+
+   or
+
+   Set up VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      sudo chmod a+x /dev/vfio
+
+      sudo chmod 0666 /dev/vfio/*
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0000:84:00.0 0000:84:00.1 \
+                                               0000:84:00.2 0000:84:00.3
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      testpmd -c 0xff1 -n 4 -- -i --nb-cores=8 --portmask=0xf --rxd=4096 \
+      --txd=4096 --txfreet=4068 --enable-rx-cksum --rxq=4 --txq=4 \
+      --rss-ip --rss-udp
+
+      [...]
+
+      EAL: PCI device 0000:84:00.0 on NUMA socket 1
+      EAL:   probe driver: 1077:1634 rte_qede_pmd
+      EAL:   Not managed by a supported kernel driver, skipped
+      EAL: PCI device 0000:84:00.1 on NUMA socket 1
+      EAL:   probe driver: 1077:1634 rte_qede_pmd
+      EAL:   Not managed by a supported kernel driver, skipped
+      EAL: PCI device 0000:88:00.0 on NUMA socket 1
+      EAL:   probe driver: 1077:1656 rte_qede_pmd
+      EAL:   PCI memory mapped at 0x7f738b200000
+      EAL:   PCI memory mapped at 0x7f738b280000
+      EAL:   PCI memory mapped at 0x7f738b300000
+      PMD: Chip details : BB1
+      PMD: Driver version : QEDE PMD 8.7.9.0_1.0.0
+      PMD: Firmware version : 8.7.7.0
+      PMD: Management firmware version : 8.7.8.0
+      PMD: Firmware file : /lib/firmware/qed/qed_init_values_zipped-8.7.7.0.bin
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_common_dev_init:macaddr \
+                                                          00:0e:1e:d2:09:9c
+      [...]
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_tx_queue_setup:txq 0 num_desc 4096 \
+                                                  tx_free_thresh 4068 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_tx_queue_setup:txq 1 num_desc 4096 \
+                                                  tx_free_thresh 4068 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_tx_queue_setup:txq 2 num_desc 4096 \
+                                                  tx_free_thresh 4068 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_tx_queue_setup:txq 3 num_desc 4096 \
+                                                  tx_free_thresh 4068 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_rx_queue_setup:rxq 0 num_desc 4096 \
+                                                  rx_buf_size=2148 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_rx_queue_setup:rxq 1 num_desc 4096 \
+                                                  rx_buf_size=2148 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_rx_queue_setup:rxq 2 num_desc 4096 \
+                                                  rx_buf_size=2148 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_rx_queue_setup:rxq 3 num_desc 4096 \
+                                                  rx_buf_size=2148 socket 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_dev_start:port 0
+      [QEDE PMD: (84:00.0:dpdk-port-0)]qede_dev_start:link status: down
+      [...]
+      Checking link statuses...
+      Port 0 Link Up - speed 25000 Mbps - full-duplex
+      Port 1 Link Up - speed 25000 Mbps - full-duplex
+      Port 2 Link Up - speed 25000 Mbps - full-duplex
+      Port 3 Link Up - speed 25000 Mbps - full-duplex
+      Done
+      testpmd>
+
+
+SR-IOV: Prerequisites and Sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section provides instructions to configure SR-IOV with Linux OS.
+
+**Note**: The QEDE PMD binds to the SR-IOV VF devices, while the native
+Linux kernel driver (qede) functions as the SR-IOV PF driver.
+
+#. Verify that the SR-IOV and ARI capabilities are enabled on the adapter
+   using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -s <slot> -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      [...]
+      Capabilities: [1b8 v1] Alternative Routing-ID Interpretation (ARI)
+      [...]
+      Capabilities: [1c0 v1] Single Root I/O Virtualization (SR-IOV)
+      [...]
+      Kernel driver in use: igb_uio
+
+#. Load the kernel module:
+
+   .. code-block:: console
+
+      modprobe qede
+
+   Example output:
+
+   .. code-block:: console
+
+      systemd-udevd[4848]: renamed network interface eth0 to ens5f0
+      systemd-udevd[4848]: renamed network interface eth1 to ens5f1
+
+#. Bring up the PF ports:
+
+   .. code-block:: console
+
+      ifconfig ens5f0 up
+      ifconfig ens5f1 up
+
+#. Create VF device(s):
+
+   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs
+   entry of the parent PF.
+
+   Example:
+
+   .. code-block:: console
+
+      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
+
+
+#. Assign VF MAC address:
+
+   Assign a MAC address to the VF using the iproute2 utility. The syntax is::
+
+      ip link set <PF iface> vf <VF id> mac <macaddr>
+
+   Example:
+
+   .. code-block:: console
+
+      ip link set ens5f0 vf 0 mac 52:54:00:2f:9d:e8
+
+
+#. PCI Passthrough:
+
+   The VF devices may be passed through to the guest VM using virt-manager or
+   virsh. QEDE PMD should be used to bind the VF devices in the guest VM
+   using the instructions outlined in the Application notes above.
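+
+   For example, inside the guest (the VF PCI address shown is hypothetical)::
+
+      ./tools/dpdk_nic_bind.py --bind igb_uio 0000:00:05.0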
-- 
1.7.10.3


* [dpdk-dev] [PATCH v2 03/10] qede: Add license file
From: Rasesh Mody @ 2016-03-10 13:45 UTC
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/LICENSE.qede_pmd |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)
 create mode 100644 drivers/net/qede/LICENSE.qede_pmd

diff --git a/drivers/net/qede/LICENSE.qede_pmd b/drivers/net/qede/LICENSE.qede_pmd
new file mode 100644
index 0000000..c7cbdcc
--- /dev/null
+++ b/drivers/net/qede/LICENSE.qede_pmd
@@ -0,0 +1,28 @@
+/*
+ * BSD LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of QLogic Corporation nor the name of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written consent.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
-- 
1.7.10.3


* [dpdk-dev] [PATCH v2 05/10] qede: Add core driver
From: Rasesh Mody @ 2016-03-10 13:45 UTC
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/Makefile                 |   90 +++
 drivers/net/qede/qede_eth_if.h            |  176 +++++
 drivers/net/qede/qede_ethdev.c            |  957 +++++++++++++++++++++++
 drivers/net/qede/qede_ethdev.h            |  155 ++++
 drivers/net/qede/qede_if.h                |  155 ++++
 drivers/net/qede/qede_logs.h              |   93 +++
 drivers/net/qede/qede_main.c              |  548 ++++++++++++++
 drivers/net/qede/qede_rxtx.c              | 1172 +++++++++++++++++++++++++++++
 drivers/net/qede/qede_rxtx.h              |  187 +++++
 drivers/net/qede/rte_pmd_qede_version.map |    4 +
 10 files changed, 3537 insertions(+)
 create mode 100644 drivers/net/qede/Makefile
 create mode 100644 drivers/net/qede/qede_eth_if.h
 create mode 100644 drivers/net/qede/qede_ethdev.c
 create mode 100644 drivers/net/qede/qede_ethdev.h
 create mode 100644 drivers/net/qede/qede_if.h
 create mode 100644 drivers/net/qede/qede_logs.h
 create mode 100644 drivers/net/qede/qede_main.c
 create mode 100644 drivers/net/qede/qede_rxtx.c
 create mode 100644 drivers/net/qede/qede_rxtx.h
 create mode 100644 drivers/net/qede/rte_pmd_qede_version.map

diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
new file mode 100644
index 0000000..efaefb2
--- /dev/null
+++ b/drivers/net/qede/Makefile
@@ -0,0 +1,90 @@
+#    Copyright (c) 2016 QLogic Corporation.
+#    All rights reserved.
+#    www.qlogic.com
+#
+#    See LICENSE.qede_pmd for copyright and licensing details.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_qede.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_qede_version.map
+
+LIBABIVER := 1
+
+#
+#OS
+#
+OS_TYPE := $(shell uname -s)
+
+#
+# CFLAGS
+#
+CFLAGS_ECORE_DRIVER = -Wno-unused-parameter
+CFLAGS_ECORE_DRIVER += -Wno-unused-value
+CFLAGS_ECORE_DRIVER += -Wno-sign-compare
+CFLAGS_ECORE_DRIVER += -Wno-missing-prototypes
+CFLAGS_ECORE_DRIVER += -Wno-cast-qual
+CFLAGS_ECORE_DRIVER += -Wno-unused-function
+CFLAGS_ECORE_DRIVER += -Wno-unused-variable
+CFLAGS_ECORE_DRIVER += -Wno-strict-aliasing
+CFLAGS_ECORE_DRIVER += -Wno-format-nonliteral
+ifeq ($(OS_TYPE),Linux)
+CFLAGS_ECORE_DRIVER += -Wno-shift-negative-value
+endif
+
+ifneq (,$(filter gcc gcc48,$(CC)))
+CFLAGS_ECORE_DRIVER += -Wno-unused-but-set-variable
+CFLAGS_ECORE_DRIVER += -Wno-missing-declarations
+CFLAGS_ECORE_DRIVER += -Wno-maybe-uninitialized
+CFLAGS_ECORE_DRIVER += -Wno-strict-prototypes
+else ifeq ($(CC), clang)
+CFLAGS_ECORE_DRIVER += -Wno-format-extra-args
+CFLAGS_ECORE_DRIVER += -Wno-visibility
+CFLAGS_ECORE_DRIVER += -Wno-empty-body
+CFLAGS_ECORE_DRIVER += -Wno-invalid-source-encoding
+CFLAGS_ECORE_DRIVER += -Wno-sometimes-uninitialized
+CFLAGS_ECORE_DRIVER += -Wno-pointer-bool-conversion
+else
+#icc flags
+endif
+
+#
+# Add extra flags for base ecore driver files
+# to disable warnings in them
+#
+ECORE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(ECORE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_ECORE_DRIVER)))
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_dev.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_cxt.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_sp_commands.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_init_fw_funcs.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_spq.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_init_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_mcp.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_int.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/bcm_osal.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+
+# dependent libs:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_net lib/librte_malloc
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
new file mode 100644
index 0000000..47b169d
--- /dev/null
+++ b/drivers/net/qede/qede_eth_if.h
@@ -0,0 +1,176 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef _QEDE_ETH_IF_H
+#define _QEDE_ETH_IF_H
+
+#include "qede_if.h"
+
+/* forward decl */
+struct eth_slow_path_rx_cqe;
+
+#define INIT_STRUCT_FIELD(field, value) .field = value
+
+#define QED_ETH_INTERFACE_VERSION       609
+
+enum qed_filter_rx_mode_type {
+	QED_FILTER_RX_MODE_TYPE_REGULAR,
+	QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC,
+	QED_FILTER_RX_MODE_TYPE_PROMISC,
+};
+
+enum qed_filter_xcast_params_type {
+	QED_FILTER_XCAST_TYPE_ADD,
+	QED_FILTER_XCAST_TYPE_DEL,
+	QED_FILTER_XCAST_TYPE_REPLACE,
+};
+
+enum qed_filter_type {
+	QED_FILTER_TYPE_UCAST,
+	QED_FILTER_TYPE_MCAST,
+	QED_FILTER_TYPE_RX_MODE,
+	QED_MAX_FILTER_TYPES,
+};
+
+struct qed_dev_eth_info {
+	struct qed_dev_info common;
+
+	uint8_t num_queues;
+	uint8_t num_tc;
+
+	struct ether_addr port_mac;
+	uint8_t num_vlan_filters;
+};
+
+struct qed_update_vport_rss_params {
+	uint16_t rss_ind_table[128];
+	uint32_t rss_key[10];
+};
+
+struct qed_stop_rxq_params {
+	uint8_t rss_id;
+	uint8_t rx_queue_id;
+	uint8_t vport_id;
+	bool eq_completion_only;
+};
+
+struct qed_update_vport_params {
+	uint8_t vport_id;
+	uint8_t update_vport_active_flg;
+	uint8_t vport_active_flg;
+	uint8_t update_inner_vlan_removal_flg;
+	uint8_t inner_vlan_removal_flg;
+	uint8_t update_tx_switching_flg;
+	uint8_t tx_switching_flg;
+	uint8_t update_accept_any_vlan_flg;
+	uint8_t accept_any_vlan;
+	uint8_t update_rss_flg;
+	struct qed_update_vport_rss_params rss_params;
+};
+
+struct qed_start_vport_params {
+	bool remove_inner_vlan;
+	bool handle_ptp_pkts;
+	bool gro_enable;
+	bool drop_ttl0;
+	uint8_t vport_id;
+	uint16_t mtu;
+	bool clear_stats;
+};
+
+struct qed_stop_txq_params {
+	uint8_t rss_id;
+	uint8_t tx_queue_id;
+};
+
+struct qed_filter_ucast_params {
+	enum qed_filter_xcast_params_type type;
+	uint8_t vlan_valid;
+	uint16_t vlan;
+	uint8_t mac_valid;
+	unsigned char mac[ETHER_ADDR_LEN];
+} __attribute__ ((__packed__));
+
+struct qed_filter_mcast_params {
+	enum qed_filter_xcast_params_type type;
+	uint8_t num;
+	unsigned char mac[64][ETHER_ADDR_LEN];
+};
+
+union qed_filter_type_params {
+	enum qed_filter_rx_mode_type accept_flags;
+	struct qed_filter_ucast_params ucast;
+	struct qed_filter_mcast_params mcast;
+};
+
+struct qed_filter_params {
+	enum qed_filter_type type;
+	union qed_filter_type_params filter;
+};
+
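+/* Callback table exported by the base (ecore) driver to the core PMD;
+ * the PMD drives vport, queue and filter configuration through these ops.
+ */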
+struct qed_eth_ops {
+	const struct qed_common_ops *common;
+
+	int (*fill_dev_info)(struct ecore_dev *edev,
+			     struct qed_dev_eth_info *info);
+
+	int (*vport_start)(struct ecore_dev *edev,
+			   struct qed_start_vport_params *params);
+
+	int (*vport_stop)(struct ecore_dev *edev, uint8_t vport_id);
+
+	int (*vport_update)(struct ecore_dev *edev,
+			    struct qed_update_vport_params *params);
+
+	int (*q_rx_start)(struct ecore_dev *cdev,
+			  uint8_t rss_id, uint8_t rx_queue_id,
+			  uint8_t vport_id, uint16_t sb,
+			  uint8_t sb_index, uint16_t bd_max_bytes,
+			  dma_addr_t bd_chain_phys_addr,
+			  dma_addr_t cqe_pbl_addr,
+			  uint16_t cqe_pbl_size, void OSAL_IOMEM**pp_prod);
+
+	int (*q_rx_stop)(struct ecore_dev *edev,
+			 struct qed_stop_rxq_params *params);
+
+	int (*q_tx_start)(struct ecore_dev *edev,
+			  uint8_t rss_id, uint16_t tx_queue_id,
+			  uint8_t vport_id, uint16_t sb,
+			  uint8_t sb_index,
+			  dma_addr_t pbl_addr,
+			  uint16_t pbl_size, void OSAL_IOMEM**pp_doorbell);
+
+	int (*q_tx_stop)(struct ecore_dev *edev,
+			 struct qed_stop_txq_params *params);
+
+	int (*eth_cqe_completion)(struct ecore_dev *edev,
+				  uint8_t rss_id,
+				  struct eth_slow_path_rx_cqe *cqe);
+
+	int (*fastpath_stop)(struct ecore_dev *edev);
+
+	void (*get_vport_stats)(struct ecore_dev *edev,
+				struct ecore_eth_stats *stats);
+
+	int (*filter_config)(struct ecore_dev *edev,
+			     struct qed_filter_params *params);
+};
+
+/* externs */
+
+extern const struct qed_common_ops qed_common_ops_pass;
+
+extern int qed_fill_eth_dev_info(struct ecore_dev *edev,
+				 struct qed_dev_eth_info *info);
+
+void qed_put_eth_ops(void);
+
+int qed_configure_filter_rx_mode(struct ecore_dev *edev,
+				 enum qed_filter_rx_mode_type type);
+
+#endif /* _QEDE_ETH_IF_H */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
new file mode 100644
index 0000000..d5f7019
--- /dev/null
+++ b/drivers/net/qede/qede_ethdev.c
@@ -0,0 +1,957 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "qede_ethdev.h"
+
+/* Globals */
+static const struct qed_eth_ops *qed_ops;
+static const char *drivername = "qede pmd";
+
+static void qede_interrupt_action(struct ecore_hwfn *p_hwfn)
+{
+	ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
+}
+
+static void
+qede_interrupt_handler(__rte_unused struct rte_intr_handle *handle, void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	qede_interrupt_action(ECORE_LEADING_HWFN(edev));
+	if (rte_intr_enable(&(eth_dev->pci_dev->intr_handle)))
+		DP_ERR(edev, "rte_intr_enable failed\n");
+}
+
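+/* Cache base-driver device info and its ops table in the PMD private data */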
+static void
+qede_alloc_etherdev(struct qede_dev *qdev, struct qed_dev_eth_info *info)
+{
+	rte_memcpy(&qdev->dev_info, info, sizeof(*info));
+	qdev->num_tc = qdev->dev_info.num_tc;
+	qdev->ops = qed_ops;
+}
+
+static void qede_print_adapter_info(struct qede_dev *qdev)
+{
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_dev_info *info = &qdev->dev_info.common;
+	char ver_str[QED_DRV_VER_STR_SIZE] = { 0 };
+
+	RTE_LOG(INFO, PMD,
+		  " Chip details : %s%d\n",
+		  ECORE_IS_BB(edev) ? "BB" : "AH",
+		  CHIP_REV_IS_A0(edev) ? 0 : 1);
+
+	sprintf(ver_str, "%s %s_%d.%d.%d", QEDE_PMD_VER_PREFIX,
+		edev->ver_str, QEDE_PMD_VERSION_MAJOR,
+		QEDE_PMD_VERSION_MINOR, QEDE_PMD_VERSION_PATCH);
+	strcpy(qdev->drv_ver, ver_str);
+	RTE_LOG(INFO, PMD, " Driver version : %s\n", ver_str);
+
+	ver_str[0] = '\0';
+	sprintf(ver_str, "%d.%d.%d.%d", info->fw_major, info->fw_minor,
+		info->fw_rev, info->fw_eng);
+	RTE_LOG(INFO, PMD, " Firmware version : %s\n", ver_str);
+
+	ver_str[0] = '\0';
+	sprintf(ver_str, "%d.%d.%d.%d",
+		(info->mfw_rev >> 24) & 0xff,
+		(info->mfw_rev >> 16) & 0xff,
+		(info->mfw_rev >> 8) & 0xff, (info->mfw_rev) & 0xff);
+	RTE_LOG(INFO, PMD, " Management firmware version : %s\n", ver_str);
+
+	RTE_LOG(INFO, PMD, " Firmware file : %s\n", QEDE_FW_FILE_NAME);
+}
+
+static int
+qede_set_ucast_rx_mac(struct qede_dev *qdev,
+		      enum qed_filter_xcast_params_type opcode,
+		      uint8_t mac[ETHER_ADDR_LEN])
+{
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_filter_params filter_cmd;
+
+	memset(&filter_cmd, 0, sizeof(filter_cmd));
+	filter_cmd.type = QED_FILTER_TYPE_UCAST;
+	filter_cmd.filter.ucast.type = opcode;
+	filter_cmd.filter.ucast.mac_valid = 1;
+	rte_memcpy(&filter_cmd.filter.ucast.mac[0], &mac[0], ETHER_ADDR_LEN);
+	return qdev->ops->filter_config(edev, &filter_cmd);
+}
+
+static void
+qede_mac_addr_add(struct rte_eth_dev *eth_dev, struct ether_addr *mac_addr,
+		  __rte_unused uint32_t index, __rte_unused uint32_t pool)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	int rc;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	DP_NOTICE(edev, false, "%s\n", __func__);
+
+	/* Skip adding macaddr if promiscuous mode is set */
+	if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1) {
+		DP_NOTICE(edev, false, "Port is in promiscuous mode\n");
+		return;
+	}
+
+	/* Add MAC filters according to the unicast secondary macs */
+	rc = qede_set_ucast_rx_mac(qdev, QED_FILTER_XCAST_TYPE_ADD,
+				   mac_addr->addr_bytes);
+	if (rc)
+		DP_ERR(edev, "Unable to add filter\n");
+}
+
+static void
+qede_mac_addr_remove(__rte_unused struct rte_eth_dev *eth_dev,
+		     __rte_unused uint32_t index)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	/* TBD: Not implemented currently because DPDK does not provide
+	 * macaddr and instead just passes the index. So pmd needs to
+	 * maintain index mapping to macaddr.
+	 */
+	DP_NOTICE(edev, false, "%s: Unsupported operation\n", __func__);
+}
+
+static void qede_config_accept_any_vlan(struct qede_dev *qdev, bool action)
+{
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_update_vport_params params;
+	int rc;
+
+	/* Proceed only if action actually needs to be performed */
+	if (qdev->accept_any_vlan == action)
+		return;
+
+	memset(&params, 0, sizeof(params));
+
+	params.vport_id = 0;
+	params.accept_any_vlan = action;
+	params.update_accept_any_vlan_flg = 1;
+
+	rc = qdev->ops->vport_update(edev, &params);
+	if (rc) {
+		DP_ERR(edev, "Failed to %s accept-any-vlan\n",
+		       action ? "enable" : "disable");
+	} else {
+		DP_INFO(edev, "%s accept-any-vlan\n",
+			action ? "enabled" : "disabled");
+		qdev->accept_any_vlan = action;
+	}
+}
+
+void qede_config_rx_mode(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	/* TODO: - QED_FILTER_TYPE_UCAST */
+	enum qed_filter_rx_mode_type accept_flags =
+			QED_FILTER_RX_MODE_TYPE_REGULAR;
+	struct qed_filter_params rx_mode;
+	int rc;
+
+	/* Configure the struct for the Rx mode */
+	memset(&rx_mode, 0, sizeof(struct qed_filter_params));
+	rx_mode.type = QED_FILTER_TYPE_RX_MODE;
+
+	rc = qede_set_ucast_rx_mac(qdev, QED_FILTER_XCAST_TYPE_REPLACE,
+				   eth_dev->data->mac_addrs[0].addr_bytes);
+	if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1) {
+		accept_flags = QED_FILTER_RX_MODE_TYPE_PROMISC;
+	} else {
+		rc = qede_set_ucast_rx_mac(qdev, QED_FILTER_XCAST_TYPE_ADD,
+					   eth_dev->data->
+					   mac_addrs[0].addr_bytes);
+		if (rc) {
+			DP_ERR(edev, "Unable to add filter\n");
+			return;
+		}
+	}
+
+	/* take care of VLAN mode */
+	if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1) {
+		qede_config_accept_any_vlan(qdev, true);
+	} else if (!qdev->non_configured_vlans) {
+		/* If we don't have non-configured VLANs and promisc
+		 * is not set, then check if we need to disable
+		 * accept_any_vlan mode.
+		 * Because in this case, accept_any_vlan mode is set
+		 * as part of IFF_PROMISC flag handling.
+		 */
+		qede_config_accept_any_vlan(qdev, false);
+	}
+	rx_mode.filter.accept_flags = accept_flags;
+	(void)qdev->ops->filter_config(edev, &rx_mode);
+}
+
+static int qede_vlan_stripping(struct rte_eth_dev *eth_dev, bool set_stripping)
+{
+	struct qed_update_vport_params vport_update_params;
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	int rc;
+
+	memset(&vport_update_params, 0, sizeof(vport_update_params));
+	vport_update_params.vport_id = 0;
+	vport_update_params.update_inner_vlan_removal_flg = 1;
+	vport_update_params.inner_vlan_removal_flg = set_stripping;
+	rc = qdev->ops->vport_update(edev, &vport_update_params);
+	if (rc) {
+		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+static void qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+			(void)qede_vlan_stripping(eth_dev, 1);
+		else
+			(void)qede_vlan_stripping(eth_dev, 0);
+	}
+
+	DP_INFO(edev, "vlan offload mask %d vlan-strip %d\n",
+		mask, eth_dev->data->dev_conf.rxmode.hw_vlan_strip);
+}
+
+static int qede_set_ucast_rx_vlan(struct qede_dev *qdev,
+				  enum qed_filter_xcast_params_type opcode,
+				  uint16_t vid)
+{
+	struct qed_filter_params filter_cmd;
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	memset(&filter_cmd, 0, sizeof(filter_cmd));
+	filter_cmd.type = QED_FILTER_TYPE_UCAST;
+	filter_cmd.filter.ucast.type = opcode;
+	filter_cmd.filter.ucast.vlan_valid = 1;
+	filter_cmd.filter.ucast.vlan = vid;
+
+	return qdev->ops->filter_config(edev, &filter_cmd);
+}
+
+static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
+				uint16_t vlan_id, int on)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qed_dev_eth_info *dev_info = &qdev->dev_info;
+	int rc;
+
+	if (vlan_id != 0 &&
+	    qdev->configured_vlans == dev_info->num_vlan_filters) {
+		DP_NOTICE(edev, false, "Reached max VLAN filter limit"
+				     " enabling accept_any_vlan\n");
+		qede_config_accept_any_vlan(qdev, true);
+		return 0;
+	}
+
+	if (on) {
+		rc = qede_set_ucast_rx_vlan(qdev, QED_FILTER_XCAST_TYPE_ADD,
+					    vlan_id);
+		if (rc)
+			DP_ERR(edev, "Failed to add VLAN %u rc %d\n", vlan_id,
+			       rc);
+		else
+			if (vlan_id != 0)
+				qdev->configured_vlans++;
+	} else {
+		rc = qede_set_ucast_rx_vlan(qdev, QED_FILTER_XCAST_TYPE_DEL,
+					    vlan_id);
+		if (rc)
+			DP_ERR(edev, "Failed to delete VLAN %u rc %d\n",
+			       vlan_id, rc);
+		else
+			if (vlan_id != 0)
+				qdev->configured_vlans--;
+	}
+
+	DP_INFO(edev, "vlan_id %u on %u rc %d configured_vlans %u\n",
+			vlan_id, on, rc, qdev->configured_vlans);
+
+	return rc;
+}
+
+static int qede_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
+	int rc = 0;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (eth_dev->data->nb_rx_queues != eth_dev->data->nb_tx_queues) {
+		DP_NOTICE(edev, false,
+			  "Unequal number of rx/tx queues "
+			  "is not supported RX=%u TX=%u\n",
+			  eth_dev->data->nb_rx_queues,
+			  eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	qdev->num_rss = eth_dev->data->nb_rx_queues;
+
+	/* Initial state */
+	qdev->state = QEDE_CLOSE;
+
+	/* Sanity checks and throw warnings */
+
+	if (rxmode->enable_scatter == 1) {
+		DP_ERR(edev, "RX scatter packets is not supported\n");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro == 1) {
+		DP_INFO(edev, "LRO is not supported\n");
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc)
+		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
+
+	if (!rxmode->hw_ip_checksum)
+		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
+			      "in hw\n");
+
+
+	DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
+		QEDE_RSS_CNT(qdev), qdev->num_tc);
+
+	DP_INFO(edev, "my_id %u rel_pf_id %u abs_pf_id %u"
+		" port %u first_on_engine %d\n",
+		edev->hwfns[0].my_id,
+		edev->hwfns[0].rel_pf_id,
+		edev->hwfns[0].abs_pf_id,
+		edev->hwfns[0].port_id, edev->hwfns[0].first_on_engine);
+
+	return 0;
+}
+
+/* Info about HW descriptor ring limitations */
+static const struct rte_eth_desc_lim qede_rx_desc_lim = {
+	.nb_max = NUM_RX_BDS_MAX,
+	.nb_min = 128,
+	.nb_align = 128		/* lowest common multiple */
+};
+
+static const struct rte_eth_desc_lim qede_tx_desc_lim = {
+	.nb_max = NUM_TX_BDS_MAX,
+	.nb_min = 256,
+	.nb_align = 256
+};
+
+static void
+qede_dev_info_get(struct rte_eth_dev *eth_dev,
+		  struct rte_eth_dev_info *dev_info)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	dev_info->min_rx_bufsize = (uint32_t)(ETHER_MIN_MTU +
+					      QEDE_ETH_OVERHEAD);
+	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+	dev_info->rx_desc_lim = qede_rx_desc_lim;
+	dev_info->tx_desc_lim = qede_tx_desc_lim;
+	/* Fix it for 8 queues for now */
+	dev_info->max_rx_queues = 8;
+	dev_info->max_tx_queues = 8;
+	dev_info->max_mac_addrs = (uint32_t)(RESC_NUM(&edev->hwfns[0],
+						      ECORE_MAC));
+	dev_info->max_vfs = (uint16_t)NUM_OF_VFS(&qdev->edev);
+	dev_info->driver_name = qdev->drv_ver;
+	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.txq_flags = QEDE_TXQ_FLAGS,
+	};
+	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_VLAN_STRIP |
+				     DEV_RX_OFFLOAD_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_UDP_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
+				     DEV_TX_OFFLOAD_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_UDP_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_CKSUM);
+}
+
+/* return 0 means link status changed, -1 means not changed */
+static int
+qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	uint16_t link_duplex;
+	struct qed_link_output link;
+	struct rte_eth_link *old = &eth_dev->data->dev_link;
+
+	memset(&link, 0, sizeof(struct qed_link_output));
+	qdev->ops->common->get_link(edev, &link);
+	if (old->link_status == link.link_up)
+		return -1;
+
+	/* Speed */
+	eth_dev->data->dev_link.link_speed = link.speed;
+
+	/* Duplex/Simplex */
+	switch (link.duplex) {
+	case QEDE_DUPLEX_HALF:
+		link_duplex = ETH_LINK_HALF_DUPLEX;
+		break;
+	case QEDE_DUPLEX_FULL:
+		link_duplex = ETH_LINK_FULL_DUPLEX;
+		break;
+	case QEDE_DUPLEX_UNKNOWN:
+	default:
+		link_duplex = -1;
+	}
+
+	eth_dev->data->dev_link.link_duplex = link_duplex;
+	eth_dev->data->dev_link.link_status = link.link_up;
+
+	/* Link state changed */
+	return 0;
+}
+
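+/* Apply an Rx accept-mode (regular/promisc/allmulti) via a filter command */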
+static void
+qede_rx_mode_setting(struct rte_eth_dev *eth_dev,
+		     enum qed_filter_rx_mode_type accept_flags)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_filter_params rx_mode;
+
+	DP_INFO(edev, "%s mode %u\n", __func__, accept_flags);
+
+	memset(&rx_mode, 0, sizeof(struct qed_filter_params));
+	rx_mode.type = QED_FILTER_TYPE_RX_MODE;
+	rx_mode.filter.accept_flags = accept_flags;
+	qdev->ops->filter_config(edev, &rx_mode);
+}
+
+static void qede_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	enum qed_filter_rx_mode_type type = QED_FILTER_RX_MODE_TYPE_PROMISC;
+
+	if (rte_eth_allmulticast_get(eth_dev->data->port_id) == 1)
+		type |= QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC;
+
+	qede_rx_mode_setting(eth_dev, type);
+}
+
+static void qede_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (rte_eth_allmulticast_get(eth_dev->data->port_id) == 1)
+		qede_rx_mode_setting(eth_dev,
+				     QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC);
+	else
+		qede_rx_mode_setting(eth_dev, QED_FILTER_RX_MODE_TYPE_REGULAR);
+}
+
+static void qede_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	/* dev_stop() shall cleanup fp resources in hw but without releasing
+	 * dma memories and sw structures so that dev_start() can be called
+	 * by the app without reconfiguration. However, in dev_close() we
+	 * can release all the resources and the device can be brought up
+	 * afresh.
+	 */
+	if (qdev->state != QEDE_STOP)
+		qede_dev_stop(eth_dev);
+	else
+		DP_INFO(edev, "Device is already stopped\n");
+
+	qede_free_mem_load(qdev);
+
+	qede_free_fp_arrays(qdev);
+
+	qede_dev_set_link_state(eth_dev, false);
+
+	qdev->ops->common->slowpath_stop(edev);
+
+	qdev->ops->common->remove(edev);
+
+	rte_intr_disable(&(eth_dev->pci_dev->intr_handle));
+
+	rte_intr_callback_unregister(&(eth_dev->pci_dev->intr_handle),
+				     qede_interrupt_handler, (void *)eth_dev);
+
+	qdev->state = QEDE_CLOSE;
+}
+
+static void
+qede_get_stats(struct rte_eth_dev *eth_dev, struct rte_eth_stats *eth_stats)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_eth_stats stats;
+
+	qdev->ops->get_vport_stats(edev, &stats);
+
+	/* RX Stats */
+	eth_stats->ipackets = stats.rx_ucast_pkts +
+	    stats.rx_mcast_pkts + stats.rx_bcast_pkts;
+
+	eth_stats->ibytes = stats.rx_ucast_bytes +
+	    stats.rx_mcast_bytes + stats.rx_bcast_bytes;
+
+	eth_stats->imcasts = stats.rx_mcast_pkts;
+
+	eth_stats->ierrors = stats.rx_crc_errors +
+	    stats.rx_align_errors +
+	    stats.rx_carrier_errors +
+	    stats.rx_oversize_packets +
+	    stats.rx_jabbers + stats.rx_undersize_packets;
+
+	eth_stats->rx_nombuf = stats.no_buff_discards;
+
+	eth_stats->imissed = stats.mftag_filter_discards +
+	    stats.mac_filter_discards +
+	    stats.no_buff_discards + stats.brb_truncates + stats.brb_discards;
+
+	/* TX stats */
+	eth_stats->opackets = stats.tx_ucast_pkts +
+	    stats.tx_mcast_pkts + stats.tx_bcast_pkts;
+
+	eth_stats->obytes = stats.tx_ucast_bytes +
+	    stats.tx_mcast_bytes + stats.tx_bcast_bytes;
+
+	eth_stats->oerrors = stats.tx_err_drop_pkts;
+
+	DP_INFO(edev,
+		"no_buff_discards=%" PRIu64 ""
+		" mac_filter_discards=%" PRIu64 ""
+		" brb_truncates=%" PRIu64 ""
+		" brb_discards=%" PRIu64 "\n",
+		stats.no_buff_discards,
+		stats.mac_filter_discards,
+		stats.brb_truncates, stats.brb_discards);
+}
+
+int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qed_link_params link_params;
+	int rc;
+
+	DP_INFO(edev, "setting link state %d\n", link_up);
+	memset(&link_params, 0, sizeof(link_params));
+	link_params.link_up = link_up;
+	rc = qdev->ops->common->set_link(edev, &link_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(edev, "Unable to set link state %d\n", link_up);
+
+	return rc;
+}
+
+static int qede_dev_set_link_up(struct rte_eth_dev *eth_dev)
+{
+	return qede_dev_set_link_state(eth_dev, true);
+}
+
+static int qede_dev_set_link_down(struct rte_eth_dev *eth_dev)
+{
+	return qede_dev_set_link_state(eth_dev, false);
+}
+
+static void qede_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+	enum qed_filter_rx_mode_type type =
+	    QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC;
+
+	if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1)
+		type |= QED_FILTER_RX_MODE_TYPE_PROMISC;
+
+	qede_rx_mode_setting(eth_dev, type);
+}
+
+static void qede_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1)
+		qede_rx_mode_setting(eth_dev, QED_FILTER_RX_MODE_TYPE_PROMISC);
+	else
+		qede_rx_mode_setting(eth_dev, QED_FILTER_RX_MODE_TYPE_REGULAR);
+}
+
+static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+			      struct rte_eth_fc_conf *fc_conf)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qed_link_output current_link;
+	struct qed_link_params params;
+
+	memset(&current_link, 0, sizeof(current_link));
+	qdev->ops->common->get_link(edev, &current_link);
+
+	memset(&params, 0, sizeof(params));
+	params.override_flags |= QED_LINK_OVERRIDE_PAUSE_CONFIG;
+	if (fc_conf->autoneg) {
+		if (!(current_link.supported_caps & QEDE_SUPPORTED_AUTONEG)) {
+			DP_ERR(edev, "Autoneg not supported\n");
+			return -EINVAL;
+		}
+		params.pause_config |= QED_LINK_PAUSE_AUTONEG_ENABLE;
+	}
+
+	/* Pause is assumed to be supported (SUPPORTED_Pause) */
+	if (fc_conf->mode == RTE_FC_FULL)
+		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
+					QED_LINK_PAUSE_RX_ENABLE);
+	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
+	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
+
+	params.link_up = true;
+	(void)qdev->ops->common->set_link(edev, &params);
+
+	return 0;
+}
+
+static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+			      struct rte_eth_fc_conf *fc_conf)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qed_link_output current_link;
+
+	memset(&current_link, 0, sizeof(current_link));
+	qdev->ops->common->get_link(edev, &current_link);
+
+	if (current_link.pause_config & QED_LINK_PAUSE_AUTONEG_ENABLE)
+		fc_conf->autoneg = true;
+
+	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
+					 QED_LINK_PAUSE_TX_ENABLE))
+		fc_conf->mode = RTE_FC_FULL;
+	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
+		fc_conf->mode = RTE_FC_RX_PAUSE;
+	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+	else
+		fc_conf->mode = RTE_FC_NONE;
+
+	return 0;
+}
+
+static struct eth_dev_ops qede_eth_dev_ops = {
+	.dev_configure = qede_dev_configure,
+	.dev_infos_get = qede_dev_info_get,
+	.rx_queue_setup = qede_rx_queue_setup,
+	.rx_queue_release = qede_rx_queue_release,
+	.tx_queue_setup = qede_tx_queue_setup,
+	.tx_queue_release = qede_tx_queue_release,
+	.dev_start = qede_dev_start,
+	.dev_set_link_up = qede_dev_set_link_up,
+	.dev_set_link_down = qede_dev_set_link_down,
+	.link_update = qede_link_update,
+	.promiscuous_enable = qede_promiscuous_enable,
+	.promiscuous_disable = qede_promiscuous_disable,
+	.allmulticast_enable = qede_allmulticast_enable,
+	.allmulticast_disable = qede_allmulticast_disable,
+	.dev_stop = qede_dev_stop,
+	.dev_close = qede_dev_close,
+	.stats_get = qede_get_stats,
+	.mac_addr_add = qede_mac_addr_add,
+	.mac_addr_remove = qede_mac_addr_remove,
+	.vlan_offload_set = qede_vlan_offload_set,
+	.vlan_filter_set = qede_vlan_filter_set,
+	.flow_ctrl_set = qede_flow_ctrl_set,
+	.flow_ctrl_get = qede_flow_ctrl_get,
+};
+
+static void qede_update_pf_params(struct ecore_dev *edev)
+{
+	struct ecore_pf_params pf_params;
+	/* 16 rx + 16 tx */
+	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
+	pf_params.eth_pf_params.num_cons = 32;
+	qed_ops->common->update_pf_params(edev, &pf_params);
+}
+
+static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
+{
+	struct rte_pci_device *pci_dev;
+	struct rte_pci_addr pci_addr;
+	struct qede_dev *adapter;
+	struct ecore_dev *edev;
+	struct qed_dev_eth_info dev_info;
+	struct qed_slowpath_params params;
+	uint32_t qed_ver;
+	static bool do_once = true;
+	uint8_t bulletin_change;
+	uint8_t vf_mac[ETHER_ADDR_LEN];
+	uint8_t is_mac_forced;
+	bool is_mac_exist;
+	/* Fix up ecore debug level */
+	uint32_t dp_module = ~0 & ~ECORE_MSG_HW;
+	uint8_t dp_level = ECORE_LEVEL_VERBOSE;
+	int rc;
+
+	/* Extract key data structures */
+	adapter = eth_dev->data->dev_private;
+	edev = &adapter->edev;
+	pci_addr = eth_dev->pci_dev->addr;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	snprintf(edev->name, NAME_SIZE, PCI_SHORT_PRI_FMT ":dpdk-port-%u",
+		 pci_addr.bus, pci_addr.devid, pci_addr.function,
+		 eth_dev->data->port_id);
+
+	eth_dev->rx_pkt_burst = qede_recv_pkts;
+	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		DP_NOTICE(edev, false,
+			  "Skipping device init from secondary process\n");
+		return 0;
+	}
+
+	pci_dev = eth_dev->pci_dev;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	/* qed_get_protocol_version() is provided by the L2 ops layer
+	 * (qede_eth_if) added earlier in this series.
+	 */
+	qed_ver = qed_get_protocol_version(QED_PROTOCOL_ETH);
+	if (qed_ver != QEDE_ETH_INTERFACE_VERSION) {
+		DP_ERR(edev, "Version mismatch [%08x != %08x]\n",
+		       qed_ver, QEDE_ETH_INTERFACE_VERSION);
+		return -EINVAL;
+	}
+
+	DP_INFO(edev, "Starting qede probe\n");
+
+	rc = qed_ops->common->probe(edev, pci_dev, QED_PROTOCOL_ETH,
+				    dp_module, dp_level, is_vf);
+
+	if (rc != 0) {
+		DP_ERR(edev, "qede probe failed rc 0x%x\n", rc);
+		return -ENODEV;
+	}
+
+	qede_update_pf_params(edev);
+
+	rte_intr_callback_register(&(eth_dev->pci_dev->intr_handle),
+				   qede_interrupt_handler, (void *)eth_dev);
+
+	if (rte_intr_enable(&(eth_dev->pci_dev->intr_handle))) {
+		DP_ERR(edev, "rte_intr_enable() failed\n");
+		return -ENODEV;
+	}
+
+	/* Start the Slowpath-process */
+	memset(&params, 0, sizeof(struct qed_slowpath_params));
+	params.int_mode = ECORE_INT_MODE_MSIX;
+	params.drv_major = QEDE_MAJOR_VERSION;
+	params.drv_minor = QEDE_MINOR_VERSION;
+	params.drv_rev = QEDE_REVISION_VERSION;
+	params.drv_eng = QEDE_ENGINEERING_VERSION;
+	strncpy((char *)params.name, "qede LAN", QED_DRV_VER_STR_SIZE);
+
+	rc = qed_ops->common->slowpath_start(edev, &params);
+	if (rc) {
+		DP_ERR(edev, "Cannot start slowpath rc=0x%x\n", rc);
+		return -ENODEV;
+	}
+
+	rc = qed_ops->fill_dev_info(edev, &dev_info);
+	if (rc) {
+		DP_ERR(edev, "Cannot get device_info rc=0x%x\n", rc);
+		qed_ops->common->slowpath_stop(edev);
+		qed_ops->common->remove(edev);
+		return -ENODEV;
+	}
+
+	qede_alloc_etherdev(adapter, &dev_info);
+
+	adapter->ops->common->set_id(edev, edev->name, QEDE_DRV_MODULE_VERSION);
+
+	/* Allocate memory for storing primary macaddr */
+	eth_dev->data->mac_addrs = rte_zmalloc(edev->name, ETHER_ADDR_LEN,
+					       RTE_CACHE_LINE_SIZE);
+
+	if (eth_dev->data->mac_addrs == NULL) {
+		DP_ERR(edev, "Failed to allocate MAC address\n");
+		qed_ops->common->slowpath_stop(edev);
+		qed_ops->common->remove(edev);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy((struct ether_addr *)edev->hwfns[0].hw_info.hw_mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	eth_dev->dev_ops = &qede_eth_dev_ops;
+
+	if (do_once) {
+		qede_print_adapter_info(adapter);
+		do_once = false;
+	}
+
+	DP_NOTICE(edev, false, "macaddr %02x:%02x:%02x:%02x:%02x:%02x\n",
+		  eth_dev->data->mac_addrs[0].addr_bytes[0],
+		  eth_dev->data->mac_addrs[0].addr_bytes[1],
+		  eth_dev->data->mac_addrs[0].addr_bytes[2],
+		  eth_dev->data->mac_addrs[0].addr_bytes[3],
+		  eth_dev->data->mac_addrs[0].addr_bytes[4],
+		  eth_dev->data->mac_addrs[0].addr_bytes[5]);
+
+	return rc;
+}
+
+static int qedevf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	return qede_common_dev_init(eth_dev, true);
+}
+
+static int qede_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	return qede_common_dev_init(eth_dev, false);
+}
+
+static int qede_dev_common_uninit(struct rte_eth_dev *eth_dev)
+{
+	/* only uninitialize in the primary process */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	/* safe to close dev here */
+	qede_dev_close(eth_dev);
+
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
+
+	if (eth_dev->data->mac_addrs)
+		rte_free(eth_dev->data->mac_addrs);
+
+	eth_dev->data->mac_addrs = NULL;
+
+	return 0;
+}
+
+static int qede_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	return qede_dev_common_uninit(eth_dev);
+}
+
+static int qedevf_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	return qede_dev_common_uninit(eth_dev);
+}
+
+static struct rte_pci_id pci_id_qedevf_map[] = {
+#define QEDEVF_RTE_PCI_DEVICE(dev) RTE_PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, dev)
+	{
+		QEDEVF_RTE_PCI_DEVICE(PCI_DEVICE_ID_NX2_VF)
+	},
+	{
+		QEDEVF_RTE_PCI_DEVICE(PCI_DEVICE_ID_57980S_IOV)
+	},
+	{.vendor_id = 0,}
+};
+
+static struct rte_pci_id pci_id_qede_map[] = {
+#define QEDE_RTE_PCI_DEVICE(dev) RTE_PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, dev)
+	{
+		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_NX2_57980E)
+	},
+	{
+		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_NX2_57980S)
+	},
+	{
+		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_57980S_40)
+	},
+	{
+		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_57980S_25)
+	},
+	{.vendor_id = 0,}
+};
+
+static struct eth_driver rte_qedevf_pmd = {
+	.pci_drv = {
+		    .name = "rte_qedevf_pmd",
+		    .id_table = pci_id_qedevf_map,
+		    .drv_flags =
+		    RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+		    },
+	.eth_dev_init = qedevf_eth_dev_init,
+	.eth_dev_uninit = qedevf_eth_dev_uninit,
+	.dev_private_size = sizeof(struct qede_dev),
+};
+
+static struct eth_driver rte_qede_pmd = {
+	.pci_drv = {
+		    .name = "rte_qede_pmd",
+		    .id_table = pci_id_qede_map,
+		    .drv_flags =
+		    RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+		    },
+	.eth_dev_init = qede_eth_dev_init,
+	.eth_dev_uninit = qede_eth_dev_uninit,
+	.dev_private_size = sizeof(struct qede_dev),
+};
+
+static int
+rte_qedevf_pmd_init(const char *name __rte_unused,
+		    const char *params __rte_unused)
+{
+	rte_eth_driver_register(&rte_qedevf_pmd);
+
+	return 0;
+}
+
+static int
+rte_qede_pmd_init(const char *name __rte_unused,
+		  const char *params __rte_unused)
+{
+	rte_eth_driver_register(&rte_qede_pmd);
+
+	return 0;
+}
+
+static struct rte_driver rte_qedevf_driver = {
+	.type = PMD_PDEV,
+	.init = rte_qedevf_pmd_init
+};
+
+static struct rte_driver rte_qede_driver = {
+	.type = PMD_PDEV,
+	.init = rte_qede_pmd_init
+};
+
+PMD_REGISTER_DRIVER(rte_qede_driver);
+PMD_REGISTER_DRIVER(rte_qedevf_driver);
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
new file mode 100644
index 0000000..3d90b23
--- /dev/null
+++ b/drivers/net/qede/qede_ethdev.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+
+#ifndef _QEDE_ETHDEV_H_
+#define _QEDE_ETHDEV_H_
+
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+/* ecore includes */
+#include "base/bcm_osal.h"
+#include "base/ecore.h"
+#include "base/ecore_dev_api.h"
+#include "base/ecore_sp_api.h"
+#include "base/ecore_mcp_api.h"
+#include "base/ecore_hsi_common.h"
+#include "base/ecore_int_api.h"
+#include "base/ecore_chain.h"
+#include "base/ecore_status.h"
+#include "base/ecore_hsi_eth.h"
+#include "base/ecore_dev_api.h"
+
+#include "qede_logs.h"
+#include "qede_if.h"
+#include "qede_eth_if.h"
+
+#include "qede_rxtx.h"
+
+#define qede_stringify1(x...)		#x
+#define qede_stringify(x...)		qede_stringify1(x)
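+/*
+ * Two-level expansion: qede_stringify(QEDE_MAJOR_VERSION) first expands
+ * the argument (to 8) and then stringifies it ("8"); a single level of
+ * # would yield the literal "QEDE_MAJOR_VERSION" instead.
+ */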
+
+/* Driver versions */
+#define QEDE_PMD_VER_PREFIX		"QEDE PMD"
+#define QEDE_PMD_VERSION_MAJOR		1
+#define QEDE_PMD_VERSION_MINOR		0
+#define QEDE_PMD_VERSION_PATCH		0
+
+#define QEDE_MAJOR_VERSION		8
+#define QEDE_MINOR_VERSION		7
+#define QEDE_REVISION_VERSION		9
+#define QEDE_ENGINEERING_VERSION	0
+
+#define QEDE_DRV_MODULE_VERSION qede_stringify(QEDE_MAJOR_VERSION) "."	\
+		qede_stringify(QEDE_MINOR_VERSION) "."			\
+		qede_stringify(QEDE_REVISION_VERSION) "."		\
+		qede_stringify(QEDE_ENGINEERING_VERSION)
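+/* With the values above, this expands to the version string "8.7.9.0" */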
+
+#define QEDE_RSS_INDIR_INITED     (1 << 0)
+#define QEDE_RSS_KEY_INITED       (1 << 1)
+#define QEDE_RSS_CAPS_INITED      (1 << 2)
+
+#define QEDE_MAX_RSS_CNT(edev)  ((edev)->dev_info.num_queues)
+#define QEDE_MAX_TSS_CNT(edev)  ((edev)->dev_info.num_queues * \
+					(edev)->dev_info.num_tc)
+
+#define QEDE_RSS_CNT(edev)	((edev)->num_rss)
+#define QEDE_TSS_CNT(edev)	((edev)->num_rss * (edev)->num_tc)
+
+#define QEDE_DUPLEX_FULL	1
+#define QEDE_DUPLEX_HALF	2
+#define QEDE_DUPLEX_UNKNOWN     0xff
+
+#define QEDE_SUPPORTED_AUTONEG (1 << 6)
+#define QEDE_SUPPORTED_PAUSE   (1 << 13)
+
+#define QEDE_INIT_QDEV(eth_dev) (eth_dev->data->dev_private)
+
+#define QEDE_INIT_EDEV(adapter) (&((struct qede_dev *)adapter)->edev)
+
+#define QEDE_INIT(eth_dev) {					\
+	struct qede_dev *qdev = eth_dev->data->dev_private;	\
+	struct ecore_dev *edev = &qdev->edev;			\
+}
+
+/************* QLogic 25G/40G vendor/devices ids *************/
+#define PCI_VENDOR_ID_QLOGIC            0x1077
+
+#define CHIP_NUM_57980E                 0x1634
+#define CHIP_NUM_57980S                 0x1629
+#define CHIP_NUM_VF                     0x1630
+#define CHIP_NUM_57980S_40              0x1634
+#define CHIP_NUM_57980S_25              0x1656
+#define CHIP_NUM_57980S_IOV             0x1664
+
+#define PCI_DEVICE_ID_NX2_57980E        CHIP_NUM_57980E
+#define PCI_DEVICE_ID_NX2_57980S        CHIP_NUM_57980S
+#define PCI_DEVICE_ID_NX2_VF            CHIP_NUM_VF
+#define PCI_DEVICE_ID_57980S_40         CHIP_NUM_57980S_40
+#define PCI_DEVICE_ID_57980S_25         CHIP_NUM_57980S_25
+#define PCI_DEVICE_ID_57980S_IOV        CHIP_NUM_57980S_IOV
+
+extern const char *QEDE_FW_FILE_NAME;
+
+/* Port/function states */
+enum dev_state {
+	QEDE_START,
+	QEDE_STOP,
+	QEDE_CLOSE
+};
+
+struct qed_int_param {
+	uint32_t int_mode;
+	uint8_t num_vectors;
+	uint8_t min_msix_cnt;
+};
+
+struct qed_int_params {
+	struct qed_int_param in;
+	struct qed_int_param out;
+	bool fp_initialized;
+};
+
+/*
+ *  Structure to store private data for each port.
+ */
+struct qede_dev {
+	struct ecore_dev edev;
+	uint8_t protocol;
+	const struct qed_eth_ops *ops;
+	struct qed_dev_eth_info dev_info;
+	struct ecore_sb_info *sb_array;
+	struct qede_fastpath *fp_array;
+	uint16_t num_rss;
+	uint8_t num_tc;
+	uint16_t mtu;
+	uint32_t rss_params_inited;
+	struct qed_update_vport_rss_params rss_params;
+	uint32_t flags;
+	bool gro_disable;
+	struct qede_rx_queue **rx_queues;
+	struct qede_tx_queue **tx_queues;
+	enum dev_state state;
+
+	/* Vlans */
+	osal_list_t vlan_list;
+	uint16_t configured_vlans;
+	uint16_t non_configured_vlans;
+	bool accept_any_vlan;
+	uint16_t vxlan_dst_port;
+
+	bool handle_hw_err;
+	char drv_ver[QED_DRV_VER_STR_SIZE];
+};
+
+int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
+void qede_config_rx_mode(struct rte_eth_dev *eth_dev);
+
+#endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
new file mode 100644
index 0000000..935eed8
--- /dev/null
+++ b/drivers/net/qede/qede_if.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef _QEDE_IF_H
+#define _QEDE_IF_H
+
+#include "qede_ethdev.h"
+
+/* forward */
+struct ecore_dev;
+struct qed_sb_info;
+struct qed_pf_params;
+enum ecore_int_mode;
+
+struct qed_dev_info {
+	uint8_t num_hwfns;
+	uint8_t hw_mac[ETHER_ADDR_LEN];
+	bool is_mf_default;
+
+	/* FW version */
+	uint16_t fw_major;
+	uint16_t fw_minor;
+	uint16_t fw_rev;
+	uint16_t fw_eng;
+
+	/* MFW version */
+	uint32_t mfw_rev;
+
+	uint32_t flash_size;
+	uint8_t mf_mode;
+	bool tx_switching;
+	/* To be added... */
+};
+
+enum qed_sb_type {
+	QED_SB_TYPE_L2_QUEUE,
+	QED_SB_TYPE_STORAGE,
+	QED_SB_TYPE_CNQ,
+};
+
+enum qed_protocol {
+	QED_PROTOCOL_ETH,
+};
+
+struct qed_link_params {
+	bool link_up;
+
+#define QED_LINK_OVERRIDE_SPEED_AUTONEG         (1 << 0)
+#define QED_LINK_OVERRIDE_SPEED_ADV_SPEEDS      (1 << 1)
+#define QED_LINK_OVERRIDE_SPEED_FORCED_SPEED    (1 << 2)
+#define QED_LINK_OVERRIDE_PAUSE_CONFIG          (1 << 3)
+	uint32_t override_flags;
+	bool autoneg;
+	uint32_t adv_speeds;
+	uint32_t forced_speed;
+#define QED_LINK_PAUSE_AUTONEG_ENABLE           (1 << 0)
+#define QED_LINK_PAUSE_RX_ENABLE                (1 << 1)
+#define QED_LINK_PAUSE_TX_ENABLE                (1 << 2)
+	uint32_t pause_config;
+};
+
+struct qed_link_output {
+	bool link_up;
+	uint32_t supported_caps;	/* In SUPPORTED defs */
+	uint32_t advertised_caps;	/* In ADVERTISED defs */
+	uint32_t lp_caps;	/* In ADVERTISED defs */
+	uint32_t speed;		/* In Mb/s */
+	uint8_t duplex;		/* In DUPLEX defs */
+	uint8_t port;		/* In PORT defs */
+	bool autoneg;
+	uint32_t pause_config;
+};
+
+#define QED_DRV_VER_STR_SIZE 80
+struct qed_slowpath_params {
+	uint32_t int_mode;
+	uint8_t drv_major;
+	uint8_t drv_minor;
+	uint8_t drv_rev;
+	uint8_t drv_eng;
+	uint8_t name[QED_DRV_VER_STR_SIZE];
+};
+
+#define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
+
+struct qed_common_cb_ops {
+	void (*link_update)(void *dev, struct qed_link_output *link);
+};
+
+struct qed_selftest_ops {
+/**
+ * @brief registers - Perform register tests
+ *
+ * @param edev
+ *
+ * @return 0 on success, error otherwise.
+ */
+	int (*registers)(struct ecore_dev *edev);
+};
+
+struct qed_common_ops {
+	int (*probe)(struct ecore_dev *edev,
+		     struct rte_pci_device *pci_dev,
+		     enum qed_protocol protocol,
+		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
+	void (*set_id)(struct ecore_dev *edev,
+		char name[], const char ver_str[]);
+	enum _ecore_status_t (*chain_alloc)(struct ecore_dev *edev,
+					    enum ecore_chain_use_mode
+					    intended_use,
+					    enum ecore_chain_mode mode,
+					    enum ecore_chain_cnt_type cnt_type,
+					    uint32_t num_elems,
+					    osal_size_t elem_size,
+					    struct ecore_chain *p_chain);
+
+	void (*chain_free)(struct ecore_dev *edev,
+			   struct ecore_chain *p_chain);
+
+	void (*get_link)(struct ecore_dev *edev,
+			 struct qed_link_output *if_link);
+	int (*set_link)(struct ecore_dev *edev,
+			struct qed_link_params *params);
+
+	int (*drain)(struct ecore_dev *edev);
+
+	void (*remove)(struct ecore_dev *edev);
+
+	int (*slowpath_stop)(struct ecore_dev *edev);
+
+	void (*update_pf_params)(struct ecore_dev *edev,
+				 struct ecore_pf_params *params);
+
+	int (*slowpath_start)(struct ecore_dev *edev,
+			      struct qed_slowpath_params *params);
+
+	int (*set_fp_int)(struct ecore_dev *edev, uint16_t cnt);
+
+	uint32_t (*sb_init)(struct ecore_dev *edev,
+			    struct ecore_sb_info *sb_info,
+			    void *sb_virt_addr,
+			    dma_addr_t sb_phy_addr,
+			    uint16_t sb_id, enum qed_sb_type type);
+
+	bool (*can_link_change)(struct ecore_dev *edev);
+	void (*update_msglvl)(struct ecore_dev *edev,
+			      uint32_t dp_module, uint8_t dp_level);
+};
+
+#endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_logs.h b/drivers/net/qede/qede_logs.h
new file mode 100644
index 0000000..46a54e1
--- /dev/null
+++ b/drivers/net/qede/qede_logs.h
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef _QEDE_LOGS_H_
+#define _QEDE_LOGS_H_
+
+#define DP_ERR(p_dev, fmt, ...) \
+	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, \
+		"[%s:%d(%s)]" fmt, \
+		  __func__, __LINE__, \
+		(p_dev)->name ? (p_dev)->name : "", \
+		##__VA_ARGS__)
+
+#define DP_NOTICE(p_dev, is_assert, fmt, ...) \
+do {  \
+	rte_log(RTE_LOG_NOTICE, RTE_LOGTYPE_PMD,\
+		"[QEDE PMD: (%s)]%s:" fmt, \
+		(p_dev)->name ? (p_dev)->name : "", \
+		 __func__, \
+		##__VA_ARGS__); \
+	OSAL_ASSERT(!is_assert); \
+} while (0)
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_INFO
+
+#define DP_INFO(p_dev, fmt, ...) \
+	rte_log(RTE_LOG_INFO, RTE_LOGTYPE_PMD, \
+		"[%s:%d(%s)]" fmt, \
+		__func__, __LINE__, \
+		(p_dev)->name ? (p_dev)->name : "", \
+		##__VA_ARGS__)
+#else
+#define DP_INFO(p_dev, fmt, ...) do { } while (0)
+
+#endif
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_ECORE
+#define DP_VERBOSE(p_dev, module, fmt, ...) \
+do { \
+	if ((p_dev)->dp_module & module) \
+		rte_log(RTE_LOG_DEBUG, RTE_LOGTYPE_PMD, \
+			"[%s:%d(%s)]" fmt, \
+		      __func__, __LINE__, \
+		      (p_dev)->name ? (p_dev)->name : "", \
+		      ##__VA_ARGS__); \
+} while (0)
+#else
+#define DP_VERBOSE(p_dev, module, fmt, ...) do { } while (0)
+#endif
+
+#define PMD_INIT_LOG(level, edev, fmt, args...)	\
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"[qede_pmd: %s] %s() " fmt "\n", \
+	(edev)->name, __func__, ##args)
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE(edev) PMD_INIT_LOG(DEBUG, edev, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE(edev) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+#define PMD_TX_LOG(level, q, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): port=%u queue=%u " fmt "\n", \
+		__func__, q->port_id, q->queue_id, ## args)
+#else
+#define PMD_TX_LOG(level, q, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_RX
+#define PMD_RX_LOG(level, q, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): port=%u queue=%u " fmt "\n",	\
+		__func__, q->port_id, q->queue_id, ## args)
+#else
+#define PMD_RX_LOG(level, q, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QEDE_LOGS_H_ */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
new file mode 100644
index 0000000..7a1b986
--- /dev/null
+++ b/drivers/net/qede/qede_main.c
@@ -0,0 +1,548 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <zlib.h>
+
+#include "qede_ethdev.h"
+
+
+static uint8_t npar_tx_switching = 1;
+
+#define CONFIG_QED_BINARY_FW
+
+#ifdef RTE_LIBRTE_QEDE_TX_SWITCHING
+static uint8_t tx_switching = 1;
+#else
+static uint8_t tx_switching;
+#endif
+
+#ifndef RTE_LIBRTE_QEDE_FW
+const char *QEDE_FW_FILE_NAME =
+	"/lib/firmware/qed/qed_init_values_zipped-8.7.7.0.bin";
+#else
+const char *QEDE_FW_FILE_NAME = RTE_LIBRTE_QEDE_FW;
+#endif
+
+static void
+qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
+{
+	int i;
+
+	for (i = 0; i < edev->num_hwfns; i++) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+
+		p_hwfn->pf_params = *params;
+	}
+}
+
+static void qed_init_pci(struct ecore_dev *edev, struct rte_pci_device *pci_dev)
+{
+	edev->regview = pci_dev->mem_resource[0].addr;
+	edev->doorbells = pci_dev->mem_resource[2].addr;
+}
+
+static int
+qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
+	  enum qed_protocol protocol, uint32_t dp_module,
+	  uint8_t dp_level, bool is_vf)
+{
+	struct qede_dev *qdev = (struct qede_dev *)edev;
+	int rc;
+
+	ecore_init_struct(edev);
+	qdev->protocol = protocol;
+	if (is_vf) {
+		edev->b_is_vf = true;
+		edev->sriov_info.b_hw_channel = true;
+	}
+	ecore_init_dp(edev, dp_module, dp_level, NULL);
+	qed_init_pci(edev, pci_dev);
+	rc = ecore_hw_prepare(edev, ECORE_PCI_DEFAULT);
+	if (rc) {
+		DP_ERR(edev, "hw prepare failed\n");
+		return rc;
+	}
+
+	return rc;
+}
+
+static int qed_nic_setup(struct ecore_dev *edev)
+{
+	int rc;
+
+	rc = ecore_resc_alloc(edev);
+	if (rc)
+		return rc;
+
+	DP_INFO(edev, "Allocated qed resources\n");
+	ecore_resc_setup(edev);
+
+	return rc;
+}
+
+static int qed_alloc_stream_mem(struct ecore_dev *edev)
+{
+	int i;
+
+	for_each_hwfn(edev, i) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+
+		p_hwfn->stream = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					     sizeof(*p_hwfn->stream));
+		if (!p_hwfn->stream)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void qed_free_stream_mem(struct ecore_dev *edev)
+{
+	int i;
+
+	for_each_hwfn(edev, i) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+
+		if (!p_hwfn->stream)
+			return;
+
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->stream);
+	}
+}
+
+static int qed_load_firmware_data(struct ecore_dev *edev)
+{
+	int fd;
+	struct stat st;
+
+	fd = open(QEDE_FW_FILE_NAME, O_RDONLY);
+	if (fd < 0) {
+		DP_NOTICE(edev, false, "Can't open firmware file\n");
+		return -ENOENT;
+	}
+
+	if (fstat(fd, &st) < 0) {
+		DP_NOTICE(edev, false, "Can't stat firmware file\n");
+		close(fd);
+		return -1;
+	}
+
+	edev->firmware = rte_zmalloc("qede_fw", st.st_size,
+				    RTE_CACHE_LINE_SIZE);
+	if (!edev->firmware) {
+		DP_NOTICE(edev, false, "Can't allocate memory for firmware\n");
+		close(fd);
+		return -ENOMEM;
+	}
+
+	if (read(fd, edev->firmware, st.st_size) != st.st_size) {
+		DP_NOTICE(edev, false, "Can't read firmware data\n");
+		rte_free(edev->firmware);
+		edev->firmware = NULL;
+		close(fd);
+		return -1;
+	}
+	close(fd);
+
+	edev->fw_len = st.st_size;
+	if (edev->fw_len < 104) {
+		DP_NOTICE(edev, false, "Invalid fw size: %" PRIu64 "\n",
+			  edev->fw_len);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int qed_slowpath_start(struct ecore_dev *edev,
+			      struct qed_slowpath_params *params)
+{
+	bool allow_npar_tx_switching;
+	const uint8_t *data = NULL;
+	struct ecore_hwfn *hwfn;
+	struct ecore_mcp_drv_version drv_version;
+	int rc;
+#ifdef QED_ENC_SUPPORTED
+	struct ecore_tunn_start_params tunn_info;
+#endif
+
+#ifdef CONFIG_QED_BINARY_FW
+	rc = qed_load_firmware_data(edev);
+	if (rc) {
+		DP_NOTICE(edev, true,
+			  "Failed to find fw file %s\n",
+			  QEDE_FW_FILE_NAME);
+		goto err;
+	}
+#endif
+
+	rc = qed_nic_setup(edev);
+	if (rc)
+		goto err;
+
+	/* set int_coalescing_mode */
+	edev->int_coalescing_mode = ECORE_COAL_MODE_ENABLE;
+
+	/* Should go with CONFIG_QED_BINARY_FW */
+	/* Allocate stream for unzipping */
+	rc = qed_alloc_stream_mem(edev);
+	if (rc) {
+		DP_NOTICE(edev, true,
+		"Failed to allocate stream memory\n");
+		goto err2;
+	}
+
+	/* Start the slowpath */
+#ifdef CONFIG_QED_BINARY_FW
+	data = edev->firmware;
+#endif
+	allow_npar_tx_switching = npar_tx_switching ? true : false;
+
+#ifdef QED_ENC_SUPPORTED
+	memset(&tunn_info, 0, sizeof(tunn_info));
+	tunn_info.tunn_mode |= 1 << QED_MODE_VXLAN_TUNN |
+	    1 << QED_MODE_L2GRE_TUNN |
+	    1 << QED_MODE_IPGRE_TUNN |
+	    1 << QED_MODE_L2GENEVE_TUNN | 1 << QED_MODE_IPGENEVE_TUNN;
+	tunn_info.tunn_clss_vxlan = QED_TUNN_CLSS_MAC_VLAN;
+	tunn_info.tunn_clss_l2gre = QED_TUNN_CLSS_MAC_VLAN;
+	tunn_info.tunn_clss_ipgre = QED_TUNN_CLSS_MAC_VLAN;
+	rc = ecore_hw_init(edev, &tunn_info, true, ECORE_INT_MODE_MSIX,
+			   allow_npar_tx_switching, data);
+#else
+	rc = ecore_hw_init(edev, NULL, true, ECORE_INT_MODE_MSIX,
+			   allow_npar_tx_switching, data);
+#endif
+	if (rc) {
+		DP_ERR(edev, "ecore_hw_init failed\n");
+		goto err2;
+	}
+
+	DP_INFO(edev, "HW inited and function started\n");
+
+	hwfn = ECORE_LEADING_HWFN(edev);
+	drv_version.version = (params->drv_major << 24) |
+		    (params->drv_minor << 16) |
+		    (params->drv_rev << 8) | (params->drv_eng);
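+	/* e.g. driver version 8.7.9.0 is packed as 0x08070900 */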
+	/* TBD: strlcpy() */
+	strncpy((char *)drv_version.name, (const char *)params->name,
+			MCP_DRV_VER_STR_SIZE - 4);
+	rc = ecore_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
+						&drv_version);
+	if (rc) {
+		DP_NOTICE(edev, true,
+			  "Failed sending drv version command\n");
+		goto err3;
+	}
+
+	return 0;
+
+err3:
+	ecore_hw_stop(edev);
+err2:
+	ecore_resc_free(edev);
+err:
+#ifdef CONFIG_QED_BINARY_FW
+	if (edev->firmware)
+		rte_free(edev->firmware);
+	edev->firmware = NULL;
+#endif
+	return rc;
+}
+
+static int
+qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
+{
+	struct ecore_ptt *ptt = NULL;
+
+	memset(dev_info, 0, sizeof(struct qed_dev_info));
+	dev_info->num_hwfns = edev->num_hwfns;
+	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
+	       ETHER_ADDR_LEN);
+
+	dev_info->fw_major = FW_MAJOR_VERSION;
+	dev_info->fw_minor = FW_MINOR_VERSION;
+	dev_info->fw_rev = FW_REVISION_VERSION;
+	dev_info->fw_eng = FW_ENGINEERING_VERSION;
+	dev_info->mf_mode = edev->mf_mode;
+	dev_info->tx_switching = tx_switching ? true : false;
+
+	ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
+	if (ptt) {
+		ecore_mcp_get_mfw_ver(edev, ptt,
+					      &dev_info->mfw_rev, NULL);
+
+		ecore_mcp_get_flash_size(ECORE_LEADING_HWFN(edev), ptt,
+						 &dev_info->flash_size);
+
+		/* Workaround to allow PHY-read commands for
+		 * B0 bringup.
+		 */
+		if (ECORE_IS_BB_B0(edev))
+			dev_info->flash_size = 0xffffffff;
+
+		ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
+	}
+
+	return 0;
+}
+
+int
+qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
+{
+	int i;
+
+	memset(info, 0, sizeof(*info));
+
+	info->num_tc = 1; /* @@@TBD aelior MULTI_COS */
+
+	info->num_queues = 0;
+	for_each_hwfn(edev, i)
+		info->num_queues +=
+			FEAT_NUM(&edev->hwfns[i], ECORE_PF_L2_QUE);
+
+	info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN);
+
+	rte_memcpy(&info->port_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
+			   ETHER_ADDR_LEN);
+
+	qed_fill_dev_info(edev, &info->common);
+
+	return 0;
+}
+
+static void
+qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
+	   const char ver_str[VER_SIZE])
+{
+	int i;
+
+	rte_memcpy(edev->name, name, NAME_SIZE);
+	for_each_hwfn(edev, i) {
+		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
+	}
+	rte_memcpy(edev->ver_str, ver_str, VER_SIZE);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
+}
+
+static uint32_t
+qed_sb_init(struct ecore_dev *edev, struct ecore_sb_info *sb_info,
+	    void *sb_virt_addr, dma_addr_t sb_phy_addr,
+	    uint16_t sb_id, enum qed_sb_type type)
+{
+	struct ecore_hwfn *p_hwfn;
+	int hwfn_index;
+	uint16_t rel_sb_id;
+	uint8_t n_hwfns;
+	uint32_t rc;
+
+	/* RoCE uses single engine and CMT uses two engines. When using both
+	 * we force only a single engine. Storage uses only engine 0 too.
+	 */
+	if (type == QED_SB_TYPE_L2_QUEUE)
+		n_hwfns = edev->num_hwfns;
+	else
+		n_hwfns = 1;
+
+	hwfn_index = sb_id % n_hwfns;
+	p_hwfn = &edev->hwfns[hwfn_index];
+	rel_sb_id = sb_id / n_hwfns;
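+	/* e.g. on a 2-hwfn (CMT) device, L2 SB ids 0,1,2,3 map to
+	 * (hwfn 0, rel 0), (hwfn 1, rel 0), (hwfn 0, rel 1), (hwfn 1, rel 1),
+	 * spreading the status blocks evenly across both engines.
+	 */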
+
+	DP_INFO(edev, "hwfn [%d] <--[init]-- SB %04x [0x%04x upper]\n",
+		hwfn_index, rel_sb_id, sb_id);
+
+	rc = ecore_int_sb_init(p_hwfn, p_hwfn->p_main_ptt, sb_info,
+			       sb_virt_addr, sb_phy_addr, rel_sb_id);
+
+	return rc;
+}
+
+static void qed_fill_link(struct ecore_hwfn *hwfn,
+			  struct qed_link_output *if_link)
+{
+	struct ecore_mcp_link_params params;
+	struct ecore_mcp_link_state link;
+	struct ecore_mcp_link_capabilities link_caps;
+
+	memset(if_link, 0, sizeof(*if_link));
+
+	/* Prepare source inputs */
+	rte_memcpy(&params, ecore_mcp_get_link_params(hwfn),
+		       sizeof(params));
+	rte_memcpy(&link, ecore_mcp_get_link_state(hwfn), sizeof(link));
+	rte_memcpy(&link_caps, ecore_mcp_get_link_capabilities(hwfn),
+		       sizeof(link_caps));
+
+	/* Set the link parameters to pass to protocol driver */
+	if (link.link_up) {
+		if_link->link_up = true;
+		if_link->speed = link.speed;
+	}
+
+	if_link->duplex = QEDE_DUPLEX_FULL;
+
+	if (params.speed.autoneg)
+		if_link->supported_caps |= QEDE_SUPPORTED_AUTONEG;
+
+	if (params.pause.autoneg || params.pause.forced_rx ||
+	    params.pause.forced_tx)
+		if_link->supported_caps |= QEDE_SUPPORTED_PAUSE;
+
+	if (params.pause.autoneg)
+		if_link->pause_config |= QED_LINK_PAUSE_AUTONEG_ENABLE;
+
+	if (params.pause.forced_rx)
+		if_link->pause_config |= QED_LINK_PAUSE_RX_ENABLE;
+
+	if (params.pause.forced_tx)
+		if_link->pause_config |= QED_LINK_PAUSE_TX_ENABLE;
+}
+
+static void
+qed_get_current_link(struct ecore_dev *edev, struct qed_link_output *if_link)
+{
+#ifdef CONFIG_QED_SRIOV
+	int i;
+#endif
+
+	qed_fill_link(&edev->hwfns[0], if_link);
+
+#ifdef CONFIG_QED_SRIOV
+	for_each_hwfn(edev, i)
+		qed_inform_vf_link_state(&edev->hwfns[i]);
+#endif
+}
+
+static int qed_set_link(struct ecore_dev *edev, struct qed_link_params *params)
+{
+	struct ecore_hwfn *hwfn;
+	struct ecore_ptt *ptt;
+	struct ecore_mcp_link_params *link_params;
+	int rc;
+
+	/* The link should be set only once per PF */
+	hwfn = &edev->hwfns[0];
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EBUSY;
+
+	link_params = ecore_mcp_get_link_params(hwfn);
+	if (params->override_flags & QED_LINK_OVERRIDE_SPEED_AUTONEG)
+		link_params->speed.autoneg = params->autoneg;
+
+	if (params->override_flags & QED_LINK_OVERRIDE_PAUSE_CONFIG) {
+		if (params->pause_config & QED_LINK_PAUSE_AUTONEG_ENABLE)
+			link_params->pause.autoneg = true;
+		else
+			link_params->pause.autoneg = false;
+		if (params->pause_config & QED_LINK_PAUSE_RX_ENABLE)
+			link_params->pause.forced_rx = true;
+		else
+			link_params->pause.forced_rx = false;
+		if (params->pause_config & QED_LINK_PAUSE_TX_ENABLE)
+			link_params->pause.forced_tx = true;
+		else
+			link_params->pause.forced_tx = false;
+	}
+
+	rc = ecore_mcp_set_link(hwfn, ptt, params->link_up);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return rc;
+}
+
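+/* Ask the management FW (MCP) to drain the NIG on every hw-function;
+ * used as a last resort when a Tx queue fails to empty (see
+ * qede_drain_txq() in qede_rxtx.c).
+ */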
+static int qed_drain(struct ecore_dev *edev)
+{
+	struct ecore_hwfn *hwfn;
+	struct ecore_ptt *ptt;
+	int i, rc;
+
+	for_each_hwfn(edev, i) {
+		hwfn = &edev->hwfns[i];
+		ptt = ecore_ptt_acquire(hwfn);
+		if (!ptt) {
+			DP_NOTICE(hwfn, true, "Failed to drain NIG; No PTT\n");
+			return -EBUSY;
+		}
+		rc = ecore_mcp_drain(hwfn, ptt);
+		ecore_ptt_release(hwfn, ptt);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int qed_nic_stop(struct ecore_dev *edev)
+{
+	int i, rc;
+
+	rc = ecore_hw_stop(edev);
+	for (i = 0; i < edev->num_hwfns; i++) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+
+		if (p_hwfn->b_sp_dpc_enabled)
+			p_hwfn->b_sp_dpc_enabled = false;
+	}
+	return rc;
+}
+
+static int qed_nic_reset(struct ecore_dev *edev)
+{
+	int rc;
+
+	rc = ecore_hw_reset(edev);
+	if (rc)
+		return rc;
+
+	ecore_resc_free(edev);
+
+	return 0;
+}
+
+static int qed_slowpath_stop(struct ecore_dev *edev)
+{
+#ifdef CONFIG_QED_SRIOV
+	int i;
+#endif
+
+	if (!edev)
+		return -ENODEV;
+
+	qed_free_stream_mem(edev);
+
+	qed_nic_stop(edev);
+
+	qed_nic_reset(edev);
+
+	return 0;
+}
+
+static void qed_remove(struct ecore_dev *edev)
+{
+	if (!edev)
+		return;
+
+	ecore_hw_remove(edev);
+}
+
+const struct qed_common_ops qed_common_ops_pass = {
+	INIT_STRUCT_FIELD(probe, &qed_probe),
+	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
+	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
+	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
+	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
+	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
+	INIT_STRUCT_FIELD(get_link, &qed_get_current_link),
+	INIT_STRUCT_FIELD(set_link, &qed_set_link),
+	INIT_STRUCT_FIELD(drain, &qed_drain),
+	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
+	INIT_STRUCT_FIELD(remove, &qed_remove),
+};
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
new file mode 100644
index 0000000..d0450f7
--- /dev/null
+++ b/drivers/net/qede/qede_rxtx.c
@@ -0,0 +1,1172 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "qede_rxtx.h"
+
+static bool gro_disable = true;	/* mod_param */
+
+static inline struct rte_mbuf *
+qede_rxmbuf_alloc(struct rte_mempool *mp)
+{
+	struct rte_mbuf *m;
+
+	m = __rte_mbuf_raw_alloc(mp);
+	__rte_mbuf_sanity_check(m, 0);
+
+	return m;
+}
+
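+/* Allocate a fresh mbuf, post it on the Rx BD ring and advance the
+ * software producer index.
+ */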
+static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
+{
+	struct rte_mbuf *new_mb = NULL;
+	struct eth_rx_bd *rx_bd;
+	dma_addr_t mapping;
+	uint16_t idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
+
+	new_mb = qede_rxmbuf_alloc(rxq->mb_pool);
+	if (unlikely(!new_mb)) {
+		PMD_RX_LOG(ERR, rxq,
+			   "Failed to allocate rx buffer "
+			   "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
+			   idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
+			   rte_mempool_count(rxq->mb_pool),
+			   rte_mempool_free_count(rxq->mb_pool));
+		return -ENOMEM;
+	}
+	rxq->sw_rx_ring[idx].mbuf = new_mb;
+	rxq->sw_rx_ring[idx].page_offset = 0;
+	mapping = new_mb->buf_physaddr;
+	/* Advance PROD and get BD pointer */
+	rx_bd = (struct eth_rx_bd *)ecore_chain_produce(&rxq->rx_bd_ring);
+	rx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
+	rx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
+	rxq->sw_rx_prod++;
+	return 0;
+}
+
+static void qede_rx_queue_release_mbufs(struct qede_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_rx_ring != NULL) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_rx_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free(rxq->sw_rx_ring[i].mbuf);
+				rxq->sw_rx_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+void qede_rx_queue_release(void *rx_queue)
+{
+	struct qede_rx_queue *rxq = rx_queue;
+
+	if (rxq != NULL) {
+		qede_rx_queue_release_mbufs(rxq);
+		rte_free(rxq->sw_rx_ring);
+		rxq->sw_rx_ring = NULL;
+		rte_free(rxq);
+	}
+}
+
+static uint16_t qede_set_rx_buf_size(struct rte_mempool *mp, uint16_t len)
+{
+	uint16_t data_size;
+	uint16_t buf_size;
+
+	data_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+	buf_size = RTE_MAX(len, data_size);
+	return buf_size + QEDE_ETH_OVERHEAD;
+}
+
+int
+qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct qede_dev *qdev = dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct rte_eth_dev_data *eth_data = dev->data;
+	struct qede_rx_queue *rxq;
+	uint16_t pkt_len = (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	size_t size;
+	int rc;
+	int i;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	/* Note: Ring size/align is controlled by struct rte_eth_desc_lim */
+	if (!rte_is_power_of_2(nb_desc)) {
+		DP_NOTICE(edev, false, "Ring size %u is not power of 2\n",
+			  nb_desc);
+		return -EINVAL;
+	}
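+	/* A power-of-2 size lets producer/consumer indices wrap with a
+	 * cheap mask (e.g. sw_rx_prod & NUM_RX_BDS(rxq)) instead of a
+	 * modulo.
+	 */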
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		qede_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* First allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("qede_rx_queue", sizeof(struct qede_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+
+	if (!rxq) {
+		DP_NOTICE(edev, false,
+			  "Unable to allocate memory for rxq on socket %u",
+			  socket_id);
+		return -ENOMEM;
+	}
+
+	rxq->qdev = qdev;
+	rxq->mb_pool = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+
+	rxq->rx_buf_size = qede_set_rx_buf_size(mp, pkt_len);
+	if (pkt_len > ETHER_MAX_LEN) {
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+		DP_NOTICE(edev, false, "jumbo frame enabled\n");
+	} else {
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	}
+
+	qdev->mtu = rxq->rx_buf_size;
+	DP_INFO(edev, "rx_buf_size=%u\n", qdev->mtu);
+
+	/* Allocate the parallel driver ring for Rx buffers */
+	size = sizeof(*rxq->sw_rx_ring) * rxq->nb_rx_desc;
+	rxq->sw_rx_ring = rte_zmalloc_socket("sw_rx_ring", size,
+					     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_rx_ring) {
+		DP_NOTICE(edev, false,
+			  "Unable to alloc memory for sw_rx_ring on socket %u\n",
+			  socket_id);
+		rte_free(rxq);
+		rxq = NULL;
+		return -ENOMEM;
+	}
+
+	/* Allocate FW Rx ring  */
+	rc = qdev->ops->common->chain_alloc(edev,
+					    ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
+					    ECORE_CHAIN_MODE_NEXT_PTR,
+					    ECORE_CHAIN_CNT_TYPE_U16,
+					    rxq->nb_rx_desc,
+					    sizeof(struct eth_rx_bd),
+					    &rxq->rx_bd_ring);
+
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(edev, false,
+			  "Unable to alloc memory for rxbd ring on socket %u\n",
+			  socket_id);
+		rte_free(rxq->sw_rx_ring);
+		rxq->sw_rx_ring = NULL;
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate FW completion ring */
+	rc = qdev->ops->common->chain_alloc(edev,
+					    ECORE_CHAIN_USE_TO_CONSUME,
+					    ECORE_CHAIN_MODE_PBL,
+					    ECORE_CHAIN_CNT_TYPE_U16,
+					    rxq->nb_rx_desc,
+					    sizeof(union eth_rx_cqe),
+					    &rxq->rx_comp_ring);
+
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(edev, false,
+			  "Unable to alloc memory for cqe ring on socket %u\n",
+			  socket_id);
+		/* TBD: Freeing RX BD ring */
+		rte_free(rxq->sw_rx_ring);
+		rxq->sw_rx_ring = NULL;
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate buffers for the Rx ring */
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		rc = qede_alloc_rx_buffer(rxq);
+		if (rc) {
+			DP_NOTICE(edev, false,
+				  "RX buffer allocation failed at idx=%d\n", i);
+			goto err4;
+		}
+	}
+
+	dev->data->rx_queues[queue_idx] = rxq;
+	if (!qdev->rx_queues)
+		qdev->rx_queues = (struct qede_rx_queue **)dev->data->rx_queues;
+
+	DP_NOTICE(edev, false, "rxq %d num_desc %u rx_buf_size=%u socket %u\n",
+		  queue_idx, nb_desc, qdev->mtu, socket_id);
+
+	return 0;
+err4:
+	qede_rx_queue_release(rxq);
+	return -ENOMEM;
+}
+
+static void qede_tx_queue_release_mbufs(struct qede_tx_queue *txq)
+{
+	unsigned int i;
+
+	PMD_TX_LOG(DEBUG, txq, "releasing %u mbufs\n", txq->nb_tx_desc);
+
+	if (txq->sw_tx_ring != NULL) {
+		for (i = 0; i < txq->nb_tx_desc; i++) {
+			if (txq->sw_tx_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free(txq->sw_tx_ring[i].mbuf);
+				txq->sw_tx_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+void qede_tx_queue_release(void *tx_queue)
+{
+	struct qede_tx_queue *txq = tx_queue;
+
+	if (txq != NULL) {
+		qede_tx_queue_release_mbufs(txq);
+		if (txq->sw_tx_ring) {
+			rte_free(txq->sw_tx_ring);
+			txq->sw_tx_ring = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+int
+qede_tx_queue_setup(struct rte_eth_dev *dev,
+		    uint16_t queue_idx,
+		    uint16_t nb_desc,
+		    unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct qede_dev *qdev = dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qede_tx_queue *txq;
+	int rc;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (!rte_is_power_of_2(nb_desc)) {
+		DP_NOTICE(edev, false, "Ring size %u is not power of 2\n",
+			  nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		qede_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	txq = rte_zmalloc_socket("qede_tx_queue", sizeof(struct qede_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+
+	if (txq == NULL) {
+		DP_ERR(edev,
+		       "Unable to allocate memory for txq on socket %u",
+		       socket_id);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->qdev = qdev;
+	txq->port_id = dev->data->port_id;
+
+	rc = qdev->ops->common->chain_alloc(edev,
+					    ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
+					    ECORE_CHAIN_MODE_PBL,
+					    ECORE_CHAIN_CNT_TYPE_U16,
+					    txq->nb_tx_desc,
+					    sizeof(union eth_tx_bd_types),
+					    &txq->tx_pbl);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(edev,
+		       "Unable to allocate memory for txbd ring on socket %u",
+		       socket_id);
+		qede_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate software ring */
+	txq->sw_tx_ring = rte_zmalloc_socket("txq->sw_tx_ring",
+					     (sizeof(struct qede_tx_entry) *
+					      txq->nb_tx_desc),
+					     RTE_CACHE_LINE_SIZE, socket_id);
+
+	if (!txq->sw_tx_ring) {
+		DP_ERR(edev,
+		       "Unable to allocate memory for txbd ring on socket %u",
+		       socket_id);
+		qede_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	txq->queue_id = queue_idx;
+
+	txq->nb_tx_avail = txq->nb_tx_desc;
+
+	txq->tx_free_thresh =
+	    tx_conf->tx_free_thresh ? tx_conf->tx_free_thresh :
+	    (txq->nb_tx_desc - QEDE_DEFAULT_TX_FREE_THRESH);
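+	/* Tx completions are polled in qede_xmit_pkts() once fewer than
+	 * tx_free_thresh descriptors remain available.
+	 */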
+
+	dev->data->tx_queues[queue_idx] = txq;
+	if (!qdev->tx_queues)
+		qdev->tx_queues = (struct qede_tx_queue **)dev->data->tx_queues;
+
+	txq->txq_counter = 0;
+
+	DP_NOTICE(edev, false,
+		  "txq %u num_desc %u tx_free_thresh %u socket %u\n",
+		  queue_idx, nb_desc, txq->tx_free_thresh, socket_id);
+
+	return 0;
+}
+
+/* This function inits fp content and resets the SB, RXQ and TXQ arrays */
+static void qede_init_fp(struct qede_dev *qdev)
+{
+	struct qede_fastpath *fp;
+	int rss_id, txq_index, tc;
+
+	memset((void *)qdev->fp_array, 0, (QEDE_RSS_CNT(qdev) *
+					   sizeof(*qdev->fp_array)));
+	memset((void *)qdev->sb_array, 0, (QEDE_RSS_CNT(qdev) *
+					   sizeof(*qdev->sb_array)));
+	for_each_rss(rss_id) {
+		fp = &qdev->fp_array[rss_id];
+
+		fp->qdev = qdev;
+		fp->rss_id = rss_id;
+
+		/* Point rxq to generic rte queues that was created
+		 * as part of queue creation.
+		 */
+		fp->rxq = qdev->rx_queues[rss_id];
+		fp->sb_info = &qdev->sb_array[rss_id];
+
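+		/* Tx queues are laid out TC-major: with N RSS queues,
+		 * TC t of RSS queue q sits at index t * N + q.
+		 */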
+		for (tc = 0; tc < qdev->num_tc; tc++) {
+			txq_index = tc * QEDE_RSS_CNT(qdev) + rss_id;
+			fp->txqs[tc] = qdev->tx_queues[txq_index];
+			fp->txqs[tc]->queue_id = txq_index;
+		}
+
+		/* Updating it to main structure */
+		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
+			 "qdev", rss_id);
+	}
+
+	qdev->gro_disable = gro_disable;
+}
+
+void qede_free_fp_arrays(struct qede_dev *qdev)
+{
+	/* It assumes qede_free_mem_load() is called before */
+	if (qdev->fp_array != NULL) {
+		rte_free(qdev->fp_array);
+		qdev->fp_array = NULL;
+	}
+
+	if (qdev->sb_array != NULL) {
+		rte_free(qdev->sb_array);
+		qdev->sb_array = NULL;
+	}
+}
+
+int qede_alloc_fp_array(struct qede_dev *qdev)
+{
+	struct qede_fastpath *fp;
+	struct ecore_dev *edev = &qdev->edev;
+	int i;
+
+	qdev->fp_array = rte_calloc("fp", QEDE_RSS_CNT(qdev),
+				    sizeof(*qdev->fp_array),
+				    RTE_CACHE_LINE_SIZE);
+
+	if (!qdev->fp_array) {
+		DP_NOTICE(edev, true, "fp array allocation failed\n");
+		return -ENOMEM;
+	}
+
+	qdev->sb_array = rte_calloc("sb", QEDE_RSS_CNT(qdev),
+				    sizeof(*qdev->sb_array),
+				    RTE_CACHE_LINE_SIZE);
+
+	if (!qdev->sb_array) {
+		DP_NOTICE(edev, true, "sb array allocation failed\n");
+		rte_free(qdev->fp_array);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/* This function allocates fast-path status block memory */
+static int
+qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
+		  uint16_t sb_id)
+{
+	struct ecore_dev *edev = &qdev->edev;
+	struct status_block *sb_virt;
+	dma_addr_t sb_phys;
+	int rc;
+
+	sb_virt = OSAL_DMA_ALLOC_COHERENT(edev, &sb_phys, sizeof(*sb_virt));
+
+	if (!sb_virt) {
+		DP_ERR(edev, "Status block allocation failed\n");
+		return -ENOMEM;
+	}
+
+	rc = qdev->ops->common->sb_init(edev, sb_info,
+					sb_virt, sb_phys, sb_id,
+					QED_SB_TYPE_L2_QUEUE);
+	if (rc) {
+		DP_ERR(edev, "Status block initialization failed\n");
+		/* TBD: No dma_free_coherent possible */
+		return rc;
+	}
+
+	return 0;
+}
+
+static int qede_alloc_mem_fp(struct qede_dev *qdev, struct qede_fastpath *fp)
+{
+	return qede_alloc_mem_sb(qdev, fp->sb_info, fp->rss_id);
+}
+
+static void qede_shrink_txq(struct qede_dev *qdev __rte_unused,
+			    uint16_t num_rss __rte_unused)
+{
+	/* @@@TBD - this should also re-set the qed interrupts */
+}
+
+/* This function allocates all qede memory at NIC load. */
+static int qede_alloc_mem_load(struct qede_dev *qdev)
+{
+	int rc = 0, rss_id;
+	struct ecore_dev *edev = &qdev->edev;
+
+	for (rss_id = 0; rss_id < QEDE_RSS_CNT(qdev); rss_id++) {
+		struct qede_fastpath *fp = &qdev->fp_array[rss_id];
+
+		rc = qede_alloc_mem_fp(qdev, fp);
+		if (rc)
+			break;
+	}
+
+	if (rss_id != QEDE_RSS_CNT(qdev)) {
+		/* Failed allocating memory for all the queues */
+		if (!rss_id) {
+			DP_ERR(edev,
+			       "Failed to alloc memory for leading queue\n");
+			return -ENOMEM;
+		}
+		DP_NOTICE(edev, false,
+			  "Failed to allocate memory for all of "
+			  "RSS queues\n"
+			  "Desired: %d queues, allocated: %d queues\n",
+			  QEDE_RSS_CNT(qdev), rss_id);
+		/* Continue with the queues that did allocate */
+		qede_shrink_txq(qdev, rss_id);
+		qdev->num_rss = rss_id;
+	}
+
+	return 0;
+}
+
+static inline void
+qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
+{
+	uint16_t bd_prod = ecore_chain_get_prod_idx(&rxq->rx_bd_ring);
+	uint16_t cqe_prod = ecore_chain_get_prod_idx(&rxq->rx_comp_ring);
+	struct eth_rx_prod_data rx_prods = { 0 };
+
+	/* Update producers */
+	rx_prods.bd_prod = rte_cpu_to_le_16(bd_prod);
+	rx_prods.cqe_prod = rte_cpu_to_le_16(cqe_prod);
+
+	/* Make sure that the BD and SGE data is updated before updating the
+	 * producers since FW might read the BD/SGE right after the producer
+	 * is updated.
+	 */
+	rte_wmb();
+
+	internal_ram_wr(rxq->hw_rxq_prod_addr, sizeof(rx_prods),
+			(uint32_t *)&rx_prods);
+
+	/* mmiowb is needed to synchronize doorbell writes from more than one
+	 * processor. It guarantees that the write arrives to the device before
+	 * the napi lock is released and another qede_poll is called (possibly
+	 * on another CPU). Without this barrier, the next doorbell can bypass
+	 * this doorbell. This is applicable to IA64/Altix systems.
+	 */
+	rte_wmb();
+
+	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u\n", bd_prod, cqe_prod);
+}
+
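+/* Default RETA entry: indirection-table slots are spread round-robin
+ * across the Rx rings, e.g. with 4 rings slots 0..7 map to queues
+ * 0,1,2,3,0,1,2,3.
+ */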
+static inline uint32_t
+qede_rxfh_indir_default(uint32_t index, uint32_t n_rx_rings)
+{
+	return index % n_rx_rings;
+}
+
+#ifdef ENC_SUPPORTED
+static bool qede_tunn_exist(uint16_t flag)
+{
+	return !!((PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK <<
+		    PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT) & flag);
+}
+
+static inline uint8_t qede_check_tunn_csum(uint16_t flag)
+{
+	uint8_t tcsum = 0;
+	uint16_t csum_flag = 0;
+
+	if ((PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK <<
+	     PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT) & flag)
+		csum_flag |= PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK <<
+		    PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT;
+
+	if ((PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK <<
+	     PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT) & flag) {
+		csum_flag |= PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK <<
+		    PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT;
+		tcsum = QEDE_TUNN_CSUM_UNNECESSARY;
+	}
+
+	csum_flag |= PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK <<
+	    PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT |
+	    PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK <<
+	    PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT;
+
+	if (csum_flag & flag)
+		return QEDE_CSUM_ERROR;
+
+	return QEDE_CSUM_UNNECESSARY | tcsum;
+}
+#else
+static inline bool qede_tunn_exist(uint16_t flag __rte_unused)
+{
+	return false;
+}
+
+static inline uint8_t qede_check_tunn_csum(uint16_t flag __rte_unused)
+{
+	return 0;
+}
+#endif
+
+static inline uint8_t qede_check_notunn_csum(uint16_t flag)
+{
+	uint8_t csum = 0;
+	uint16_t csum_flag = 0;
+
+	if ((PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK <<
+	     PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT) & flag) {
+		csum_flag |= PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK <<
+		    PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT;
+		csum = QEDE_CSUM_UNNECESSARY;
+	}
+
+	csum_flag |= PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK <<
+	    PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT;
+
+	if (csum_flag & flag)
+		return QEDE_CSUM_ERROR;
+
+	return csum;
+}
+
+static inline uint8_t qede_check_csum(uint16_t flag)
+{
+	if (likely(!qede_tunn_exist(flag)))
+		return qede_check_notunn_csum(flag);
+	else
+		return qede_check_tunn_csum(flag);
+}
+
+static inline void qede_rx_bd_ring_consume(struct qede_rx_queue *rxq)
+{
+	ecore_chain_consume(&rxq->rx_bd_ring);
+	rxq->sw_rx_cons++;
+}
+
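+/* Re-post a consumed Rx buffer on the BD ring without allocating a new
+ * mbuf; keeps the ring full when a packet is dropped (checksum error or
+ * mbuf allocation failure in qede_recv_pkts()).
+ */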
+static inline void
+qede_reuse_page(struct qede_dev *qdev,
+		struct qede_rx_queue *rxq, struct qede_rx_entry *curr_cons)
+{
+	struct eth_rx_bd *rx_bd_prod = ecore_chain_produce(&rxq->rx_bd_ring);
+	uint16_t idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+	struct qede_rx_entry *curr_prod;
+	dma_addr_t new_mapping;
+
+	curr_prod = &rxq->sw_rx_ring[idx];
+	*curr_prod = *curr_cons;
+
+	new_mapping = curr_prod->mbuf->buf_physaddr + curr_prod->page_offset;
+
+	rx_bd_prod->addr.hi = rte_cpu_to_le_32(U64_HI(new_mapping));
+	rx_bd_prod->addr.lo = rte_cpu_to_le_32(U64_LO(new_mapping));
+
+	rxq->sw_rx_prod++;
+}
+
+static inline void
+qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq,
+			struct qede_dev *qdev, uint8_t count)
+{
+	struct qede_rx_entry *curr_cons;
+
+	for (; count > 0; count--) {
+		curr_cons = &rxq->sw_rx_ring[rxq->sw_rx_cons & NUM_RX_BDS(rxq)];
+		qede_reuse_page(qdev, rxq, curr_cons);
+		qede_rx_bd_ring_consume(rxq);
+	}
+}
+
+static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
+{
+	uint32_t p_type = RTE_PTYPE_UNKNOWN;
+	/* TBD - L4 indications needed ? */
+	uint16_t protocol = ((PARSING_AND_ERR_FLAGS_L3TYPE_MASK <<
+			      PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT) & flags);
+
+	/* protocol = 3 means LLC/SNAP over Ethernet */
+	if (unlikely(protocol == 0 || protocol == 3))
+		p_type = RTE_PTYPE_UNKNOWN;
+	else if (protocol == 1)
+		p_type = RTE_PTYPE_L3_IPV4;
+	else if (protocol == 2)
+		p_type = RTE_PTYPE_L3_IPV6;
+
+	return RTE_PTYPE_L2_ETHER | p_type;
+}
+
+uint16_t
+qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct qede_rx_queue *rxq = p_rxq;
+	struct qede_dev *qdev = rxq->qdev;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp = &qdev->fp_array[rxq->queue_id];
+	uint16_t hw_comp_cons, sw_comp_cons, sw_rx_index;
+	uint16_t rx_pkt = 0;
+	union eth_rx_cqe *cqe;
+	struct eth_fast_path_rx_reg_cqe *fp_cqe;
+	register struct rte_mbuf *rx_mb = NULL;
+	enum eth_rx_cqe_type cqe_type;
+	uint16_t len, pad;
+	uint16_t preload_idx;
+	uint8_t csum_flag;
+	uint16_t parse_flag;
+
+	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
+	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
+
+	rte_rmb();
+
+	if (hw_comp_cons == sw_comp_cons)
+		return 0;
+
+	while (sw_comp_cons != hw_comp_cons) {
+		/* Get the CQE from the completion ring */
+		cqe =
+		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
+		cqe_type = cqe->fast_path_regular.type;
+
+		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
+			PMD_RX_LOG(DEBUG, rxq, "Got a slowpath CQE\n");
+
+			qdev->ops->eth_cqe_completion(edev, fp->rss_id,
+				(struct eth_slow_path_rx_cqe *)cqe);
+			goto next_cqe;
+		}
+
+		/* Get the data from the SW ring */
+		sw_rx_index = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
+		assert(rx_mb != NULL);
+
+		/* non GRO */
+		fp_cqe = &cqe->fast_path_regular;
+
+		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+		pad = fp_cqe->placement_offset;
+		PMD_RX_LOG(DEBUG, rxq,
+			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
+			   " len = %u, parsing_flags = %d\n",
+			   cqe_type, fp_cqe->bitfields,
+			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
+			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
+
+		/* If this is an error packet then drop it */
+		parse_flag =
+		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
+		csum_flag = qede_check_csum(parse_flag);
+		if (unlikely(csum_flag == QEDE_CSUM_ERROR)) {
+			PMD_RX_LOG(ERR, rxq,
+				   "CQE in CONS = %u has error, flags = 0x%x "
+				   "dropping incoming packet\n",
+				   sw_comp_cons, parse_flag);
+			rxq->rx_hw_errors++;
+			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
+			goto next_cqe;
+		}
+
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			PMD_RX_LOG(ERR, rxq,
+				   "New buffer allocation failed,"
+				   "dropping incoming packet\n");
+			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
+			rte_eth_devices[rxq->port_id].
+			    data->rx_mbuf_alloc_failed++;
+			rxq->rx_alloc_errors++;
+			break;
+		}
+
+		qede_rx_bd_ring_consume(rxq);
+
+		/* Prefetch next mbuf while processing current one. */
+		preload_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		rte_prefetch0(rxq->sw_rx_ring[preload_idx].mbuf);
+
+		if (fp_cqe->bd_num != 1)
+			PMD_RX_LOG(DEBUG, rxq,
+				   "Jumbo-over-BD packet not supported\n");
+
+		rx_mb->buf_len = len + pad;
+		rx_mb->data_off = pad;
+		rx_mb->nb_segs = 1;
+		rx_mb->data_len = len;
+		rx_mb->pkt_len = len;
+		rx_mb->port = rxq->port_id;
+		rx_mb->packet_type = qede_rx_cqe_to_pkt_type(parse_flag);
+		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
+
+		if (CQE_HAS_VLAN(parse_flag) ||
+		    CQE_HAS_OUTER_VLAN(parse_flag)) {
+			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		rx_pkts[rx_pkt] = rx_mb;
+		rx_pkt++;
+next_cqe:
+		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
+		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
+		if (rx_pkt == nb_pkts) {
+			PMD_RX_LOG(DEBUG, rxq,
+				   "Budget reached nb_pkts=%u received=%u\n",
+				   rx_pkt, nb_pkts);
+			break;
+		}
+	}
+
+	qede_update_rx_prod(qdev, rxq);
+
+	PMD_RX_LOG(DEBUG, rxq, "rx_pkts=%u core=%d\n", rx_pkt, rte_lcore_id());
+
+	return rx_pkt;
+}
+
+static inline int
+qede_free_tx_pkt(struct ecore_dev *edev, struct qede_tx_queue *txq)
+{
+	uint16_t idx = TX_CONS(txq);
+	struct eth_tx_bd *tx_data_bd;
+	struct rte_mbuf *mbuf = txq->sw_tx_ring[idx].mbuf;
+
+	if (unlikely(!mbuf)) {
+		PMD_TX_LOG(ERR, txq,
+			   "null mbuf nb_tx_desc %u nb_tx_avail %u "
+			   "sw_tx_cons %u sw_tx_prod %u\n",
+			   txq->nb_tx_desc, txq->nb_tx_avail, idx,
+			   TX_PROD(txq));
+		return -1;
+	}
+
+	/* Free now */
+	rte_pktmbuf_free_seg(mbuf);
+	txq->sw_tx_ring[idx].mbuf = NULL;
+	ecore_chain_consume(&txq->tx_pbl);
+	txq->nb_tx_avail++;
+
+	return 0;
+}
+
+static inline uint16_t
+qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
+{
+	uint16_t tx_compl = 0;
+	uint16_t hw_bd_cons;
+	int rc;
+
+	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+	rte_compiler_barrier();
+
+	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
+		rc = qede_free_tx_pkt(edev, txq);
+		if (rc) {
+			DP_NOTICE(edev, true,
+				  "hw_bd_cons = %d, chain_cons=%d\n",
+				  hw_bd_cons,
+				  ecore_chain_get_cons_idx(&txq->tx_pbl));
+			break;
+		}
+		txq->sw_tx_cons++;	/* Making TXD available */
+		tx_compl++;
+	}
+
+	PMD_TX_LOG(DEBUG, txq, "Tx compl %u sw_tx_cons %u avail %u\n",
+		   tx_compl, txq->sw_tx_cons, txq->nb_tx_avail);
+	return tx_compl;
+}
+
+uint16_t
+qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	struct qede_dev *qdev = txq->qdev;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp = &qdev->fp_array[txq->queue_id];
+	struct eth_tx_1st_bd *first_bd;
+	uint16_t nb_tx_pkts;
+	uint16_t nb_pkt_sent = 0;
+	uint16_t bd_prod;
+	uint16_t idx;
+	uint16_t tx_count;
+
+	if (unlikely(txq->nb_tx_avail < txq->tx_free_thresh)) {
+		PMD_TX_LOG(DEBUG, txq, "send=%u avail=%u free_thresh=%u\n",
+			   nb_pkts, txq->nb_tx_avail, txq->tx_free_thresh);
+		(void)qede_process_tx_compl(edev, txq);
+	}
+
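+	/* Conservatively reserve the worst case of MAX_NUM_TX_BDS
+	 * descriptors per packet so a burst can never overrun the PBL ring.
+	 */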
+	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail / MAX_NUM_TX_BDS));
+	if (unlikely(nb_tx_pkts == 0)) {
+		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u\n",
+			   nb_pkts, txq->nb_tx_avail);
+		return 0;
+	}
+
+	tx_count = nb_tx_pkts;
+	while (nb_tx_pkts--) {
+		/* Fill the entry in the SW ring and the BDs in the FW ring */
+		idx = TX_PROD(txq);
+		struct rte_mbuf *mbuf = *tx_pkts++;
+		txq->sw_tx_ring[idx].mbuf = mbuf;
+		first_bd = (struct eth_tx_1st_bd *)
+		    ecore_chain_produce(&txq->tx_pbl);
+		first_bd->data.bd_flags.bitfields =
+		    1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
+		/* Map mbuf linear data for DMA and set in the first BD */
+		QEDE_BD_SET_ADDR_LEN(first_bd, RTE_MBUF_DATA_DMA_ADDR(mbuf),
+				     mbuf->data_len);
+
+		/* Descriptor based VLAN insertion */
+		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			first_bd->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
+			first_bd->data.bd_flags.bitfields |=
+			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
+		}
+
+		/* Offload the IP checksum in the hardware */
+		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
+			first_bd->data.bd_flags.bitfields |=
+			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
+		}
+
+		/* L4 checksum offload (tcp or udp) */
+		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
+			first_bd->data.bd_flags.bitfields |=
+			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
+			/* IPv6 + extn. -> later */
+		}
+		first_bd->data.nbds = MAX_NUM_TX_BDS;
+		txq->sw_tx_prod++;
+		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
+		txq->nb_tx_avail--;
+		bd_prod =
+		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+		nb_pkt_sent++;
+	}
+
+	/* Write value of prod idx into bd_prod */
+	txq->tx_db.data.bd_prod = bd_prod;
+	rte_wmb();
+	rte_compiler_barrier();
+	DIRECT_REG_WR(edev, txq->doorbell_addr, txq->tx_db.raw);
+	rte_wmb();
+
+	/* Check again for Tx completions if enabled */
+#ifdef RTE_LIBRTE_QEDE_TX_COMP_END
+	(void)qede_process_tx_compl(edev, txq);
+#endif
+
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d\n",
+		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+
+	return nb_pkt_sent;
+}
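
For context, applications never call qede_xmit_pkts() directly; they reach it
through rte_eth_tx_burst(), which dispatches to the tx_pkt_burst hook the PMD
registers. A minimal caller-side sketch (the port/queue ids are illustrative
assumptions and error handling is elided):

/* Caller-side sketch, not part of this patch. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static inline void
send_burst(uint8_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t sent = 0;

	/* Retry until the NIC accepts the whole burst; a real caller
	 * would bound this loop and free unsent mbufs.
	 */
	while (sent < n)
		sent += rte_eth_tx_burst(port_id, queue_id,
					 pkts + sent, n - sent);
}
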
+
+int qede_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_link_output link_output;
+	int rc = 0;
+
+	DP_NOTICE(edev, false, "port %u\n", eth_dev->data->port_id);
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (qdev->state == QEDE_START) {
+		DP_INFO(edev, "device already started\n");
+		return 0;
+	}
+
+	if (qdev->state == QEDE_CLOSE) {
+		rc = qede_alloc_fp_array(qdev);
+		if (rc)
+			return rc;
+		qede_init_fp(qdev);
+		rc = qede_alloc_mem_load(qdev);
+		DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
+			QEDE_RSS_CNT(qdev), qdev->num_tc);
+	} else if (qdev->state == QEDE_STOP) {
+		DP_INFO(edev, "restarting port %u\n", eth_dev->data->port_id);
+	} else {
+		DP_INFO(edev, "unknown state port %u\n",
+			eth_dev->data->port_id);
+		return -EINVAL;
+	}
+
+	if (rc) {
+		DP_ERR(edev, "Failed to start queues\n");
+		/* TBD: free */
+		return rc;
+	}
+
+	DP_INFO(edev, "Start VPORT, RXQ and TXQ succeeded\n");
+
+	qede_dev_set_link_state(eth_dev, true);
+
+	/* Query whether the link is already up */
+	memset(&link_output, 0, sizeof(link_output));
+	qdev->ops->common->get_link(edev, &link_output);
+	DP_NOTICE(edev, false, "link status: %s\n",
+		  link_output.link_up ? "up" : "down");
+
+	qdev->state = QEDE_START;
+
+	qede_config_rx_mode(eth_dev);
+
+	DP_INFO(edev, "dev_state is QEDE_START\n");
+
+	return 0;
+}
+
+static int qede_drain_txq(struct qede_dev *qdev,
+			  struct qede_tx_queue *txq, bool allow_drain)
+{
+	struct ecore_dev *edev = &qdev->edev;
+	int rc, cnt = 1000;
+
+	while (txq->sw_tx_cons != txq->sw_tx_prod) {
+		qede_process_tx_compl(edev, txq);
+		if (!cnt) {
+			if (allow_drain) {
+				DP_NOTICE(edev, true,
+					  "Tx queue[%u] is stuck, "
+					  "requesting MCP to drain\n",
+					  txq->queue_id);
+				rc = qdev->ops->common->drain(edev);
+				if (rc)
+					return rc;
+				return qede_drain_txq(qdev, txq, false);
+			} else {
+				DP_NOTICE(edev, true,
+					  "Timeout waiting for tx queue[%d]: "
+					  "PROD=%d, CONS=%d\n",
+					  txq->queue_id, txq->sw_tx_prod,
+					  txq->sw_tx_cons);
+				return -ENODEV;
+			}
+		}
+		cnt--;
+		DELAY(1000);
+		rte_compiler_barrier();
+	}
+
+	/* FW finished processing, wait for HW to transmit all tx packets */
+	DELAY(2000);
+
+	return 0;
+}
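
The drain loop above is a bounded poll: up to 1000 iterations with a 1000 us
DELAY() each, roughly one second, before escalating to an MCP-assisted drain.
The same pattern in isolation, as a sketch (the callback shape, <stdbool.h>
and -ETIMEDOUT are assumptions for illustration):

/* Bounded-poll sketch mirroring qede_drain_txq(). */
static int wait_bounded(bool (*done)(void *arg), void *arg)
{
	int cnt = 1000;

	while (!done(arg)) {
		if (!cnt--)
			return -ETIMEDOUT;	/* caller escalates (MCP drain) */
		DELAY(1000);			/* 1000 us per iteration */
	}
	return 0;
}
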
+
+static int qede_stop_queues(struct qede_dev *qdev)
+{
+	struct qed_update_vport_params vport_update_params;
+	struct ecore_dev *edev = &qdev->edev;
+	int rc, tc, i;
+
+	/* Disable the vport */
+	memset(&vport_update_params, 0, sizeof(vport_update_params));
+	vport_update_params.vport_id = 0;
+	vport_update_params.update_vport_active_flg = 1;
+	vport_update_params.vport_active_flg = 0;
+	vport_update_params.update_rss_flg = 0;
+
+	DP_INFO(edev, "vport_update\n");
+
+	rc = qdev->ops->vport_update(edev, &vport_update_params);
+	if (rc) {
+		DP_ERR(edev, "Failed to update vport\n");
+		return rc;
+	}
+
+	DP_INFO(edev, "Flushing tx queues\n");
+
+	/* Flush Tx queues. If needed, request drain from MCP */
+	for_each_rss(i) {
+		struct qede_fastpath *fp = &qdev->fp_array[i];
+		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct qede_tx_queue *txq = fp->txqs[tc];
+			rc = qede_drain_txq(qdev, txq, true);
+			if (rc)
+				return rc;
+		}
+	}
+
+	/* Stop all Queues in reverse order */
+	for (i = QEDE_RSS_CNT(qdev) - 1; i >= 0; i--) {
+		struct qed_stop_rxq_params rx_params;
+
+		/* Stop the Tx Queue(s) */
+		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct qed_stop_txq_params tx_params;
+
+			memset(&tx_params, 0, sizeof(tx_params));
+			tx_params.rss_id = i;
+			tx_params.tx_queue_id = tc * QEDE_RSS_CNT(qdev) + i;
+
+			DP_INFO(edev, "Stopping tx queues\n");
+			rc = qdev->ops->q_tx_stop(edev, &tx_params);
+			if (rc) {
+				DP_ERR(edev, "Failed to stop TXQ #%d\n",
+				       tx_params.tx_queue_id);
+				return rc;
+			}
+		}
+
+		/* Stop the Rx Queue */
+		memset(&rx_params, 0, sizeof(rx_params));
+		rx_params.rss_id = i;
+		rx_params.rx_queue_id = i;
+		rx_params.eq_completion_only = 1;
+
+		DP_INFO(edev, "Stopping rx queues\n");
+
+		rc = qdev->ops->q_rx_stop(edev, &rx_params);
+		if (rc) {
+			DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
+			return rc;
+		}
+	}
+
+	DP_INFO(edev, "Stopping vports\n");
+
+	/* Stop the vport */
+	rc = qdev->ops->vport_stop(edev, 0);
+	if (rc)
+		DP_ERR(edev, "Failed to stop VPORT\n");
+
+	return rc;
+}
+
+void qede_reset_fp_rings(struct qede_dev *qdev)
+{
+	uint16_t rss_id;
+	uint8_t tc;
+
+	for_each_rss(rss_id) {
+		struct qede_fastpath *fp = &qdev->fp_array[rss_id];
+
+		DP_INFO(&qdev->edev, "reset fp chain for rss %u\n", rss_id);
+		ecore_chain_reset(&fp->rxq->rx_bd_ring);
+		ecore_chain_reset(&fp->rxq->rx_comp_ring);
+		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct qede_tx_queue *txq = fp->txqs[tc];
+			ecore_chain_reset(&txq->tx_pbl);
+		}
+	}
+}
+
+/* This function frees all memory of a single fp */
+static void qede_free_mem_fp(struct qede_dev *qdev, struct qede_fastpath *fp)
+{
+	uint8_t tc;
+
+	qede_rx_queue_release(fp->rxq);
+	for (tc = 0; tc < qdev->num_tc; tc++)
+		qede_tx_queue_release(fp->txqs[tc]);
+}
+
+void qede_free_mem_load(struct qede_dev *qdev)
+{
+	uint8_t rss_id;
+
+	for_each_rss(rss_id) {
+		struct qede_fastpath *fp = &qdev->fp_array[rss_id];
+		qede_free_mem_fp(qdev, fp);
+	}
+	/* qdev->num_rss = 0; */
+}
+
+/*
+ * Stop an Ethernet device. The device can be restarted with a call to
+ * rte_eth_dev_start().
+ * Do not change link state and do not release sw structures.
+ */
+void qede_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	int rc;
+
+	DP_NOTICE(edev, false, "port %u\n", eth_dev->data->port_id);
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (qdev->state != QEDE_START) {
+		DP_INFO(edev, "device not yet started\n");
+		return;
+	}
+
+	rc = qede_stop_queues(qdev);
+
+	if (rc)
+		DP_ERR(edev, "Failed to stop queues\n");
+
+	DP_INFO(edev, "Stopped queues\n");
+
+	qdev->ops->fastpath_stop(edev);
+
+	qede_reset_fp_rings(qdev);
+
+	qdev->state = QEDE_STOP;
+
+	DP_INFO(edev, "dev_state is QEDE_STOP\n");
+}
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
new file mode 100644
index 0000000..5e4e55b
--- /dev/null
+++ b/drivers/net/qede/qede_rxtx.h
@@ -0,0 +1,187 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef _QEDE_RXTX_H_
+#define _QEDE_RXTX_H_
+
+#include "qede_ethdev.h"
+
+/* Ring Descriptors */
+#define RX_RING_SIZE_POW        16	/* 64K */
+#define RX_RING_SIZE            (1ULL << RX_RING_SIZE_POW)
+#define NUM_RX_BDS_MAX          (RX_RING_SIZE - 1)
+#define NUM_RX_BDS_MIN          128
+#define NUM_RX_BDS_DEF          NUM_RX_BDS_MAX
+#define NUM_RX_BDS(q)           (q->nb_rx_desc - 1)
+
+#define TX_RING_SIZE_POW        16	/* 64K */
+#define TX_RING_SIZE            (1ULL << TX_RING_SIZE_POW)
+#define NUM_TX_BDS_MAX          (TX_RING_SIZE - 1)
+#define NUM_TX_BDS_MIN          128
+#define NUM_TX_BDS_DEF          NUM_TX_BDS_MAX
+#define NUM_TX_BDS(q)           (q->nb_tx_desc - 1)
+
+#define TX_CONS(txq)            (txq->sw_tx_cons & NUM_TX_BDS(txq))
+#define TX_PROD(txq)            (txq->sw_tx_prod & NUM_TX_BDS(txq))
+
+/* Number of TX BDs per packet used currently */
+#define MAX_NUM_TX_BDS			1
+
+#define QEDE_DEFAULT_TX_FREE_THRESH	32
+
+#define QEDE_CSUM_ERROR			(1 << 0)
+#define QEDE_CSUM_UNNECESSARY		(1 << 1)
+#define QEDE_TUNN_CSUM_UNNECESSARY	(1 << 2)
+
+#define RTE_MBUF_DATA_DMA_ADDR(mb) \
+	((uint64_t)((mb)->buf_physaddr + (mb)->data_off))
+
+#define QEDE_BD_SET_ADDR_LEN(bd, maddr, len) \
+	do { \
+		(bd)->addr.hi = rte_cpu_to_le_32(U64_HI(maddr)); \
+		(bd)->addr.lo = rte_cpu_to_le_32(U64_LO(maddr)); \
+		(bd)->nbytes = rte_cpu_to_le_16(len); \
+	} while (0)
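
QEDE_BD_SET_ADDR_LEN splits the 64-bit DMA address into the two little-endian
32-bit halves the firmware BD expects. An open-coded equivalent for one mbuf,
a sketch assuming U64_HI/U64_LO are the usual shift/mask helpers from the
base driver:

static inline void
bd_set_addr_len_sketch(struct eth_tx_1st_bd *bd, struct rte_mbuf *mbuf)
{
	uint64_t maddr = RTE_MBUF_DATA_DMA_ADDR(mbuf);

	bd->addr.hi = rte_cpu_to_le_32((uint32_t)(maddr >> 32));
	bd->addr.lo = rte_cpu_to_le_32((uint32_t)(maddr & 0xffffffffULL));
	bd->nbytes = rte_cpu_to_le_16(mbuf->data_len);
}
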
+
+#define CQE_HAS_VLAN(flags) \
+	((flags) & (PARSING_AND_ERR_FLAGS_TAG8021QEXIST_MASK \
+		<< PARSING_AND_ERR_FLAGS_TAG8021QEXIST_SHIFT))
+
+#define CQE_HAS_OUTER_VLAN(flags) \
+	((flags) & (PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_MASK \
+		<< PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_SHIFT))
+
+#define QEDE_IP_HEADER_ALIGNMENT_PADDING        2
+
+/* Max supported alignment is 256 (shift of 8); the minimum alignment
+ * shift of 6 is optimal for 57xxx HW performance.
+ */
+#define QEDE_L1_CACHE_SHIFT	6
+#define QEDE_RX_ALIGN_SHIFT	(RTE_MAX(6, RTE_MIN(8, QEDE_L1_CACHE_SHIFT)))
+#define QEDE_FW_RX_ALIGN_END	(1UL << QEDE_RX_ALIGN_SHIFT)
+
+/* L2 header size + 2*VLANs (8 bytes) + LLC SNAP (8 bytes) */
+#define QEDE_ETH_OVERHEAD       (ETHER_HDR_LEN + ETHER_CRC_LEN + \
+				 8 + 8 + QEDE_IP_HEADER_ALIGNMENT_PADDING + \
+				 QEDE_FW_RX_ALIGN_END)
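
Worked out with the standard header sizes (ETHER_HDR_LEN = 14,
ETHER_CRC_LEN = 4) and the default alignment shift of 6, the overhead
evaluates to:

/*
 * QEDE_ETH_OVERHEAD = 14 (L2 hdr) + 4 (CRC) + 8 (2 VLANs) + 8 (LLC SNAP)
 *                   + 2 (IP align padding) + 64 (FW Rx align end)
 *                   = 100 bytes
 *
 * so each Rx buffer is sized at roughly MTU + 100 bytes.
 */
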
+
+/* TBD: Excluding IPV6 */
+#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP | \
+				 ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define QEDE_TXQ_FLAGS		((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS)
+
+#define MAX_NUM_TC		8
+
+#define for_each_rss(i) for (i = 0; i < qdev->num_rss; i++)
+
+/*
+ * RX BD descriptor ring
+ */
+struct qede_rx_entry {
+	struct rte_mbuf *mbuf;
+	uint32_t page_offset;
+	/* allows expansion .. */
+};
+
+/*
+ * Structure associated with each RX queue.
+ */
+struct qede_rx_queue {
+	struct rte_mempool *mb_pool;
+	struct ecore_chain rx_bd_ring;
+	struct ecore_chain rx_comp_ring;
+	uint16_t *hw_cons_ptr;
+	void OSAL_IOMEM *hw_rxq_prod_addr;
+	struct qede_rx_entry *sw_rx_ring;
+	uint16_t sw_rx_cons;
+	uint16_t sw_rx_prod;
+	uint16_t nb_rx_desc;
+	uint16_t queue_id;
+	uint16_t port_id;
+	uint16_t rx_buf_size;
+	uint64_t rx_hw_errors;
+	uint64_t rx_alloc_errors;
+	struct qede_dev *qdev;
+};
+
+/*
+ * TX BD descriptor ring
+ */
+struct qede_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint8_t flags;
+};
+
+union db_prod {
+	struct eth_db_data data;
+	uint32_t raw;
+};
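
The union gives the driver a structured view for composing the doorbell and a
raw view for issuing it as a single 32-bit store; this is the
compose/rte_wmb()/DIRECT_REG_WR() sequence at the end of qede_xmit_pkts().
A condensed sketch, assuming the remaining tx_db fields were set at queue
init:

static inline void
ring_tx_doorbell_sketch(struct ecore_dev *edev, struct qede_tx_queue *txq,
			uint16_t bd_prod)
{
	txq->tx_db.data.bd_prod = bd_prod;	/* structured view */
	rte_wmb();				/* BDs visible before doorbell */
	DIRECT_REG_WR(edev, txq->doorbell_addr, txq->tx_db.raw);
}
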
+
+struct qede_tx_queue {
+	struct ecore_chain tx_pbl;
+	struct qede_tx_entry *sw_tx_ring;
+	uint16_t nb_tx_desc;
+	uint16_t nb_tx_avail;
+	uint16_t tx_free_thresh;
+	uint16_t queue_id;
+	uint16_t *hw_cons_ptr;
+	uint16_t sw_tx_cons;
+	uint16_t sw_tx_prod;
+	void OSAL_IOMEM *doorbell_addr;
+	volatile union db_prod tx_db;
+	uint16_t port_id;
+	uint64_t txq_counter;
+	struct qede_dev *qdev;
+};
+
+struct qede_fastpath {
+	struct qede_dev *qdev;
+	uint8_t rss_id;
+	struct ecore_sb_info *sb_info;
+	struct qede_rx_queue *rxq;
+	struct qede_tx_queue *txqs[MAX_NUM_TC];
+	char name[80];
+};
+
+/*
+ * RX/TX function prototypes
+ */
+int qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
+
+int qede_tx_queue_setup(struct rte_eth_dev *dev,
+			uint16_t queue_idx,
+			uint16_t nb_desc,
+			unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+
+void qede_rx_queue_release(void *rx_queue);
+
+void qede_tx_queue_release(void *tx_queue);
+
+int qede_dev_start(struct rte_eth_dev *eth_dev);
+
+void qede_dev_stop(struct rte_eth_dev *eth_dev);
+
+void qede_reset_fp_rings(struct qede_dev *qdev);
+
+void qede_free_fp_arrays(struct qede_dev *qdev);
+
+void qede_free_mem_load(struct qede_dev *qdev);
+
+uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
+
+uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts);
+
+#endif /* _QEDE_RXTX_H_ */
diff --git a/drivers/net/qede/rte_pmd_qede_version.map b/drivers/net/qede/rte_pmd_qede_version.map
new file mode 100644
index 0000000..5151684
--- /dev/null
+++ b/drivers/net/qede/rte_pmd_qede_version.map
@@ -0,0 +1,4 @@
+DPDK_2.2 {
+
+	local: *;
+};
-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 06/10] qede: Add L2 support
  2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
                   ` (3 preceding siblings ...)
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 05/10] qede: Add core driver Rasesh Mody
@ 2016-03-10 13:45 ` Rasesh Mody
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 07/10] qede: Add SRIOV support Rasesh Mody
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 13:45 UTC (permalink / raw)
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/Makefile            |    2 +
 drivers/net/qede/base/ecore_chain.h  |    6 +
 drivers/net/qede/base/ecore_l2.c     | 1608 ++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h     |  101 +++
 drivers/net/qede/base/ecore_l2_api.h |  401 +++++++++
 drivers/net/qede/qede_eth_if.c       |  456 ++++++++++
 drivers/net/qede/qede_eth_if.h       |    2 +-
 drivers/net/qede/qede_ethdev.c       |   17 +-
 drivers/net/qede/qede_ethdev.h       |    1 +
 drivers/net/qede/qede_if.h           |    9 +
 drivers/net/qede/qede_main.c         |    2 +
 drivers/net/qede/qede_rxtx.c         |  192 ++++
 12 files changed, 2793 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_l2.c
 create mode 100644 drivers/net/qede/base/ecore_l2.h
 create mode 100644 drivers/net/qede/base/ecore_l2_api.h
 create mode 100644 drivers/net/qede/qede_eth_if.c

diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index efaefb2..eb08635 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -70,6 +70,7 @@ $(foreach obj, $(ECORE_DRIVER_OBJS), $(eval CFLAGS+=$(CFLAGS_ECORE_DRIVER)))
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_dev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_cxt.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_l2.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_sp_commands.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_init_fw_funcs.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_spq.c
@@ -78,6 +79,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_mcp.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_int.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/bcm_osal.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
 
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index c9c21a6..8c8e8b4 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -251,6 +251,12 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 	return p_chain->page_cnt;
 }
 
+static OSAL_INLINE
+dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
+{
+	return p_chain->pbl.p_phys_table;
+}
+
 /**
  * @brief ecore_chain_advance_page -
  *
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
new file mode 100644
index 0000000..8d713e7
--- /dev/null
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -0,0 +1,1608 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "bcm_osal.h"
+
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_hsi_eth.h"
+#include "ecore_chain.h"
+#include "ecore_spq.h"
+#include "ecore_init_fw_funcs.h"
+#include "ecore_cxt.h"
+#include "ecore_l2.h"
+#include "ecore_sp_commands.h"
+#include "ecore_gtt_reg_addr.h"
+#include "ecore_iro.h"
+#include "reg_addr.h"
+#include "ecore_int.h"
+#include "ecore_hw.h"
+#include "ecore_mcp.h"
+
+#define ECORE_MAX_SGES_NUM 16
+#define CRC32_POLY 0x1edc6f41
+
+enum _ecore_status_t
+ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
+			 struct ecore_sp_vport_start_params *p_params)
+{
+	struct vport_start_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	u8 abs_vport_id = 0;
+	u16 rx_mode = 0;
+
+	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_params->opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_VPORT_START,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.vport_start;
+	p_ramrod->vport_id = abs_vport_id;
+
+	p_ramrod->mtu = OSAL_CPU_TO_LE16(p_params->mtu);
+	p_ramrod->inner_vlan_removal_en = p_params->remove_inner_vlan;
+	p_ramrod->handle_ptp_pkts = p_params->handle_ptp_pkts;
+	p_ramrod->drop_ttl0_en = p_params->drop_ttl0;
+	p_ramrod->untagged = p_params->only_untagged;
+	p_ramrod->zero_placement_offset = p_params->zero_placement_offset;
+
+	SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_UCAST_DROP_ALL, 1);
+	SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_MCAST_DROP_ALL, 1);
+
+	p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(rx_mode);
+
+	/* TPA related fields */
+	OSAL_MEMSET(&p_ramrod->tpa_param, 0,
+		    sizeof(struct eth_vport_tpa_param));
+	p_ramrod->tpa_param.max_buff_num = p_params->max_buffers_per_cqe;
+
+	switch (p_params->tpa_mode) {
+	case ECORE_TPA_MODE_GRO:
+		p_ramrod->tpa_param.tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+		p_ramrod->tpa_param.tpa_max_size = (u16)-1;
+		p_ramrod->tpa_param.tpa_min_size_to_cont = p_params->mtu / 2;
+		p_ramrod->tpa_param.tpa_min_size_to_start = p_params->mtu / 2;
+		p_ramrod->tpa_param.tpa_ipv4_en_flg = 1;
+		p_ramrod->tpa_param.tpa_ipv6_en_flg = 1;
+		p_ramrod->tpa_param.tpa_pkt_split_flg = 1;
+		p_ramrod->tpa_param.tpa_gro_consistent_flg = 1;
+		break;
+	default:
+		break;
+	}
+
+	p_ramrod->tx_switching_en = p_params->tx_switching;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+		p_ramrod->tx_switching_en = 0;
+#endif
+
+	/* Software Function ID in hwfn (PFs are 0 - 15, VFs are 16 - 135) */
+	p_ramrod->sw_fid = ecore_concrete_to_sw_fid(p_hwfn->p_dev,
+						    p_params->concrete_fid);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t
+ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
+		     struct ecore_sp_vport_start_params *p_params)
+{
+	return ecore_sp_eth_vport_start(p_hwfn, p_params);
+}
+
+static enum _ecore_status_t
+ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
+			  struct vport_update_ramrod_data *p_ramrod,
+			  struct ecore_rss_params *p_rss)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct eth_vport_rss_config *p_config;
+	u16 abs_l2_queue = 0;
+	int i;
+
+	if (!p_rss) {
+		p_ramrod->common.update_rss_flg = 0;
+		return rc;
+	}
+	p_config = &p_ramrod->rss_config;
+
+	OSAL_BUILD_BUG_ON(ECORE_RSS_IND_TABLE_SIZE !=
+			  ETH_RSS_IND_TABLE_ENTRIES_NUM);
+
+	rc = ecore_fw_rss_eng(p_hwfn, p_rss->rss_eng_id, &p_config->rss_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod->common.update_rss_flg = p_rss->update_rss_config;
+	p_config->update_rss_capabilities = p_rss->update_rss_capabilities;
+	p_config->update_rss_ind_table = p_rss->update_rss_ind_table;
+	p_config->update_rss_key = p_rss->update_rss_key;
+
+	p_config->rss_mode = p_rss->rss_enable ?
+	    ETH_VPORT_RSS_MODE_REGULAR : ETH_VPORT_RSS_MODE_DISABLED;
+
+	p_config->capabilities = 0;
+
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV4_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV4));
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV6_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV6));
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV4_TCP_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV4_TCP));
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV6_TCP_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV6_TCP));
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV4_UDP_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV4_UDP));
+	SET_FIELD(p_config->capabilities,
+		  ETH_VPORT_RSS_CONFIG_IPV6_UDP_CAPABILITY,
+		  !!(p_rss->rss_caps & ECORE_RSS_IPV6_UDP));
+	p_config->tbl_size = p_rss->rss_table_size_log;
+	p_config->capabilities = OSAL_CPU_TO_LE16(p_config->capabilities);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "update rss flag %d, rss_mode = %d, update_caps = %d, capabilities = %d, update_ind = %d, update_rss_key = %d\n",
+		   p_ramrod->common.update_rss_flg,
+		   p_config->rss_mode,
+		   p_config->update_rss_capabilities,
+		   p_config->capabilities,
+		   p_config->update_rss_ind_table, p_config->update_rss_key);
+
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
+		rc = ecore_fw_l2_queue(p_hwfn,
+				       (u8)p_rss->rss_ind_table[i],
+				       &abs_l2_queue);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
+			   i, p_config->indirection_table[i]);
+	}
+
+	for (i = 0; i < 10; i++)
+		p_config->rss_key[i] = OSAL_CPU_TO_LE32(p_rss->rss_key[i]);
+
+	return rc;
+}
+
+static void
+ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
+			    struct vport_update_ramrod_data *p_ramrod,
+			    struct ecore_filter_accept_flags flags)
+{
+	p_ramrod->common.update_rx_mode_flg = flags.update_rx_mode_config;
+	p_ramrod->common.update_tx_mode_flg = flags.update_tx_mode_config;
+
+#ifndef ASIC_ONLY
+	/* On B0 emulation we cannot enable Tx, since this would cause writes
+	 * to PVFC HW block which isn't implemented in emulation.
+	 */
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Non-Asic - prevent Tx mode in vport update\n");
+		p_ramrod->common.update_tx_mode_flg = 0;
+	}
+#endif
+
+	/* Set Rx mode accept flags */
+	if (p_ramrod->common.update_rx_mode_flg) {
+		__le16 *state = &p_ramrod->rx_mode.state;
+		u8 accept_filter = flags.rx_accept_filter;
+
+/*
+ *		SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
+ *			  !!(accept_filter & ECORE_ACCEPT_NONE));
+ */
+/*
+ *		SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL,
+ *			  (!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) &&
+ *			   !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+ */
+		SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
+			  !(!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) ||
+			    !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+
+		SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED,
+			  !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED));
+/*
+ *		SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
+ *			  !!(accept_filter & ECORE_ACCEPT_NONE));
+ */
+		SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
+			  !(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) ||
+			    !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
+
+		SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL,
+			  (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
+			   !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
+
+		SET_FIELD(*state, ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL,
+			  !!(accept_filter & ECORE_ACCEPT_BCAST));
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->rx_mode.state);
+	}
+
+	/* Set Tx mode accept flags */
+	if (p_ramrod->common.update_tx_mode_flg) {
+		__le16 *state = &p_ramrod->tx_mode.state;
+		u8 accept_filter = flags.tx_accept_filter;
+
+		SET_FIELD(*state, ETH_VPORT_TX_MODE_UCAST_DROP_ALL,
+			  !!(accept_filter & ECORE_ACCEPT_NONE));
+
+		SET_FIELD(*state, ETH_VPORT_TX_MODE_MCAST_DROP_ALL,
+			  !!(accept_filter & ECORE_ACCEPT_NONE));
+
+		SET_FIELD(*state, ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL,
+			  (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
+			   !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
+
+		SET_FIELD(*state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL,
+			  !!(accept_filter & ECORE_ACCEPT_BCAST));
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->tx_mode.state);
+	}
+}
+
+static void
+ecore_sp_vport_update_sge_tpa(struct ecore_hwfn *p_hwfn,
+			      struct vport_update_ramrod_data *p_ramrod,
+			      struct ecore_sge_tpa_params *p_params)
+{
+	struct eth_vport_tpa_param *p_tpa;
+
+	if (!p_params) {
+		p_ramrod->common.update_tpa_param_flg = 0;
+		p_ramrod->common.update_tpa_en_flg = 0;
+		return;
+	}
+
+	p_ramrod->common.update_tpa_en_flg = p_params->update_tpa_en_flg;
+	p_tpa = &p_ramrod->tpa_param;
+	p_tpa->tpa_ipv4_en_flg = p_params->tpa_ipv4_en_flg;
+	p_tpa->tpa_ipv6_en_flg = p_params->tpa_ipv6_en_flg;
+	p_tpa->tpa_ipv4_tunn_en_flg = p_params->tpa_ipv4_tunn_en_flg;
+	p_tpa->tpa_ipv6_tunn_en_flg = p_params->tpa_ipv6_tunn_en_flg;
+
+	p_ramrod->common.update_tpa_param_flg = p_params->update_tpa_param_flg;
+	p_tpa->max_buff_num = p_params->max_buffers_per_cqe;
+	p_tpa->tpa_pkt_split_flg = p_params->tpa_pkt_split_flg;
+	p_tpa->tpa_hdr_data_split_flg = p_params->tpa_hdr_data_split_flg;
+	p_tpa->tpa_gro_consistent_flg = p_params->tpa_gro_consistent_flg;
+	p_tpa->tpa_max_aggs_num = p_params->tpa_max_aggs_num;
+	p_tpa->tpa_max_size = p_params->tpa_max_size;
+	p_tpa->tpa_min_size_to_start = p_params->tpa_min_size_to_start;
+	p_tpa->tpa_min_size_to_cont = p_params->tpa_min_size_to_cont;
+}
+
+static void
+ecore_sp_update_mcast_bin(struct ecore_hwfn *p_hwfn,
+			  struct vport_update_ramrod_data *p_ramrod,
+			  struct ecore_sp_vport_update_params *p_params)
+{
+	int i;
+
+	OSAL_MEMSET(&p_ramrod->approx_mcast.bins, 0,
+		    sizeof(p_ramrod->approx_mcast.bins));
+
+	if (!p_params->update_approx_mcast_flg)
+		return;
+
+	p_ramrod->common.update_approx_mcast_flg = 1;
+	for (i = 0; i < ETH_MULTICAST_MAC_BINS_IN_REGS; i++) {
+		u32 *p_bins = (u32 *)p_params->bins;
+
+		p_ramrod->approx_mcast.bins[i] = OSAL_CPU_TO_LE32(p_bins[i]);
+	}
+}
+
+enum _ecore_status_t
+ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
+		      struct ecore_sp_vport_update_params *p_params,
+		      enum spq_mode comp_mode,
+		      struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct ecore_rss_params *p_rss_params = p_params->rss_params;
+	struct vport_update_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	u8 abs_vport_id = 0, val;
+	u16 wordval;
+
+	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_params->opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_data;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_VPORT_UPDATE,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Copy input params to ramrod according to FW struct */
+	p_ramrod = &p_ent->ramrod.vport_update;
+
+	p_ramrod->common.vport_id = abs_vport_id;
+
+	p_ramrod->common.rx_active_flg = p_params->vport_active_rx_flg;
+	p_ramrod->common.tx_active_flg = p_params->vport_active_tx_flg;
+	val = p_params->update_vport_active_rx_flg;
+	p_ramrod->common.update_rx_active_flg = val;
+	val = p_params->update_vport_active_tx_flg;
+	p_ramrod->common.update_tx_active_flg = val;
+	val = p_params->update_inner_vlan_removal_flg;
+	p_ramrod->common.update_inner_vlan_removal_en_flg = val;
+	val = p_params->inner_vlan_removal_flg;
+	p_ramrod->common.inner_vlan_removal_en = val;
+	val = p_params->silent_vlan_removal_flg;
+	p_ramrod->common.silent_vlan_removal_en = val;
+	val = p_params->update_tx_switching_flg;
+	p_ramrod->common.update_tx_switching_en_flg = val;
+	val = p_params->update_default_vlan_enable_flg;
+	p_ramrod->common.update_default_vlan_en_flg = val;
+	p_ramrod->common.default_vlan_en = p_params->default_vlan_enable_flg;
+	val = p_params->update_default_vlan_flg;
+	p_ramrod->common.update_default_vlan_flg = val;
+	wordval = p_params->default_vlan;
+	p_ramrod->common.default_vlan = OSAL_CPU_TO_LE16(wordval);
+
+	p_ramrod->common.tx_switching_en = p_params->tx_switching_flg;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
+		if (p_ramrod->common.tx_switching_en ||
+		    p_ramrod->common.update_tx_switching_en_flg) {
+			DP_NOTICE(p_hwfn, false,
+				  "FPGA - why are we seeing tx-switching? Overriding it\n");
+			p_ramrod->common.tx_switching_en = 0;
+			p_ramrod->common.update_tx_switching_en_flg = 1;
+		}
+#endif
+
+	val = p_params->update_anti_spoofing_en_flg;
+	p_ramrod->common.update_anti_spoofing_en_flg = val;
+	p_ramrod->common.anti_spoofing_en = p_params->anti_spoofing_en;
+	p_ramrod->common.accept_any_vlan = p_params->accept_any_vlan;
+	val = p_params->update_accept_any_vlan_flg;
+	p_ramrod->common.update_accept_any_vlan_flg = val;
+
+	rc = ecore_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params);
+	if (rc != ECORE_SUCCESS) {
+		/* Return spq entry which is taken in ecore_sp_init_request() */
+		ecore_spq_return_entry(p_hwfn, p_ent);
+		return rc;
+	}
+
+	/* Update mcast bins for VFs, PF doesn't use this functionality */
+	ecore_sp_update_mcast_bin(p_hwfn, p_ramrod, p_params);
+
+	ecore_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags);
+	ecore_sp_vport_update_sge_tpa(p_hwfn, p_ramrod,
+				      p_params->sge_tpa_params);
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
+					 u16 opaque_fid, u8 vport_id)
+{
+	struct vport_stop_ramrod_data *p_ramrod;
+	struct ecore_sp_init_data init_data;
+	struct ecore_spq_entry *p_ent;
+	enum _ecore_status_t rc;
+	u8 abs_vport_id = 0;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_VPORT_STOP,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.vport_stop;
+	p_ramrod->vport_id = abs_vport_id;
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t
+ecore_filter_accept_cmd(struct ecore_dev *p_dev,
+			u8 vport,
+			struct ecore_filter_accept_flags accept_flags,
+			u8 update_accept_any_vlan,
+			u8 accept_any_vlan,
+			enum spq_mode comp_mode,
+			struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct ecore_sp_vport_update_params update_params;
+	int i, rc;
+
+	/* Prepare and send the vport rx_mode change */
+	OSAL_MEMSET(&update_params, 0, sizeof(update_params));
+	update_params.vport_id = vport;
+	update_params.accept_flags = accept_flags;
+	update_params.update_accept_any_vlan_flg = update_accept_any_vlan;
+	update_params.accept_any_vlan = accept_any_vlan;
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+		update_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+		rc = ecore_sp_vport_update(p_hwfn, &update_params,
+					   comp_mode, p_comp_data);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_dev, "Update rx_mode failed %d\n", rc);
+			return rc;
+		}
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Accept filter configured, flags = [Rx]%x [Tx]%x\n",
+			   accept_flags.rx_accept_filter,
+			   accept_flags.tx_accept_filter);
+
+		if (update_accept_any_vlan)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+				   "accept_any_vlan=%d configured\n",
+				   accept_any_vlan);
+	}
+
+	return 0;
+}
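
A hedged caller-side sketch of ecore_filter_accept_cmd(), here configuring a
promiscuous-like Rx mode on vport 0; the exact flag combination the core
driver uses may differ:

static enum _ecore_status_t set_promisc_sketch(struct ecore_dev *p_dev)
{
	struct ecore_filter_accept_flags flags;

	OSAL_MEMSET(&flags, 0, sizeof(flags));
	flags.update_rx_mode_config = 1;
	flags.rx_accept_filter = ECORE_ACCEPT_UCAST_MATCHED |
				 ECORE_ACCEPT_UCAST_UNMATCHED |
				 ECORE_ACCEPT_MCAST_MATCHED |
				 ECORE_ACCEPT_BCAST;

	return ecore_filter_accept_cmd(p_dev, 0, flags, 0, 0,
				       ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);
}
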
+
+static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
+				       struct ecore_hw_cid_data *p_cid_data)
+{
+	if (!p_cid_data->b_cid_allocated)
+		return;
+
+	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
+	p_cid_data->b_cid_allocated = false;
+}
+
+enum _ecore_status_t
+ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			      u16 opaque_fid,
+			      u32 cid,
+			      u16 rx_queue_id,
+			      u8 vport_id,
+			      u8 stats_id,
+			      u16 sb,
+			      u8 sb_index,
+			      u16 bd_max_bytes,
+			      dma_addr_t bd_chain_phys_addr,
+			      dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size)
+{
+	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+
+	/* Store information for the stop */
+	p_rx_cid->cid = cid;
+	p_rx_cid->opaque_fid = opaque_fid;
+	p_rx_cid->vport_id = vport_id;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   opaque_fid, cid, rx_queue_id, vport_id, sb);
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = cid;
+	init_data.opaque_fid = opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_RX_QUEUE_START,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_queue_start;
+
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(sb);
+	p_ramrod->sb_index = sb_index;
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->stats_counter_id = stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->complete_cqe_flg = 0;
+	p_ramrod->complete_event_flg = 1;
+
+	p_ramrod->bd_max_bytes = OSAL_CPU_TO_LE16(bd_max_bytes);
+	DMA_REGPAIR_LE(p_ramrod->bd_base, bd_chain_phys_addr);
+
+	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
+	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+						 u16 opaque_fid,
+						 u8 rx_queue_id,
+						 u8 vport_id,
+						 u8 stats_id,
+						 u16 sb,
+						 u8 sb_index,
+						 u16 bd_max_bytes,
+						 dma_addr_t bd_chain_phys_addr,
+						 dma_addr_t cqe_pbl_addr,
+						 u16 cqe_pbl_size,
+						 void OSAL_IOMEM **pp_prod)
+{
+	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+	u8 abs_stats_id = 0;
+	u16 abs_l2_queue = 0;
+	enum _ecore_status_t rc;
+	u64 init_prod_val = 0;
+
+	rc = ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_l2_queue);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*pp_prod = (u8 OSAL_IOMEM *) p_hwfn->regview +
+	    GTT_BAR0_MAP_REG_MSDM_RAM + MSTORM_PRODS_OFFSET(abs_l2_queue);
+
+	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u64),
+			  (u32 *)(&init_prod_val));
+
+	/* Allocate a CID for the queue */
+	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_rx_cid->cid);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+		return rc;
+	}
+	p_rx_cid->b_cid_allocated = true;
+
+	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
+					   opaque_fid,
+					   p_rx_cid->cid,
+					   rx_queue_id,
+					   vport_id,
+					   abs_stats_id,
+					   sb,
+					   sb_index,
+					   bd_max_bytes,
+					   bd_chain_phys_addr,
+					   cqe_pbl_addr, cqe_pbl_size);
+
+	if (rc != ECORE_SUCCESS)
+		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
+			      u16 rx_queue_id,
+			      u8 num_rxqs,
+			      u8 complete_cqe_flg,
+			      u8 complete_event_flg,
+			      enum spq_mode comp_mode,
+			      struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	struct ecore_hw_cid_data *p_rx_cid;
+	u16 qid, abs_rx_q_id = 0;
+	u8 i;
+
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_data;
+
+	for (i = 0; i < num_rxqs; i++) {
+		qid = rx_queue_id + i;
+		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+
+		/* Get SPQ entry */
+		init_data.cid = p_rx_cid->cid;
+		init_data.opaque_fid = p_rx_cid->opaque_fid;
+
+		rc = ecore_sp_init_request(p_hwfn, &p_ent,
+					   ETH_RAMROD_RX_QUEUE_UPDATE,
+					   PROTOCOLID_ETH, &init_data);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_ramrod = &p_ent->ramrod.rx_queue_update;
+
+		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
+		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->complete_cqe_flg = complete_cqe_flg;
+		p_ramrod->complete_event_flg = complete_event_flg;
+
+		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   u16 rx_queue_id,
+			   bool eq_completion_only, bool cqe_completion)
+{
+	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_rx_cid->cid;
+	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_RX_QUEUE_STOP,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_queue_stop;
+
+	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
+	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+
+	/* Cleaning the queue requires the completion to arrive there.
+	 * In addition, VFs require the answer to arrive as an EQE on the PF.
+	 */
+	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
+					  p_hwfn->hw_info.opaque_fid) &&
+				      !eq_completion_only) || cqe_completion;
+	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
+					 p_hwfn->hw_info.opaque_fid) ||
+	    eq_completion_only;
+
+	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			      u16 opaque_fid,
+			      u16 tx_queue_id,
+			      u32 cid,
+			      u8 vport_id,
+			      u8 stats_id,
+			      u16 sb,
+			      u8 sb_index,
+			      dma_addr_t pbl_addr,
+			      u16 pbl_size,
+			      union ecore_qm_pq_params *p_pq_params)
+{
+	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+	u16 pq_id, abs_tx_q_id = 0;
+	u8 abs_vport_id;
+
+	/* Store information for the stop */
+	p_tx_cid->cid = cid;
+	p_tx_cid->opaque_fid = opaque_fid;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, tx_queue_id, &abs_tx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = cid;
+	init_data.opaque_fid = opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_TX_QUEUE_START,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.tx_queue_start;
+	p_ramrod->vport_id = abs_vport_id;
+
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(sb);
+	p_ramrod->sb_index = sb_index;
+	p_ramrod->stats_counter_id = stats_id;
+
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_q_id);
+
+	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
+	p_ramrod->pbl_base_addr.hi = DMA_HI_LE(pbl_addr);
+	p_ramrod->pbl_base_addr.lo = DMA_LO_LE(pbl_addr);
+
+	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
+	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+						 u16 opaque_fid,
+						 u16 tx_queue_id,
+						 u8 vport_id,
+						 u8 stats_id,
+						 u16 sb,
+						 u8 sb_index,
+						 dma_addr_t pbl_addr,
+						 u16 pbl_size,
+						 void OSAL_IOMEM **pp_doorbell)
+{
+	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+	union ecore_qm_pq_params pq_params;
+	enum _ecore_status_t rc;
+	u8 abs_stats_id = 0;
+
+	rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
+
+	/* Allocate a CID for the queue */
+	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+		return rc;
+	}
+	p_tx_cid->b_cid_allocated = true;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   opaque_fid, p_tx_cid->cid, tx_queue_id, vport_id, sb);
+
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
+					   opaque_fid,
+					   tx_queue_id,
+					   p_tx_cid->cid,
+					   vport_id,
+					   abs_stats_id,
+					   sb,
+					   sb_index,
+					   pbl_addr, pbl_size, &pq_params);
+
+	*pp_doorbell = (u8 OSAL_IOMEM *) p_hwfn->doorbells +
+	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+
+	if (rc != ECORE_SUCCESS)
+		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
+{
+	return ECORE_NOTIMPL;
+}
+
+enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+						u16 tx_queue_id)
+{
+	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+	struct tx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_tx_cid->cid;
+	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_TX_QUEUE_STOP,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.tx_queue_stop;
+
+	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	return rc;
+}
+
+static enum eth_filter_action
+ecore_filter_action(enum ecore_filter_opcode opcode)
+{
+	enum eth_filter_action action = MAX_ETH_FILTER_ACTION;
+
+	switch (opcode) {
+	case ECORE_FILTER_ADD:
+		action = ETH_FILTER_ACTION_ADD;
+		break;
+	case ECORE_FILTER_REMOVE:
+		action = ETH_FILTER_ACTION_REMOVE;
+		break;
+	case ECORE_FILTER_FLUSH:
+		action = ETH_FILTER_ACTION_REMOVE_ALL;
+		break;
+	default:
+		action = MAX_ETH_FILTER_ACTION;
+	}
+
+	return action;
+}
+
+static void ecore_set_fw_mac_addr(__le16 *fw_msb,
+				  __le16 *fw_mid, __le16 *fw_lsb, u8 *mac)
+{
+	((u8 *)fw_msb)[0] = mac[1];
+	((u8 *)fw_msb)[1] = mac[0];
+	((u8 *)fw_mid)[0] = mac[3];
+	((u8 *)fw_mid)[1] = mac[2];
+	((u8 *)fw_lsb)[0] = mac[5];
+	((u8 *)fw_lsb)[1] = mac[4];
+}
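
The helper stores the MAC in the byte-swapped 16-bit words the firmware
expects. A worked example for a sample address:

static void mac_layout_example(void)
{
	u8 mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	__le16 msb, mid, lsb;

	ecore_set_fw_mac_addr(&msb, &mid, &lsb, mac);
	/* In memory: msb = {0x11, 0x00}, mid = {0x33, 0x22},
	 * lsb = {0x55, 0x44} - each word holds a swapped byte pair.
	 */
}
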
+
+static enum _ecore_status_t
+ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn,
+			  u16 opaque_fid,
+			  struct ecore_filter_ucast *p_filter_cmd,
+			  struct vport_filter_update_ramrod_data **pp_ramrod,
+			  struct ecore_spq_entry **pp_ent,
+			  enum spq_mode comp_mode,
+			  struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct vport_filter_update_ramrod_data *p_ramrod;
+	u8 vport_to_add_to = 0, vport_to_remove_from = 0;
+	struct eth_filter_cmd *p_first_filter;
+	struct eth_filter_cmd *p_second_filter;
+	struct ecore_sp_init_data init_data;
+	enum eth_filter_action action;
+	enum _ecore_status_t rc;
+
+	rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_remove_from,
+			    &vport_to_remove_from);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_add_to,
+			    &vport_to_add_to);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_data;
+
+	rc = ecore_sp_init_request(p_hwfn, pp_ent,
+				   ETH_RAMROD_FILTERS_UPDATE,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*pp_ramrod = &(*pp_ent)->ramrod.vport_filter_update;
+	p_ramrod = *pp_ramrod;
+	p_ramrod->filter_cmd_hdr.rx = p_filter_cmd->is_rx_filter ? 1 : 0;
+	p_ramrod->filter_cmd_hdr.tx = p_filter_cmd->is_tx_filter ? 1 : 0;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Non-Asic - prevent Tx filters\n");
+		p_ramrod->filter_cmd_hdr.tx = 0;
+	}
+#endif
+
+	switch (p_filter_cmd->opcode) {
+	case ECORE_FILTER_REPLACE:
+	case ECORE_FILTER_MOVE:
+		p_ramrod->filter_cmd_hdr.cmd_cnt = 2;
+		break;
+	default:
+		p_ramrod->filter_cmd_hdr.cmd_cnt = 1;
+		break;
+	}
+
+	p_first_filter = &p_ramrod->filter_cmds[0];
+	p_second_filter = &p_ramrod->filter_cmds[1];
+
+	switch (p_filter_cmd->type) {
+	case ECORE_FILTER_MAC:
+		p_first_filter->type = ETH_FILTER_TYPE_MAC;
+		break;
+	case ECORE_FILTER_VLAN:
+		p_first_filter->type = ETH_FILTER_TYPE_VLAN;
+		break;
+	case ECORE_FILTER_MAC_VLAN:
+		p_first_filter->type = ETH_FILTER_TYPE_PAIR;
+		break;
+	case ECORE_FILTER_INNER_MAC:
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC;
+		break;
+	case ECORE_FILTER_INNER_VLAN:
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_VLAN;
+		break;
+	case ECORE_FILTER_INNER_PAIR:
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_PAIR;
+		break;
+	case ECORE_FILTER_INNER_MAC_VNI_PAIR:
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR;
+		break;
+	case ECORE_FILTER_MAC_VNI_PAIR:
+		p_first_filter->type = ETH_FILTER_TYPE_MAC_VNI_PAIR;
+		break;
+	case ECORE_FILTER_VNI:
+		p_first_filter->type = ETH_FILTER_TYPE_VNI;
+		break;
+	}
+
+	if ((p_first_filter->type == ETH_FILTER_TYPE_MAC) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_INNER_MAC) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_INNER_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_MAC_VNI_PAIR))
+		ecore_set_fw_mac_addr(&p_first_filter->mac_msb,
+				      &p_first_filter->mac_mid,
+				      &p_first_filter->mac_lsb,
+				      (u8 *)p_filter_cmd->mac);
+
+	if ((p_first_filter->type == ETH_FILTER_TYPE_VLAN) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_INNER_VLAN) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_INNER_PAIR))
+		p_first_filter->vlan_id = OSAL_CPU_TO_LE16(p_filter_cmd->vlan);
+
+	if ((p_first_filter->type == ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_MAC_VNI_PAIR) ||
+	    (p_first_filter->type == ETH_FILTER_TYPE_VNI))
+		p_first_filter->vni = OSAL_CPU_TO_LE32(p_filter_cmd->vni);
+
+	if (p_filter_cmd->opcode == ECORE_FILTER_MOVE) {
+		p_second_filter->type = p_first_filter->type;
+		p_second_filter->mac_msb = p_first_filter->mac_msb;
+		p_second_filter->mac_mid = p_first_filter->mac_mid;
+		p_second_filter->mac_lsb = p_first_filter->mac_lsb;
+		p_second_filter->vlan_id = p_first_filter->vlan_id;
+		p_second_filter->vni = p_first_filter->vni;
+
+		p_first_filter->action = ETH_FILTER_ACTION_REMOVE;
+
+		p_first_filter->vport_id = vport_to_remove_from;
+
+		p_second_filter->action = ETH_FILTER_ACTION_ADD;
+		p_second_filter->vport_id = vport_to_add_to;
+	} else if (p_filter_cmd->opcode == ECORE_FILTER_REPLACE) {
+		p_first_filter->vport_id = vport_to_add_to;
+		OSAL_MEMCPY(p_second_filter, p_first_filter,
+			    sizeof(*p_second_filter));
+		p_first_filter->action = ETH_FILTER_ACTION_REMOVE_ALL;
+		p_second_filter->action = ETH_FILTER_ACTION_ADD;
+	} else {
+		action = ecore_filter_action(p_filter_cmd->opcode);
+
+		if (action == MAX_ETH_FILTER_ACTION) {
+			DP_NOTICE(p_hwfn, true,
+				  "%d is not supported yet\n",
+				  p_filter_cmd->opcode);
+			return ECORE_NOTIMPL;
+		}
+
+		p_first_filter->action = action;
+		p_first_filter->vport_id =
+		    (p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
+		    vport_to_remove_from : vport_to_add_to;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
+			  u16 opaque_fid,
+			  struct ecore_filter_ucast *p_filter_cmd,
+			  enum spq_mode comp_mode,
+			  struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct vport_filter_update_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct eth_filter_cmd_header *p_header;
+	enum _ecore_status_t rc;
+
+	rc = ecore_filter_ucast_common(p_hwfn, opaque_fid, p_filter_cmd,
+				       &p_ramrod, &p_ent,
+				       comp_mode, p_comp_data);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "Unicast filter command failed %d\n", rc);
+		return rc;
+	}
+	p_header = &p_ramrod->filter_cmd_hdr;
+	p_header->assert_on_error = p_filter_cmd->assert_on_error;
+
+	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "Unicast filter ADD command failed %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Unicast filter configured, opcode = %s, type = %s, cmd_cnt = %d, is_rx_filter = %d, is_tx_filter = %d\n",
+		   (p_filter_cmd->opcode == ECORE_FILTER_ADD) ? "ADD" :
+		   ((p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
+		    "REMOVE" :
+		    ((p_filter_cmd->opcode == ECORE_FILTER_MOVE) ?
+		     "MOVE" : "REPLACE")),
+		   (p_filter_cmd->type == ECORE_FILTER_MAC) ? "MAC" :
+		   ((p_filter_cmd->type == ECORE_FILTER_VLAN) ?
+		    "VLAN" : "MAC & VLAN"),
+		   p_ramrod->filter_cmd_hdr.cmd_cnt,
+		   p_filter_cmd->is_rx_filter, p_filter_cmd->is_tx_filter);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "vport_to_add_to = %d, vport_to_remove_from = %d, mac = %02x:%02x:%02x:%02x:%02x:%02x, vlan = %d\n",
+		   p_filter_cmd->vport_to_add_to,
+		   p_filter_cmd->vport_to_remove_from,
+		   p_filter_cmd->mac[0], p_filter_cmd->mac[1],
+		   p_filter_cmd->mac[2], p_filter_cmd->mac[3],
+		   p_filter_cmd->mac[4], p_filter_cmd->mac[5],
+		   p_filter_cmd->vlan);
+
+	return ECORE_SUCCESS;
+}
+
+/*******************************************************************************
+ * Description:
+ *         Calculates CRC-32 on a buffer
+ *         Note: crc32_length MUST be aligned to 8
+ * Return:
+ *         The computed CRC-32 value (the seed is returned on invalid input)
+ ******************************************************************************/
+static u32 ecore_calc_crc32c(u8 *crc32_packet,
+			     u32 crc32_length, u32 crc32_seed, u8 complement)
+{
+	u32 byte = 0, bit = 0, crc32_result = crc32_seed;
+	u8 msb = 0, current_byte = 0;
+
+	if ((crc32_packet == OSAL_NULL) ||
+	    (crc32_length == 0) || ((crc32_length % 8) != 0)) {
+		return crc32_result;
+	}
+
+	for (byte = 0; byte < crc32_length; byte++) {
+		current_byte = crc32_packet[byte];
+		for (bit = 0; bit < 8; bit++) {
+			msb = (u8)(crc32_result >> 31);
+			crc32_result = crc32_result << 1;
+			if (msb != (0x1 & (current_byte >> bit))) {
+				crc32_result = crc32_result ^ CRC32_POLY;
+				crc32_result |= 1;
+			}
+		}
+	}
+
+	return crc32_result;
+}
+
+static OSAL_INLINE u32 ecore_crc32c_le(u32 seed, u8 *mac, u32 len)
+{
+	u32 packet_buf[2] = { 0 };
+
+	OSAL_MEMCPY((u8 *)(&packet_buf[0]), &mac[0], 6);
+	return ecore_calc_crc32c((u8 *)packet_buf, 8, seed, 0);
+}
+
+u8 ecore_mcast_bin_from_mac(u8 *mac)
+{
+	u32 crc = ecore_crc32c_le(ETH_MULTICAST_BIN_FROM_MAC_SEED,
+				  mac, ETH_ALEN);
+
+	return crc & 0xff;
+}
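
Each multicast MAC hashes to one of 256 bins, and the ADD path sets those
bins in an approximation vector. A sketch of that consumption, mirroring the
ADD branch of ecore_sp_eth_filter_mcast() below:

static void mcast_bins_sketch(u8 (*macs)[ETH_ALEN], int num)
{
	unsigned long bins[ETH_MULTICAST_MAC_BINS_IN_REGS];
	int i;

	OSAL_MEMSET(bins, 0, sizeof(bins));
	for (i = 0; i < num; i++)
		OSAL_SET_BIT(ecore_mcast_bin_from_mac(macs[i]), bins);
}
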
+
+static enum _ecore_status_t
+ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
+			  u16 opaque_fid,
+			  struct ecore_filter_mcast *p_filter_cmd,
+			  enum spq_mode comp_mode,
+			  struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct vport_update_ramrod_data *p_ramrod = OSAL_NULL;
+	unsigned long bins[ETH_MULTICAST_MAC_BINS_IN_REGS];
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	enum _ecore_status_t rc;
+	u8 abs_vport_id = 0;
+	int i;
+
+	rc = ecore_fw_vport(p_hwfn,
+			    (p_filter_cmd->opcode == ECORE_FILTER_ADD) ?
+			    p_filter_cmd->vport_to_add_to :
+			    p_filter_cmd->vport_to_remove_from, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_data;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_VPORT_UPDATE,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "Multi-cast command failed %d\n", rc);
+		return rc;
+	}
+
+	p_ramrod = &p_ent->ramrod.vport_update;
+	p_ramrod->common.update_approx_mcast_flg = 1;
+
+	/* explicitly clear out the entire vector */
+	OSAL_MEMSET(&p_ramrod->approx_mcast.bins,
+		    0, sizeof(p_ramrod->approx_mcast.bins));
+	OSAL_MEMSET(bins, 0, sizeof(unsigned long) *
+		    ETH_MULTICAST_MAC_BINS_IN_REGS);
+
+	if (p_filter_cmd->opcode == ECORE_FILTER_ADD) {
+		/* The filter ADD op is an explicit set op; it removes
+		 * any existing filters for the vport.
+		 */
+		for (i = 0; i < p_filter_cmd->num_mc_addrs; i++) {
+			u32 bit;
+
+			bit = ecore_mcast_bin_from_mac(p_filter_cmd->mac[i]);
+			OSAL_SET_BIT(bit, bins);
+		}
+
+		/* Convert to correct endianness */
+		for (i = 0; i < ETH_MULTICAST_MAC_BINS_IN_REGS; i++) {
+			struct vport_update_ramrod_mcast *p_ramrod_bins;
+			u32 *p_bins = (u32 *)bins;
+
+			p_ramrod_bins = &p_ramrod->approx_mcast;
+			p_ramrod_bins->bins[i] = OSAL_CPU_TO_LE32(p_bins[i]);
+		}
+	}
+
+	p_ramrod->common.vport_id = abs_vport_id;
+
+	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Multicast filter command failed %d\n", rc);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
+		       struct ecore_filter_mcast *p_filter_cmd,
+		       enum spq_mode comp_mode,
+		       struct ecore_spq_comp_cb *p_comp_data)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	int i;
+
+	/* only ADD and REMOVE operations are supported for multi-cast */
+	if ((p_filter_cmd->opcode != ECORE_FILTER_ADD &&
+	     p_filter_cmd->opcode != ECORE_FILTER_REMOVE) ||
+	    (p_filter_cmd->num_mc_addrs > ECORE_MAX_MC_ADDRS)) {
+		return ECORE_INVAL;
+	}
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+		rc = ecore_sp_eth_filter_mcast(p_hwfn,
+					       p_hwfn->hw_info.opaque_fid,
+					       p_filter_cmd,
+					       comp_mode, p_comp_data);
+		if (rc != ECORE_SUCCESS)
+			break;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
+		       struct ecore_filter_ucast *p_filter_cmd,
+		       enum spq_mode comp_mode,
+		       struct ecore_spq_comp_cb *p_comp_data)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	int i;
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+		rc = ecore_sp_eth_filter_ucast(p_hwfn,
+					       p_hwfn->hw_info.opaque_fid,
+					       p_filter_cmd,
+					       comp_mode, p_comp_data);
+		if (rc != ECORE_SUCCESS)
+			break;
+	}
+
+	return rc;
+}
+
+/* Statistics related code */
+static void __ecore_get_vport_pstats_addrlen(struct ecore_hwfn *p_hwfn,
+					     u32 *p_addr, u32 *p_len,
+					     u16 statistics_bin)
+{
+	*p_addr = BAR0_MAP_REG_PSDM_RAM +
+		    PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+	*p_len = sizeof(struct eth_pstorm_per_queue_stat);
+}
+
+static void __ecore_get_vport_pstats(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_eth_stats *p_stats,
+				     u16 statistics_bin)
+{
+	struct eth_pstorm_per_queue_stat pstats;
+	u32 pstats_addr = 0, pstats_len = 0;
+
+	__ecore_get_vport_pstats_addrlen(p_hwfn, &pstats_addr, &pstats_len,
+					 statistics_bin);
+
+	OSAL_MEMSET(&pstats, 0, sizeof(pstats));
+	ecore_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, pstats_len);
+
+	p_stats->tx_ucast_bytes += HILO_64_REGPAIR(pstats.sent_ucast_bytes);
+	p_stats->tx_mcast_bytes += HILO_64_REGPAIR(pstats.sent_mcast_bytes);
+	p_stats->tx_bcast_bytes += HILO_64_REGPAIR(pstats.sent_bcast_bytes);
+	p_stats->tx_ucast_pkts += HILO_64_REGPAIR(pstats.sent_ucast_pkts);
+	p_stats->tx_mcast_pkts += HILO_64_REGPAIR(pstats.sent_mcast_pkts);
+	p_stats->tx_bcast_pkts += HILO_64_REGPAIR(pstats.sent_bcast_pkts);
+	p_stats->tx_err_drop_pkts += HILO_64_REGPAIR(pstats.error_drop_pkts);
+}
+
+static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_eth_stats *p_stats,
+				     u16 statistics_bin)
+{
+	struct tstorm_per_port_stat tstats;
+	u32 tstats_addr, tstats_len;
+
+	tstats_addr = BAR0_MAP_REG_TSDM_RAM +
+		    TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
+	tstats_len = sizeof(struct tstorm_per_port_stat);
+
+	OSAL_MEMSET(&tstats, 0, sizeof(tstats));
+	ecore_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, tstats_len);
+
+	p_stats->mftag_filter_discards +=
+	    HILO_64_REGPAIR(tstats.mftag_filter_discard);
+	p_stats->mac_filter_discards +=
+	    HILO_64_REGPAIR(tstats.eth_mac_filter_discard);
+}
+
+static void __ecore_get_vport_ustats_addrlen(struct ecore_hwfn *p_hwfn,
+					     u32 *p_addr, u32 *p_len,
+					     u16 statistics_bin)
+{
+	*p_addr = BAR0_MAP_REG_USDM_RAM +
+		    USTORM_QUEUE_STAT_OFFSET(statistics_bin);
+	*p_len = sizeof(struct eth_ustorm_per_queue_stat);
+}
+
+static void __ecore_get_vport_ustats(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_eth_stats *p_stats,
+				     u16 statistics_bin)
+{
+	struct eth_ustorm_per_queue_stat ustats;
+	u32 ustats_addr = 0, ustats_len = 0;
+
+	__ecore_get_vport_ustats_addrlen(p_hwfn, &ustats_addr, &ustats_len,
+					 statistics_bin);
+
+	OSAL_MEMSET(&ustats, 0, sizeof(ustats));
+	ecore_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, ustats_len);
+
+	p_stats->rx_ucast_bytes += HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
+	p_stats->rx_mcast_bytes += HILO_64_REGPAIR(ustats.rcv_mcast_bytes);
+	p_stats->rx_bcast_bytes += HILO_64_REGPAIR(ustats.rcv_bcast_bytes);
+	p_stats->rx_ucast_pkts += HILO_64_REGPAIR(ustats.rcv_ucast_pkts);
+	p_stats->rx_mcast_pkts += HILO_64_REGPAIR(ustats.rcv_mcast_pkts);
+	p_stats->rx_bcast_pkts += HILO_64_REGPAIR(ustats.rcv_bcast_pkts);
+}
+
+static void __ecore_get_vport_mstats_addrlen(struct ecore_hwfn *p_hwfn,
+					     u32 *p_addr, u32 *p_len,
+					     u16 statistics_bin)
+{
+	*p_addr = BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+	*p_len = sizeof(struct eth_mstorm_per_queue_stat);
+}
+
+static void __ecore_get_vport_mstats(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_eth_stats *p_stats,
+				     u16 statistics_bin)
+{
+	struct eth_mstorm_per_queue_stat mstats;
+	u32 mstats_addr = 0, mstats_len = 0;
+
+	__ecore_get_vport_mstats_addrlen(p_hwfn, &mstats_addr, &mstats_len,
+					 statistics_bin);
+
+	OSAL_MEMSET(&mstats, 0, sizeof(mstats));
+	ecore_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, mstats_len);
+
+	p_stats->no_buff_discards += HILO_64_REGPAIR(mstats.no_buff_discard);
+	p_stats->packet_too_big_discard +=
+	    HILO_64_REGPAIR(mstats.packet_too_big_discard);
+	p_stats->ttl0_discard += HILO_64_REGPAIR(mstats.ttl0_discard);
+	p_stats->tpa_coalesced_pkts +=
+	    HILO_64_REGPAIR(mstats.tpa_coalesced_pkts);
+	p_stats->tpa_coalesced_events +=
+	    HILO_64_REGPAIR(mstats.tpa_coalesced_events);
+	p_stats->tpa_aborts_num += HILO_64_REGPAIR(mstats.tpa_aborts_num);
+	p_stats->tpa_coalesced_bytes +=
+	    HILO_64_REGPAIR(mstats.tpa_coalesced_bytes);
+}
+
+static void __ecore_get_vport_port_stats(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_eth_stats *p_stats)
+{
+	struct port_stats port_stats;
+	int j;
+
+	OSAL_MEMSET(&port_stats, 0, sizeof(port_stats));
+
+	ecore_memcpy_from(p_hwfn, p_ptt, &port_stats,
+			  p_hwfn->mcp_info->port_addr +
+			  OFFSETOF(struct public_port, stats),
+			  sizeof(port_stats));
+
+	p_stats->rx_64_byte_packets += port_stats.pmm.r64;
+	p_stats->rx_65_to_127_byte_packets += port_stats.pmm.r127;
+	p_stats->rx_128_to_255_byte_packets += port_stats.pmm.r255;
+	p_stats->rx_256_to_511_byte_packets += port_stats.pmm.r511;
+	p_stats->rx_512_to_1023_byte_packets += port_stats.pmm.r1023;
+	p_stats->rx_1024_to_1518_byte_packets += port_stats.pmm.r1518;
+	p_stats->rx_1519_to_1522_byte_packets += port_stats.pmm.r1522;
+	p_stats->rx_1519_to_2047_byte_packets += port_stats.pmm.r2047;
+	p_stats->rx_2048_to_4095_byte_packets += port_stats.pmm.r4095;
+	p_stats->rx_4096_to_9216_byte_packets += port_stats.pmm.r9216;
+	p_stats->rx_9217_to_16383_byte_packets += port_stats.pmm.r16383;
+	p_stats->rx_crc_errors += port_stats.pmm.rfcs;
+	p_stats->rx_mac_crtl_frames += port_stats.pmm.rxcf;
+	p_stats->rx_pause_frames += port_stats.pmm.rxpf;
+	p_stats->rx_pfc_frames += port_stats.pmm.rxpp;
+	p_stats->rx_align_errors += port_stats.pmm.raln;
+	p_stats->rx_carrier_errors += port_stats.pmm.rfcr;
+	p_stats->rx_oversize_packets += port_stats.pmm.rovr;
+	p_stats->rx_jabbers += port_stats.pmm.rjbr;
+	p_stats->rx_undersize_packets += port_stats.pmm.rund;
+	p_stats->rx_fragments += port_stats.pmm.rfrg;
+	p_stats->tx_64_byte_packets += port_stats.pmm.t64;
+	p_stats->tx_65_to_127_byte_packets += port_stats.pmm.t127;
+	p_stats->tx_128_to_255_byte_packets += port_stats.pmm.t255;
+	p_stats->tx_256_to_511_byte_packets += port_stats.pmm.t511;
+	p_stats->tx_512_to_1023_byte_packets += port_stats.pmm.t1023;
+	p_stats->tx_1024_to_1518_byte_packets += port_stats.pmm.t1518;
+	p_stats->tx_1519_to_2047_byte_packets += port_stats.pmm.t2047;
+	p_stats->tx_2048_to_4095_byte_packets += port_stats.pmm.t4095;
+	p_stats->tx_4096_to_9216_byte_packets += port_stats.pmm.t9216;
+	p_stats->tx_9217_to_16383_byte_packets += port_stats.pmm.t16383;
+	p_stats->tx_pause_frames += port_stats.pmm.txpf;
+	p_stats->tx_pfc_frames += port_stats.pmm.txpp;
+	p_stats->tx_lpi_entry_count += port_stats.pmm.tlpiec;
+	p_stats->tx_total_collisions += port_stats.pmm.tncl;
+	p_stats->rx_mac_bytes += port_stats.pmm.rbyte;
+	p_stats->rx_mac_uc_packets += port_stats.pmm.rxuca;
+	p_stats->rx_mac_mc_packets += port_stats.pmm.rxmca;
+	p_stats->rx_mac_bc_packets += port_stats.pmm.rxbca;
+	p_stats->rx_mac_frames_ok += port_stats.pmm.rxpok;
+	p_stats->tx_mac_bytes += port_stats.pmm.tbyte;
+	p_stats->tx_mac_uc_packets += port_stats.pmm.txuca;
+	p_stats->tx_mac_mc_packets += port_stats.pmm.txmca;
+	p_stats->tx_mac_bc_packets += port_stats.pmm.txbca;
+	p_stats->tx_mac_ctrl_frames += port_stats.pmm.txcf;
+	for (j = 0; j < 8; j++) {
+		p_stats->brb_truncates += port_stats.brb.brb_truncate[j];
+		p_stats->brb_discards += port_stats.brb.brb_discard[j];
+	}
+}
+
+void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     struct ecore_eth_stats *stats,
+			     u16 statistics_bin, bool b_get_port_stats)
+{
+	__ecore_get_vport_mstats(p_hwfn, p_ptt, stats, statistics_bin);
+	__ecore_get_vport_ustats(p_hwfn, p_ptt, stats, statistics_bin);
+	__ecore_get_vport_tstats(p_hwfn, p_ptt, stats, statistics_bin);
+	__ecore_get_vport_pstats(p_hwfn, p_ptt, stats, statistics_bin);
+
+#ifndef ASIC_ONLY
+	/* Avoid getting PORT stats for emulation. */
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		return;
+#endif
+
+	if (b_get_port_stats && p_hwfn->mcp_info)
+		__ecore_get_vport_port_stats(p_hwfn, p_ptt, stats);
+}
+
+static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
+				   struct ecore_eth_stats *stats)
+{
+	u8 fw_vport = 0;
+	int i;
+
+	OSAL_MEMSET(stats, 0, sizeof(*stats));
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+
+		if (!p_ptt) {
+			DP_ERR(p_hwfn, "Failed to acquire ptt\n");
+			continue;
+		}
+
+		/* The main vport's relative index is 0; convert it to
+		 * the absolute FW vport index.
+		 */
+		if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) {
+			DP_ERR(p_hwfn, "No vport available!\n");
+			goto out;
+		}
+
+		__ecore_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport,
+					true);
+
+out:
+		ecore_ptt_release(p_hwfn, p_ptt);
+	}
+}
+
+void ecore_get_vport_stats(struct ecore_dev *p_dev,
+			   struct ecore_eth_stats *stats)
+{
+	u32 i;
+
+	if (!p_dev) {
+		OSAL_MEMSET(stats, 0, sizeof(*stats));
+		return;
+	}
+
+	_ecore_get_vport_stats(p_dev, stats);
+
+	if (!p_dev->reset_stats)
+		return;
+
+	/* Reduce the statistics baseline */
+	for (i = 0; i < sizeof(struct ecore_eth_stats) / sizeof(u64); i++)
+		((u64 *)stats)[i] -= ((u64 *)p_dev->reset_stats)[i];
+}
+
+/* zeroes V-PORT specific portion of stats (Port stats remains untouched) */
+void ecore_reset_vport_stats(struct ecore_dev *p_dev)
+{
+	int i;
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+		struct eth_mstorm_per_queue_stat mstats;
+		struct eth_ustorm_per_queue_stat ustats;
+		struct eth_pstorm_per_queue_stat pstats;
+		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+		u32 addr = 0, len = 0;
+
+		if (!p_ptt) {
+			DP_ERR(p_hwfn, "Failed to acquire ptt\n");
+			continue;
+		}
+
+		OSAL_MEMSET(&mstats, 0, sizeof(mstats));
+		__ecore_get_vport_mstats_addrlen(p_hwfn, &addr, &len, 0);
+		ecore_memcpy_to(p_hwfn, p_ptt, addr, &mstats, len);
+
+		OSAL_MEMSET(&ustats, 0, sizeof(ustats));
+		__ecore_get_vport_ustats_addrlen(p_hwfn, &addr, &len, 0);
+		ecore_memcpy_to(p_hwfn, p_ptt, addr, &ustats, len);
+
+		OSAL_MEMSET(&pstats, 0, sizeof(pstats));
+		__ecore_get_vport_pstats_addrlen(p_hwfn, &addr, &len, 0);
+		ecore_memcpy_to(p_hwfn, p_ptt, addr, &pstats, len);
+
+		ecore_ptt_release(p_hwfn, p_ptt);
+	}
+
+	/* PORT statistics are not necessarily reset, so we need to
+	 * read and create a baseline for future statistics.
+	 */
+	if (!p_dev->reset_stats)
+		DP_INFO(p_dev, "Reset stats not allocated\n");
+	else
+		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
+}
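
For reference, the statistics scheme above never clears the hardware
counters: the per-storm reads assemble 64-bit values from hi/lo register
pairs (HILO_64_REGPAIR) and accumulate them, while a reset is emulated by
snapshotting the totals into p_dev->reset_stats and subtracting that
snapshot field-by-field on every read. A minimal standalone sketch of the
baseline idea (struct and names here are illustrative, not the real ecore
layout):

    #include <stddef.h>
    #include <stdint.h>

    struct demo_stats { uint64_t rx_pkts; uint64_t tx_pkts; };

    static struct demo_stats baseline;  /* snapshot taken at "reset" */

    static void demo_reset(const struct demo_stats *hw_now)
    {
        baseline = *hw_now;             /* new zero point */
    }

    static void demo_get(const struct demo_stats *hw_now,
                         struct demo_stats *out)
    {
        const uint64_t *n = (const uint64_t *)hw_now;
        const uint64_t *b = (const uint64_t *)&baseline;
        uint64_t *o = (uint64_t *)out;
        size_t i;

        /* same u64-array subtraction loop as ecore_get_vport_stats() */
        for (i = 0; i < sizeof(*out) / sizeof(uint64_t); i++)
            o[i] = n[i] - b[i];
    }
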
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
new file mode 100644
index 0000000..658af45
--- /dev/null
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -0,0 +1,101 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_L2_H__
+#define __ECORE_L2_H__
+
+#include "ecore.h"
+#include "ecore_hw.h"
+#include "ecore_spq.h"
+#include "ecore_l2_api.h"
+
+/**
+ * @brief ecore_sp_eth_tx_queue_update -
+ *
+ * This ramrod updates a TX queue. It is used for setting the active
+ * state of the queue.
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+
+enum _ecore_status_t
+ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
+			 struct ecore_sp_vport_start_params *p_params);
+
+/**
+ * @brief - Starts an Rx queue; Should be used where contexts are handled
+ * outside of the ramrod area [specifically iov scenarios]
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param cid
+ * @param rx_queue_id
+ * @param vport_id
+ * @param stats_id
+ * @param sb
+ * @param sb_index
+ * @param bd_max_bytes
+ * @param bd_chain_phys_addr
+ * @param cqe_pbl_addr
+ * @param cqe_pbl_size
+ * @param leading
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			      u16 opaque_fid,
+			      u32 cid,
+			      u16 rx_queue_id,
+			      u8 vport_id,
+			      u8 stats_id,
+			      u16 sb,
+			      u8 sb_index,
+			      u16 bd_max_bytes,
+			      dma_addr_t bd_chain_phys_addr,
+			      dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size);
+
+/**
+ * @brief - Starts a Tx queue; Should be used where contexts are handled
+ * outside of the ramrod area [specifically iov scenarios]
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param tx_queue_id
+ * @param cid
+ * @param vport_id
+ * @param stats_id
+ * @param sb
+ * @param sb_index
+ * @param pbl_addr
+ * @param pbl_size
+ * @param p_pq_params - parameters for choosing the PQ for this Tx queue
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			      u16 opaque_fid,
+			      u16 tx_queue_id,
+			      u32 cid,
+			      u8 vport_id,
+			      u8 stats_id,
+			      u16 sb,
+			      u8 sb_index,
+			      dma_addr_t pbl_addr,
+			      u16 pbl_size,
+			      union ecore_qm_pq_params *p_pq_params);
+
+u8 ecore_mcast_bin_from_mac(u8 *mac);
+
+#endif
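
The ecore_mcast_bin_from_mac() helper declared above drives the
"approximate" multicast scheme used by ecore_sp_eth_filter_mcast(): each
multicast MAC hashes to one bit (a bin) in a fixed vector carried by the
vport-update ramrod, and the device accepts any frame whose destination
hashes to a set bin, so false positives are possible but misses are not.
A hedged sketch of the idea, using a stand-in hash (the real bin function
is hardware-defined) and assuming the 8 x 32-bit register layout above:

    #include <stdint.h>

    #define DEMO_BINS 256   /* 8 regs x 32 bits in the ramrod */

    /* placeholder hash -- NOT the device's real bin function */
    static uint32_t demo_bin_from_mac(const uint8_t mac[6])
    {
        uint32_t h = 0;
        int i;

        for (i = 0; i < 6; i++)
            h = h * 31 + mac[i];
        return h % DEMO_BINS;
    }

    /* mirrors the OSAL_SET_BIT() step in ecore_sp_eth_filter_mcast() */
    static void demo_mcast_add(unsigned long *bins, const uint8_t mac[6])
    {
        uint32_t bit = demo_bin_from_mac(mac);
        uint32_t bpw = 8 * sizeof(unsigned long);   /* bits per word */

        bins[bit / bpw] |= 1UL << (bit % bpw);
    }
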
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
new file mode 100644
index 0000000..1e01b57
--- /dev/null
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -0,0 +1,401 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_L2_API_H__
+#define __ECORE_L2_API_H__
+
+#include "ecore_status.h"
+#include "ecore_sp_api.h"
+
+#ifndef __EXTRACT__LINUX__
+enum ecore_rss_caps {
+	ECORE_RSS_IPV4 = 0x1,
+	ECORE_RSS_IPV6 = 0x2,
+	ECORE_RSS_IPV4_TCP = 0x4,
+	ECORE_RSS_IPV6_TCP = 0x8,
+	ECORE_RSS_IPV4_UDP = 0x10,
+	ECORE_RSS_IPV6_UDP = 0x20,
+};
+
+/* Should be the same as ETH_RSS_IND_TABLE_ENTRIES_NUM */
+#define ECORE_RSS_IND_TABLE_SIZE 128
+#define ECORE_RSS_KEY_SIZE 10	/* size in 32b chunks */
+#endif
+
+struct ecore_rss_params {
+	u8 update_rss_config;
+	u8 rss_enable;
+	u8 rss_eng_id;
+	u8 update_rss_capabilities;
+	u8 update_rss_ind_table;
+	u8 update_rss_key;
+	u8 rss_caps;
+	u8 rss_table_size_log;	/* The table size is 2 ^ rss_table_size_log */
+	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+	u32 rss_key[ECORE_RSS_KEY_SIZE];
+};
+
+struct ecore_sge_tpa_params {
+	u8 max_buffers_per_cqe;
+
+	u8 update_tpa_en_flg;
+	u8 tpa_ipv4_en_flg;
+	u8 tpa_ipv6_en_flg;
+	u8 tpa_ipv4_tunn_en_flg;
+	u8 tpa_ipv6_tunn_en_flg;
+
+	u8 update_tpa_param_flg;
+	u8 tpa_pkt_split_flg;
+	u8 tpa_hdr_data_split_flg;
+	u8 tpa_gro_consistent_flg;
+	u8 tpa_max_aggs_num;
+	u16 tpa_max_size;
+	u16 tpa_min_size_to_start;
+	u16 tpa_min_size_to_cont;
+};
+
+enum ecore_filter_opcode {
+	ECORE_FILTER_ADD,
+	ECORE_FILTER_REMOVE,
+	ECORE_FILTER_MOVE,
+	ECORE_FILTER_REPLACE,	/* Delete all MACs and add new one instead */
+	ECORE_FILTER_FLUSH,	/* Removes all filters */
+};
+
+enum ecore_filter_ucast_type {
+	ECORE_FILTER_MAC,
+	ECORE_FILTER_VLAN,
+	ECORE_FILTER_MAC_VLAN,
+	ECORE_FILTER_INNER_MAC,
+	ECORE_FILTER_INNER_VLAN,
+	ECORE_FILTER_INNER_PAIR,
+	ECORE_FILTER_INNER_MAC_VNI_PAIR,
+	ECORE_FILTER_MAC_VNI_PAIR,
+	ECORE_FILTER_VNI,
+};
+
+struct ecore_filter_ucast {
+	enum ecore_filter_opcode opcode;
+	enum ecore_filter_ucast_type type;
+	u8 is_rx_filter;
+	u8 is_tx_filter;
+	u8 vport_to_add_to;
+	u8 vport_to_remove_from;
+	unsigned char mac[ETH_ALEN];
+	u8 assert_on_error;
+	u16 vlan;
+	u32 vni;
+};
+
+struct ecore_filter_mcast {
+	/* MOVE is not supported for multicast */
+	enum ecore_filter_opcode opcode;
+	u8 vport_to_add_to;
+	u8 vport_to_remove_from;
+	u8 num_mc_addrs;
+#define ECORE_MAX_MC_ADDRS	64
+	unsigned char mac[ECORE_MAX_MC_ADDRS][ETH_ALEN];
+};
+
+struct ecore_filter_accept_flags {
+	u8 update_rx_mode_config;
+	u8 update_tx_mode_config;
+	u8 rx_accept_filter;
+	u8 tx_accept_filter;
+#define	ECORE_ACCEPT_NONE		0x01
+#define ECORE_ACCEPT_UCAST_MATCHED	0x02
+#define ECORE_ACCEPT_UCAST_UNMATCHED	0x04
+#define ECORE_ACCEPT_MCAST_MATCHED	0x08
+#define ECORE_ACCEPT_MCAST_UNMATCHED	0x10
+#define ECORE_ACCEPT_BCAST		0x20
+};
+
+/* Add / remove / move / remove-all unicast MAC-VLAN filters.
+ * FW will assert in the following cases, so the driver must avoid them:
+ * 1. Adding a filter to a full table.
+ * 2. Adding a filter which already exists on that vport.
+ * 3. Removing a filter which doesn't exist.
+ */
+
+enum _ecore_status_t
+ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
+		       struct ecore_filter_ucast *p_filter_cmd,
+		       enum spq_mode comp_mode,
+		       struct ecore_spq_comp_cb *p_comp_data);
+
+/* Add / remove / move multicast MAC filters. */
+enum _ecore_status_t
+ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
+		       struct ecore_filter_mcast *p_filter_cmd,
+		       enum spq_mode comp_mode,
+		       struct ecore_spq_comp_cb *p_comp_data);
+
+/* Set "accept" filters */
+enum _ecore_status_t
+ecore_filter_accept_cmd(struct ecore_dev *p_dev,
+			u8 vport,
+			struct ecore_filter_accept_flags accept_flags,
+			u8 update_accept_any_vlan,
+			u8 accept_any_vlan,
+			enum spq_mode comp_mode,
+			struct ecore_spq_comp_cb *p_comp_data);
+
+/**
+ * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ *
+ * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
+ * the VPort ID is not currently initialized.
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param rx_queue_id		RX Queue ID: Zero based, per VPort, allocated
+ *				by assignment (=rssId)
+ * @param vport_id		VPort ID
+ * @param stats_id		VPort ID which the queue stats
+ *				will be added to
+ * @param sb			Status Block of the Function Event Ring
+ * @param sb_index		Index into the status block of the
+ *				Function Event Ring
+ * @param bd_max_bytes		Maximum bytes that can be placed on a BD
+ * @param bd_chain_phys_addr	Physical address of BDs for receive.
+ * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
+ * @param cqe_pbl_size		Size of the CQE PBL Table
+ * @param pp_prod		Pointer to place producer's address
+ *				for the Rx Q (may be NULL).
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+						 u16 opaque_fid,
+						 u8 rx_queue_id,
+						 u8 vport_id,
+						 u8 stats_id,
+						 u16 sb,
+						 u8 sb_index,
+						 u16 bd_max_bytes,
+						 dma_addr_t bd_chain_phys_addr,
+						 dma_addr_t cqe_pbl_addr,
+						 u16 cqe_pbl_size,
+						 void OSAL_IOMEM **pp_prod);
+
+/**
+ * @brief ecore_sp_eth_rx_queue_stop -
+ *
+ * This ramrod closes an RX queue. It sends RX queue stop ramrod
+ * + CFC delete ramrod
+ *
+ * @param p_hwfn
+ * @param rx_queue_id		RX Queue ID
+ * @param eq_completion_only	If True, completion will be on the EQe.
+ *				If False, completion will be on the EQe
+ *				only if the p_hwfn opaque differs from
+ *				the RXQ's opaque, otherwise on the CQe.
+ * @param cqe_completion	If True, completion will be received
+ *				on the CQe.
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   u16 rx_queue_id,
+			   bool eq_completion_only, bool cqe_completion);
+
+/**
+ * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ *
+ * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
+ * the VPort is not currently initialized.
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param tx_queue_id		TX Queue ID
+ * @param vport_id		VPort ID
+ * @param stats_id		VPort ID which the queue stats
+ *				will be added to
+ * @param sb			Status Block of the Function Event Ring
+ * @param sb_index		Index into the status block of the Function
+ *				Event Ring
+ * @param pbl_addr		Address of the PBL array
+ * @param pbl_size		Number of entries in the PBL
+ * @param pp_doorbell		Pointer to place doorbell pointer (may be NULL).
+ *				This address should be used with the
+ *				DIRECT_REG_WR macro.
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+						 u16 opaque_fid,
+						 u16 tx_queue_id,
+						 u8 vport_id,
+						 u8 stats_id,
+						 u16 sb,
+						 u8 sb_index,
+						 dma_addr_t pbl_addr,
+						 u16 pbl_size,
+						 void OSAL_IOMEM **
+						 pp_doorbell);
+
+/**
+ * @brief ecore_sp_eth_tx_queue_stop -
+ *
+ * This ramrod closes a TX queue. It sends TX queue stop ramrod
+ * + CFC delete ramrod
+ *
+ * @param p_hwfn
+ * @param tx_queue_id		TX Queue ID
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+						u16 tx_queue_id);
+
+enum ecore_tpa_mode {
+	ECORE_TPA_MODE_NONE,
+	ECORE_TPA_MODE_RSC,
+	ECORE_TPA_MODE_GRO,
+	ECORE_TPA_MODE_MAX
+};
+
+struct ecore_sp_vport_start_params {
+	enum ecore_tpa_mode tpa_mode;
+	bool remove_inner_vlan;	/* Inner VLAN removal is enabled */
+	bool tx_switching;	/* Vport supports tx-switching */
+	bool handle_ptp_pkts;	/* Handle PTP packets */
+	bool only_untagged;	/* Untagged pkt control */
+	bool drop_ttl0;		/* Drop packets with TTL = 0 */
+	u8 max_buffers_per_cqe;
+	u32 concrete_fid;
+	u16 opaque_fid;
+	u8 vport_id;		/* VPORT ID */
+	u16 mtu;		/* VPORT MTU */
+	bool zero_placement_offset;
+};
+
+/**
+ * @brief ecore_sp_vport_start -
+ *
+ * This ramrod initializes a VPort. An Assert is generated if the Function ID
+ * of the VPort is not enabled.
+ *
+ * @param p_hwfn
+ * @param p_params		VPORT start params
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
+		     struct ecore_sp_vport_start_params *p_params);
+
+struct ecore_sp_vport_update_params {
+	u16 opaque_fid;
+	u8 vport_id;
+	u8 update_vport_active_rx_flg;
+	u8 vport_active_rx_flg;
+	u8 update_vport_active_tx_flg;
+	u8 vport_active_tx_flg;
+	u8 update_inner_vlan_removal_flg;
+	u8 inner_vlan_removal_flg;
+	u8 silent_vlan_removal_flg;
+	u8 update_default_vlan_enable_flg;
+	u8 default_vlan_enable_flg;
+	u8 update_default_vlan_flg;
+	u16 default_vlan;
+	u8 update_tx_switching_flg;
+	u8 tx_switching_flg;
+	u8 update_approx_mcast_flg;
+	u8 update_anti_spoofing_en_flg;
+	u8 anti_spoofing_en;
+	u8 update_accept_any_vlan_flg;
+	u8 accept_any_vlan;
+	unsigned long bins[8];
+	struct ecore_rss_params *rss_params;
+	struct ecore_filter_accept_flags accept_flags;
+	struct ecore_sge_tpa_params *sge_tpa_params;
+};
+
+/**
+ * @brief ecore_sp_vport_update -
+ *
+ * This ramrod updates the parameters of the VPort. Every field can be updated
+ * independently, according to flags.
+ *
+ * This ramrod is also used to set the VPort state to active after creation.
+ * An Assert is generated if the VPort does not contain an RX queue.
+ *
+ * @param p_hwfn
+ * @param p_params
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
+		      struct ecore_sp_vport_update_params *p_params,
+		      enum spq_mode comp_mode,
+		      struct ecore_spq_comp_cb *p_comp_data);
+/**
+ * @brief ecore_sp_vport_stop -
+ *
+ * This ramrod closes a VPort after all its RX and TX queues are terminated.
+ * An Assert is generated if any queues are left open.
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param vport_id VPort ID
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
+					 u16 opaque_fid, u8 vport_id);
+
+enum _ecore_status_t
+ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
+			  u16 opaque_fid,
+			  struct ecore_filter_ucast *p_filter_cmd,
+			  enum spq_mode comp_mode,
+			  struct ecore_spq_comp_cb *p_comp_data);
+
+/**
+ * @brief ecore_sp_rx_eth_queues_update -
+ *
+ * This ramrod updates an RX queue. It is used for setting the active state
+ * of the queue and updating the TPA and SGE parameters.
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ * @param rx_queue_id		RX Queue ID
+ * @param num_rxqs		Allows updating multiple Rx
+ *				queues, from rx_queue_id to
+ *				(rx_queue_id + num_rxqs)
+ * @param complete_cqe_flg	Post completion to the CQE Ring if set
+ * @param complete_event_flg	Post completion to the Event Ring if set
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t
+ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
+			      u16 rx_queue_id,
+			      u8 num_rxqs,
+			      u8 complete_cqe_flg,
+			      u8 complete_event_flg,
+			      enum spq_mode comp_mode,
+			      struct ecore_spq_comp_cb *p_comp_data);
+
+void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     struct ecore_eth_stats *stats,
+			     u16 statistics_bin, bool b_get_port_stats);
+
+void ecore_get_vport_stats(struct ecore_dev *p_dev,
+			   struct ecore_eth_stats *stats);
+
+void ecore_reset_vport_stats(struct ecore_dev *p_dev);
+
+#endif
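
One possible usage of the accept-flags API above: enabling full Rx
promiscuity on vport 0 composes the ECORE_ACCEPT_* bits roughly as in
this sketch (assumes an initialized p_dev; error handling elided, and
the exact flag mix is a policy choice, not mandated by the API):

    static enum _ecore_status_t demo_set_rx_promisc(struct ecore_dev *p_dev)
    {
        struct ecore_filter_accept_flags flags;

        OSAL_MEMSET(&flags, 0, sizeof(flags));
        flags.update_rx_mode_config = 1;
        flags.rx_accept_filter = ECORE_ACCEPT_UCAST_MATCHED |
                                 ECORE_ACCEPT_UCAST_UNMATCHED |
                                 ECORE_ACCEPT_MCAST_MATCHED |
                                 ECORE_ACCEPT_MCAST_UNMATCHED |
                                 ECORE_ACCEPT_BCAST;

        /* vport 0, no accept-any-vlan change, blocking completion */
        return ecore_filter_accept_cmd(p_dev, 0, flags, 0, 0,
                                       ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);
    }
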
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
new file mode 100644
index 0000000..0fc043e
--- /dev/null
+++ b/drivers/net/qede/qede_eth_if.c
@@ -0,0 +1,456 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "qede_ethdev.h"
+
+static int
+qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
+{
+	int rc, i;
+
+	for_each_hwfn(edev, i) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+		u8 tx_switching = 0;
+		struct ecore_sp_vport_start_params start = { 0 };
+
+		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
+		    ECORE_TPA_MODE_NONE;
+		start.remove_inner_vlan = p_params->remove_inner_vlan;
+		start.tx_switching = tx_switching;
+		start.only_untagged = false;	/* accept tagged frames too */
+		start.drop_ttl0 = p_params->drop_ttl0;
+		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
+		start.opaque_fid = p_hwfn->hw_info.opaque_fid;
+		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
+		start.vport_id = p_params->vport_id;
+		start.max_buffers_per_cqe = 16;	/* TODO: is this right? */
+		start.mtu = p_params->mtu;
+
+		rc = ecore_sp_vport_start(p_hwfn, &start);
+		if (rc) {
+			DP_ERR(edev, "Failed to start VPORT\n");
+			return rc;
+		}
+
+		ecore_hw_start_fastpath(p_hwfn);
+
+		DP_VERBOSE(edev, ECORE_MSG_SPQ,
+			   "Started V-PORT %d with MTU %d\n",
+			   p_params->vport_id, p_params->mtu);
+	}
+
+	ecore_reset_vport_stats(edev);
+
+	return 0;
+}
+
+static int qed_stop_vport(struct ecore_dev *edev, uint8_t vport_id)
+{
+	int rc, i;
+
+	for_each_hwfn(edev, i) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+		rc = ecore_sp_vport_stop(p_hwfn,
+					 p_hwfn->hw_info.opaque_fid, vport_id);
+
+		if (rc) {
+			DP_ERR(edev, "Failed to stop VPORT\n");
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
+static int
+qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
+{
+	struct ecore_sp_vport_update_params sp_params;
+	struct ecore_rss_params sp_rss_params;
+	int rc, i;
+
+	memset(&sp_params, 0, sizeof(sp_params));
+	memset(&sp_rss_params, 0, sizeof(sp_rss_params));
+
+	/* Translate protocol params into sp params */
+	sp_params.vport_id = params->vport_id;
+	sp_params.update_vport_active_rx_flg = params->update_vport_active_flg;
+	sp_params.update_vport_active_tx_flg = params->update_vport_active_flg;
+	sp_params.vport_active_rx_flg = params->vport_active_flg;
+	sp_params.vport_active_tx_flg = params->vport_active_flg;
+	sp_params.update_inner_vlan_removal_flg =
+	    params->update_inner_vlan_removal_flg;
+	sp_params.inner_vlan_removal_flg = params->inner_vlan_removal_flg;
+	sp_params.update_tx_switching_flg = params->update_tx_switching_flg;
+	sp_params.tx_switching_flg = params->tx_switching_flg;
+	sp_params.accept_any_vlan = params->accept_any_vlan;
+	sp_params.update_accept_any_vlan_flg =
+	    params->update_accept_any_vlan_flg;
+
+	/* RSS is a bit tricky, since the upper layer isn't aware of hwfns.
+	 * For CMT devices the RSS values must be re-fixed per engine.
+	 */
+
+	if (edev->num_hwfns > 1 && params->update_rss_flg) {
+		struct qed_update_vport_rss_params *rss = &params->rss_params;
+		int k, max = 0;
+
+		/* Find largest entry, since it's possible RSS needs to
+		 * be disabled [in case only 1 queue per-hwfn]
+		 */
+		for (k = 0; k < ECORE_RSS_IND_TABLE_SIZE; k++)
+			max = (max > rss->rss_ind_table[k]) ?
+			    max : rss->rss_ind_table[k];
+
+		/* Either fix RSS values or disable RSS */
+		if (edev->num_hwfns < max + 1) {
+			int divisor = (max + edev->num_hwfns - 1) /
+			    edev->num_hwfns;
+
+			DP_VERBOSE(edev, ECORE_MSG_SPQ,
+				   "CMT - fixing RSS values (modulo %02x)\n",
+				   divisor);
+
+			for (k = 0; k < ECORE_RSS_IND_TABLE_SIZE; k++)
+				rss->rss_ind_table[k] =
+				    rss->rss_ind_table[k] % divisor;
+		} else {
+			DP_VERBOSE(edev, ECORE_MSG_SPQ,
+				   "CMT - 1 queue per-hwfn; Disabling RSS\n");
+			params->update_rss_flg = 0;
+		}
+	}
+
+	/* Now, update the RSS configuration for actual configuration */
+	if (params->update_rss_flg) {
+		sp_rss_params.update_rss_config = 1;
+		sp_rss_params.rss_enable = 1;
+		sp_rss_params.update_rss_capabilities = 1;
+		sp_rss_params.update_rss_ind_table = 1;
+		sp_rss_params.update_rss_key = 1;
+		sp_rss_params.rss_caps = ECORE_RSS_IPV4 | ECORE_RSS_IPV6 |
+		    ECORE_RSS_IPV4_TCP | ECORE_RSS_IPV6_TCP;
+		sp_rss_params.rss_table_size_log = 7;	/* 2^7 = 128 */
+		rte_memcpy(sp_rss_params.rss_ind_table,
+			   params->rss_params.rss_ind_table,
+			   ECORE_RSS_IND_TABLE_SIZE * sizeof(uint16_t));
+		rte_memcpy(sp_rss_params.rss_key, params->rss_params.rss_key,
+			   ECORE_RSS_KEY_SIZE * sizeof(uint32_t));
+	}
+	sp_params.rss_params = &sp_rss_params;
+
+	for_each_hwfn(edev, i) {
+		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
+
+		sp_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
+		rc = ecore_sp_vport_update(p_hwfn, &sp_params,
+					   ECORE_SPQ_MODE_EBLOCK, NULL);
+		if (rc) {
+			DP_ERR(edev, "Failed to update VPORT\n");
+			return rc;
+		}
+
+		DP_VERBOSE(edev, ECORE_MSG_SPQ,
+			   "Updated V-PORT %d: active_flag %d [update %d]\n",
+			   params->vport_id, params->vport_active_flg,
+			   params->update_vport_active_flg);
+	}
+
+	return 0;
+}
+
+static int
+qed_start_rxq(struct ecore_dev *edev,
+	      uint8_t rss_id, uint8_t rx_queue_id,
+	      uint8_t vport_id, uint16_t sb,
+	      uint8_t sb_index, uint16_t bd_max_bytes,
+	      dma_addr_t bd_chain_phys_addr,
+	      dma_addr_t cqe_pbl_addr,
+	      uint16_t cqe_pbl_size, void OSAL_IOMEM**pp_prod)
+{
+	struct ecore_hwfn *p_hwfn;
+	int rc, hwfn_index;
+
+	hwfn_index = rss_id % edev->num_hwfns;
+	p_hwfn = &edev->hwfns[hwfn_index];
+
+	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
+					 p_hwfn->hw_info.opaque_fid,
+					 rx_queue_id / edev->num_hwfns,
+					 vport_id,
+					 vport_id,
+					 sb,
+					 sb_index,
+					 bd_max_bytes,
+					 bd_chain_phys_addr,
+					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+
+	if (rc) {
+		DP_ERR(edev, "Failed to start RXQ#%d\n", rx_queue_id);
+		return rc;
+	}
+
+	DP_VERBOSE(edev, ECORE_MSG_SPQ,
+		   "Started RX-Q %d [rss %d] on V-PORT %d and SB %d\n",
+		   rx_queue_id, rss_id, vport_id, sb);
+
+	return 0;
+}
+
+static int
+qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+{
+	int rc, hwfn_index;
+	struct ecore_hwfn *p_hwfn;
+
+	hwfn_index = params->rss_id % edev->num_hwfns;
+	p_hwfn = &edev->hwfns[hwfn_index];
+
+	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
+					params->rx_queue_id / edev->num_hwfns,
+					params->eq_completion_only, false);
+	if (rc) {
+		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+qed_start_txq(struct ecore_dev *edev,
+	      uint8_t rss_id, uint16_t tx_queue_id,
+	      uint8_t vport_id, uint16_t sb,
+	      uint8_t sb_index,
+	      dma_addr_t pbl_addr,
+	      uint16_t pbl_size, void OSAL_IOMEM**pp_doorbell)
+{
+	struct ecore_hwfn *p_hwfn;
+	int rc, hwfn_index;
+
+	hwfn_index = rss_id % edev->num_hwfns;
+	p_hwfn = &edev->hwfns[hwfn_index];
+
+	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
+					 p_hwfn->hw_info.opaque_fid,
+					 tx_queue_id / edev->num_hwfns,
+					 vport_id,
+					 vport_id,
+					 sb,
+					 sb_index,
+					 pbl_addr, pbl_size, pp_doorbell);
+
+	if (rc) {
+		DP_ERR(edev, "Failed to start TXQ#%d\n", tx_queue_id);
+		return rc;
+	}
+
+	DP_VERBOSE(edev, ECORE_MSG_SPQ,
+		   "Started TX-Q %d [rss %d] on V-PORT %d and SB %d\n",
+		   tx_queue_id, rss_id, vport_id, sb);
+
+	return 0;
+}
+
+static int
+qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+{
+	struct ecore_hwfn *p_hwfn;
+	int rc, hwfn_index;
+
+	hwfn_index = params->rss_id % edev->num_hwfns;
+	p_hwfn = &edev->hwfns[hwfn_index];
+
+	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
+					params->tx_queue_id / edev->num_hwfns);
+	if (rc) {
+		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+qed_fp_cqe_completion(struct ecore_dev *edev,
+		      uint8_t rss_id, struct eth_slow_path_rx_cqe *cqe)
+{
+
+	return ecore_eth_cqe_completion(&edev->hwfns[rss_id % edev->num_hwfns],
+					cqe);
+}
+
+static int qed_fastpath_stop(struct ecore_dev *edev)
+{
+	ecore_hw_stop_fastpath(edev);
+
+	return 0;
+}
+
+static void
+qed_get_vport_stats(struct ecore_dev *edev, struct ecore_eth_stats *stats)
+{
+	ecore_get_vport_stats(edev, stats);
+}
+
+static int
+qed_configure_filter_ucast(struct ecore_dev *edev,
+			   struct qed_filter_ucast_params *params)
+{
+	struct ecore_filter_ucast ucast;
+
+	if (!params->vlan_valid && !params->mac_valid) {
+		DP_NOTICE(edev, true,
+			  "Tried configuring a unicast filter,"
+			  "but both MAC and VLAN are not set\n");
+		return -EINVAL;
+	}
+
+	memset(&ucast, 0, sizeof(ucast));
+	switch (params->type) {
+	case QED_FILTER_XCAST_TYPE_ADD:
+		ucast.opcode = ECORE_FILTER_ADD;
+		break;
+	case QED_FILTER_XCAST_TYPE_DEL:
+		ucast.opcode = ECORE_FILTER_REMOVE;
+		break;
+	case QED_FILTER_XCAST_TYPE_REPLACE:
+		ucast.opcode = ECORE_FILTER_REPLACE;
+		break;
+	default:
+		DP_NOTICE(edev, true, "Unknown unicast filter type %d\n",
+			  params->type);
+		return -EINVAL;
+	}
+
+	if (params->vlan_valid && params->mac_valid) {
+		ucast.type = ECORE_FILTER_MAC_VLAN;
+		ether_addr_copy((struct ether_addr *)&params->mac,
+				(struct ether_addr *)&ucast.mac);
+		ucast.vlan = params->vlan;
+	} else if (params->mac_valid) {
+		ucast.type = ECORE_FILTER_MAC;
+		ether_addr_copy((struct ether_addr *)&params->mac,
+				(struct ether_addr *)&ucast.mac);
+	} else {
+		ucast.type = ECORE_FILTER_VLAN;
+		ucast.vlan = params->vlan;
+	}
+
+	ucast.is_rx_filter = true;
+	ucast.is_tx_filter = true;
+
+	return ecore_filter_ucast_cmd(edev, &ucast, ECORE_SPQ_MODE_CB, NULL);
+}
+
+static int
+qed_configure_filter_mcast(struct ecore_dev *edev,
+			   struct qed_filter_mcast_params *params)
+{
+	struct ecore_filter_mcast mcast;
+	int i;
+
+	memset(&mcast, 0, sizeof(mcast));
+	switch (params->type) {
+	case QED_FILTER_XCAST_TYPE_ADD:
+		mcast.opcode = ECORE_FILTER_ADD;
+		break;
+	case QED_FILTER_XCAST_TYPE_DEL:
+		mcast.opcode = ECORE_FILTER_REMOVE;
+		break;
+	default:
+		DP_NOTICE(edev, true, "Unknown multicast filter type %d\n",
+			  params->type);
+		return -EINVAL;
+	}
+
+	mcast.num_mc_addrs = params->num;
+	for (i = 0; i < mcast.num_mc_addrs; i++)
+		ether_addr_copy((struct ether_addr *)&params->mac[i],
+				(struct ether_addr *)&mcast.mac[i]);
+
+	return ecore_filter_mcast_cmd(edev, &mcast, ECORE_SPQ_MODE_CB, NULL);
+}
+
+int
+qed_configure_filter_rx_mode(struct ecore_dev *edev,
+			     enum qed_filter_rx_mode_type type)
+{
+	struct ecore_filter_accept_flags accept_flags;
+
+	memset(&accept_flags, 0, sizeof(accept_flags));
+
+	accept_flags.update_rx_mode_config = 1;
+	accept_flags.update_tx_mode_config = 1;
+	accept_flags.rx_accept_filter = ECORE_ACCEPT_UCAST_MATCHED |
+	    ECORE_ACCEPT_MCAST_MATCHED | ECORE_ACCEPT_BCAST;
+	accept_flags.tx_accept_filter = ECORE_ACCEPT_UCAST_MATCHED |
+	    ECORE_ACCEPT_MCAST_MATCHED | ECORE_ACCEPT_BCAST;
+
+	if (type == QED_FILTER_RX_MODE_TYPE_PROMISC)
+		accept_flags.rx_accept_filter |= ECORE_ACCEPT_UCAST_UNMATCHED;
+	else if (type == QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC)
+		accept_flags.rx_accept_filter |= ECORE_ACCEPT_MCAST_UNMATCHED;
+	else if (type == (QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC |
+			  QED_FILTER_RX_MODE_TYPE_PROMISC))
+		accept_flags.rx_accept_filter |= ECORE_ACCEPT_UCAST_UNMATCHED |
+		    ECORE_ACCEPT_MCAST_UNMATCHED;
+
+	return ecore_filter_accept_cmd(edev, 0, accept_flags, false, false,
+				       ECORE_SPQ_MODE_CB, NULL);
+}
+
+static int
+qed_configure_filter(struct ecore_dev *edev, struct qed_filter_params *params)
+{
+	switch (params->type) {
+	case QED_FILTER_TYPE_UCAST:
+		return qed_configure_filter_ucast(edev, &params->filter.ucast);
+	case QED_FILTER_TYPE_MCAST:
+		return qed_configure_filter_mcast(edev, &params->filter.mcast);
+	case QED_FILTER_TYPE_RX_MODE:
+		return qed_configure_filter_rx_mode(edev,
+					params->filter.accept_flags);
+	default:
+		DP_NOTICE(edev, true, "Unknown filter type %d\n",
+			  (int)params->type);
+		return -EINVAL;
+	}
+}
+
+static const struct qed_eth_ops qed_eth_ops_pass = {
+	INIT_STRUCT_FIELD(common, &qed_common_ops_pass),
+	INIT_STRUCT_FIELD(fill_dev_info, &qed_fill_eth_dev_info),
+	INIT_STRUCT_FIELD(vport_start, &qed_start_vport),
+	INIT_STRUCT_FIELD(vport_stop, &qed_stop_vport),
+	INIT_STRUCT_FIELD(vport_update, &qed_update_vport),
+	INIT_STRUCT_FIELD(q_rx_start, &qed_start_rxq),
+	INIT_STRUCT_FIELD(q_tx_start, &qed_start_txq),
+	INIT_STRUCT_FIELD(q_rx_stop, &qed_stop_rxq),
+	INIT_STRUCT_FIELD(q_tx_stop, &qed_stop_txq),
+	INIT_STRUCT_FIELD(eth_cqe_completion, &qed_fp_cqe_completion),
+	INIT_STRUCT_FIELD(fastpath_stop, &qed_fastpath_stop),
+	INIT_STRUCT_FIELD(get_vport_stats, &qed_get_vport_stats),
+	INIT_STRUCT_FIELD(filter_config, &qed_configure_filter),
+};
+
+uint32_t qed_get_protocol_version(enum qed_protocol protocol)
+{
+	switch (protocol) {
+	case QED_PROTOCOL_ETH:
+		return QED_ETH_INTERFACE_VERSION;
+	default:
+		return 0;
+	}
+}
+
+const struct qed_eth_ops *qed_get_eth_ops(void)
+{
+	return &qed_eth_ops_pass;
+}
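
The CMT re-fix in qed_update_vport() above is easiest to follow with
concrete numbers: on a 2-hwfn device whose indirection table references
queues 0..7, divisor = (7 + 2 - 1) / 2 = 4, so every entry is reduced
modulo 4 and each engine then indexes only its four local queues. A
self-contained sketch with illustrative values:

    #include <stdio.h>

    int main(void)
    {
        int table[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        int num_hwfns = 2, max = 7, k;
        /* same rounding as qed_update_vport(): ceil((max + 1) / hwfns) */
        int divisor = (max + num_hwfns - 1) / num_hwfns;    /* = 4 */

        for (k = 0; k < 8; k++)
            printf("entry %d: %d -> %d\n", k, table[k], table[k] % divisor);
        return 0;
    }
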
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 47b169d..bc6f86b 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -168,7 +168,7 @@ extern const struct qed_common_ops qed_common_ops_pass;
 extern int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 
-void qed_put_eth_ops(void);
+const struct qed_eth_ops *qed_get_eth_ops(void);
 
 int qed_configure_filter_rx_mode(struct ecore_dev *edev,
 				 enum qed_filter_rx_mode_type type);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d5f7019..530b2c1 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -587,6 +587,14 @@ static int qede_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	return qede_dev_set_link_state(eth_dev, false);
 }
 
+static void qede_reset_stats(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+
+	ecore_reset_vport_stats(edev);
+}
+
 static void qede_allmulticast_enable(struct rte_eth_dev *eth_dev)
 {
 	enum qed_filter_rx_mode_type type =
@@ -686,6 +694,7 @@ static struct eth_dev_ops qede_eth_dev_ops = {
 	.dev_stop = qede_dev_stop,
 	.dev_close = qede_dev_close,
 	.stats_get = qede_get_stats,
+	.stats_reset = qede_reset_stats,
 	.mac_addr_add = qede_mac_addr_add,
 	.mac_addr_remove = qede_mac_addr_remove,
 	.vlan_offload_set = qede_vlan_offload_set,
@@ -746,9 +755,11 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-	if (qed_ver != QEDE_ETH_INTERFACE_VERSION) {
-		DP_ERR(edev, "Version mismatch [%08x != %08x]\n",
-		       qed_ver, QEDE_ETH_INTERFACE_VERSION);
+	qed_ver = qed_get_protocol_version(QED_PROTOCOL_ETH);
+
+	qed_ops = qed_get_eth_ops();
+	if (!qed_ops) {
+		DP_ERR(edev, "Failed to get qed_eth_ops_pass\n");
 		return -EINVAL;
 	}
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 3d90b23..5550349 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -18,6 +18,7 @@
 #include "base/bcm_osal.h"
 #include "base/ecore.h"
 #include "base/ecore_dev_api.h"
+#include "base/ecore_l2_api.h"
 #include "base/ecore_sp_api.h"
 #include "base/ecore_mcp_api.h"
 #include "base/ecore_hsi_common.h"
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 935eed8..1b05ff8 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -152,4 +152,13 @@ struct qed_common_ops {
 			      uint32_t dp_module, uint8_t dp_level);
 };
 
+/**
+ * @brief qed_get_protocol_version
+ *
+ * @param protocol
+ *
+ * @return version supported by qed for given protocol driver
+ */
+uint32_t qed_get_protocol_version(enum qed_protocol protocol);
+
 #endif /* _QEDE_IF_H */
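
The hunk above pairs qed_get_protocol_version() with the qed_get_eth_ops()
accessor from qede_eth_if.h, so probe code fetches both before touching
the device. A minimal sketch of that handshake (hypothetical caller,
error handling trimmed):

    static int demo_probe(void)
    {
        const struct qed_eth_ops *ops = qed_get_eth_ops();
        uint32_t ver = qed_get_protocol_version(QED_PROTOCOL_ETH);

        if (!ops || !ver)
            return -1;  /* eth interface unavailable */

        /* ops->vport_start, ops->q_rx_start, etc. are now usable */
        return 0;
    }
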
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 7a1b986..1f25908 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -239,6 +239,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		return rc;
 	}
 
+	ecore_reset_vport_stats(edev);
+
 	return 0;
 
 	ecore_hw_stop(edev);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index d0450f7..f76f42c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -526,6 +526,196 @@ qede_rxfh_indir_default(uint32_t index, uint32_t n_rx_rings)
 	return index % n_rx_rings;
 }
 
+static void qede_prandom_bytes(uint32_t *buff, size_t bytes)
+{
+	unsigned int i;
+
+	srand((unsigned int)time(NULL));
+
+	/* fill the whole buffer, honoring the requested length */
+	for (i = 0; i < bytes / sizeof(uint32_t); i++)
+		buff[i] = rand();
+}
+
+static int
+qede_config_rss(struct rte_eth_dev *eth_dev,
+		struct qed_update_vport_rss_params *rss_params)
+{
+	enum rte_eth_rx_mq_mode mode = eth_dev->data->dev_conf.rxmode.mq_mode;
+	struct rte_eth_rss_conf rss_conf =
+	    eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	unsigned int i;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	/* Check if RSS conditions are met */
+
+	if (!(mode & ETH_MQ_RX_RSS)) {
+		DP_INFO(edev, "RSS flag is not set\n");
+		return -EINVAL;
+	} else {
+		DP_INFO(edev, "RSS flag is set\n");
+	}
+
+	if (rss_conf.rss_hf == 0) {
+		DP_NOTICE(edev, false, "No RSS hash function to apply\n");
+		return -EINVAL;
+	}
+
+	if (QEDE_RSS_CNT(qdev) == 1) {
+		DP_NOTICE(edev, false, "RSS is not enabled with one queue\n");
+		return -EINVAL;
+	}
+
+	memset(rss_params, 0, sizeof(*rss_params));
+
+	for (i = 0; i < 128; i++)
+		rss_params->rss_ind_table[i] = qede_rxfh_indir_default(i,
+							QEDE_RSS_CNT(qdev));
+
+	/* key and protocols */
+	if (rss_conf.rss_key == NULL) {
+		qede_prandom_bytes(rss_params->rss_key,
+				   sizeof(rss_params->rss_key));
+	} else {
+		/* Fill given by user */
+		DP_NOTICE(edev, false,
+			  "User provided rss key is not supported\n");
+		return -EINVAL;
+	}
+
+	DP_INFO(edev, "RSS check passes\n");
+
+	return 0;
+}
+
+static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
+{
+	struct qede_dev *qdev = eth_dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qed_update_vport_rss_params *rss_params = &qdev->rss_params;
+	struct qed_dev_info *qed_info = &qdev->dev_info.common;
+	struct qed_update_vport_params vport_update_params;
+	struct qed_start_vport_params start = { 0 };
+	int vlan_removal_en = 1;
+	int rc, tc, i;
+
+	if (!qdev->num_rss) {
+		DP_ERR(edev,
+		       "Cannot update V-VPORT as active as"
+		       "there are no Rx queues\n");
+		return -EINVAL;
+	}
+
+	start.remove_inner_vlan = vlan_removal_en;
+	start.gro_enable = !qdev->gro_disable;
+	start.mtu = qdev->mtu;
+	start.vport_id = 0;
+	start.drop_ttl0 = true;
+	start.clear_stats = clear_stats;
+
+	rc = qdev->ops->vport_start(edev, &start);
+	if (rc) {
+		DP_ERR(edev, "Start V-PORT failed %d\n", rc);
+		return rc;
+	}
+
+	DP_INFO(edev,
+		"Start vport ramrod passed, vport_id = %d,"
+		" MTU = %d, vlan_removal_en = %d\n",
+		start.vport_id, qdev->mtu + 0xe, vlan_removal_en);
+
+	for_each_rss(i) {
+		struct qede_fastpath *fp = &qdev->fp_array[i];
+		dma_addr_t p_phys_table;
+		uint16_t page_cnt;
+
+		p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
+		page_cnt = ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
+
+		ecore_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);	/* @DPDK */
+
+		rc = qdev->ops->q_rx_start(edev, i, i, 0,
+					   fp->sb_info->igu_sb_id,
+					   RX_PI,
+					   fp->rxq->rx_buf_size,
+					   fp->rxq->rx_bd_ring.p_phys_addr,
+					   p_phys_table,
+					   page_cnt,
+					   &fp->rxq->hw_rxq_prod_addr);
+		if (rc) {
+			DP_ERR(edev, "Start RXQ #%d failed %d\n", i, rc);
+			return rc;
+		}
+
+		fp->rxq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[RX_PI];
+
+		qede_update_rx_prod(qdev, fp->rxq);
+
+		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct qede_tx_queue *txq = fp->txqs[tc];
+			int txq_index = tc * QEDE_RSS_CNT(qdev) + i;
+
+			p_phys_table = ecore_chain_get_pbl_phys(&txq->tx_pbl);
+			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
+			rc = qdev->ops->q_tx_start(edev, i, txq_index,
+						   0,
+						   fp->sb_info->igu_sb_id,
+						   TX_PI(tc),
+						   p_phys_table, page_cnt,
+						   &txq->doorbell_addr);
+			if (rc) {
+				DP_ERR(edev, "Start txq %u failed %d\n",
+				       txq_index, rc);
+				return rc;
+			}
+
+			txq->hw_cons_ptr =
+			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
+			SET_FIELD(txq->tx_db.data.params,
+				  ETH_DB_DATA_DEST, DB_DEST_XCM);
+			SET_FIELD(txq->tx_db.data.params, ETH_DB_DATA_AGG_CMD,
+				  DB_AGG_CMD_SET);
+			SET_FIELD(txq->tx_db.data.params,
+				  ETH_DB_DATA_AGG_VAL_SEL,
+				  DQ_XCM_ETH_TX_BD_PROD_CMD);
+
+			txq->tx_db.data.agg_flags = DQ_XCM_ETH_DQ_CF_CMD;
+		}
+	}
+
+	/* Prepare and send the vport enable */
+	memset(&vport_update_params, 0, sizeof(vport_update_params));
+	vport_update_params.vport_id = start.vport_id;
+	vport_update_params.update_vport_active_flg = 1;
+	vport_update_params.vport_active_flg = 1;
+
+	/* @DPDK */
+	if (qed_info->mf_mode == MF_NPAR && qed_info->tx_switching) {
+		/* TBD: Check SRIOV enabled for VF */
+		vport_update_params.update_tx_switching_flg = 1;
+		vport_update_params.tx_switching_flg = 1;
+	}
+
+	if (!qede_config_rss(eth_dev, rss_params))
+		vport_update_params.update_rss_flg = 1;
+
+	DP_INFO(edev, "Updating RSS flag to %d\n",
+		vport_update_params.update_rss_flg);
+
+	rte_memcpy(&vport_update_params.rss_params, rss_params,
+	       sizeof(*rss_params));
+
+	rc = qdev->ops->vport_update(edev, &vport_update_params);
+	if (rc) {
+		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
+		return rc;
+	}
+
+	return 0;
+}
+
 #ifdef ENC_SUPPORTED
 static bool qede_tunn_exist(uint16_t flag)
 {
@@ -955,6 +1145,8 @@ int qede_dev_start(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	rc = qede_start_queues(eth_dev, true);
+
 	if (rc) {
 		DP_ERR(edev, "Failed to start queues\n");
 		/* TBD: free */
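
For context, the default RSS spread installed by qede_config_rss() above
comes from qede_rxfh_indir_default(), which round-robins indirection-table
entries over the active Rx queues. A standalone sketch of that fill
(names here are illustrative):

    #include <stdint.h>

    #define DEMO_IND_TABLE_SIZE 128     /* ECORE_RSS_IND_TABLE_SIZE */

    static void demo_fill_indir(uint16_t *table, uint32_t n_rx_rings)
    {
        uint32_t i;

        for (i = 0; i < DEMO_IND_TABLE_SIZE; i++)
            table[i] = i % n_rx_rings;  /* entry i -> queue i mod N */
    }
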
-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 07/10] qede: Add SRIOV support
  2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
                   ` (4 preceding siblings ...)
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 06/10] qede: Add L2 support Rasesh Mody
@ 2016-03-10 13:45 ` Rasesh Mody
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 08/10] qede: Add attention support Rasesh Mody
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 13:45 UTC (permalink / raw)
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/Makefile              |    2 +
 drivers/net/qede/base/bcm_osal.c       |   57 +-
 drivers/net/qede/base/ecore.h          |    1 +
 drivers/net/qede/base/ecore_dev.c      |  116 +-
 drivers/net/qede/base/ecore_hw.c       |    9 +-
 drivers/net/qede/base/ecore_init_ops.c |    4 +
 drivers/net/qede/base/ecore_int.c      |   31 +-
 drivers/net/qede/base/ecore_iov_api.h  |  933 +++++++++
 drivers/net/qede/base/ecore_l2.c       |  233 ++-
 drivers/net/qede/base/ecore_l2.h       |   50 +
 drivers/net/qede/base/ecore_mcp.c      |   30 +
 drivers/net/qede/base/ecore_spq.c      |    8 +-
 drivers/net/qede/base/ecore_sriov.c    | 3422 ++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_sriov.h    |  390 ++++
 drivers/net/qede/base/ecore_vf.c       | 1322 ++++++++++++
 drivers/net/qede/base/ecore_vf.h       |  415 ++++
 drivers/net/qede/base/ecore_vf_api.h   |  186 ++
 drivers/net/qede/base/ecore_vfpf_if.h  |  590 ++++++
 drivers/net/qede/qede_ethdev.c         |   20 +-
 drivers/net/qede/qede_ethdev.h         |    4 +-
 drivers/net/qede/qede_main.c           |  151 +-
 21 files changed, 7863 insertions(+), 111 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_iov_api.h
 create mode 100644 drivers/net/qede/base/ecore_sriov.c
 create mode 100644 drivers/net/qede/base/ecore_sriov.h
 create mode 100644 drivers/net/qede/base/ecore_vf.c
 create mode 100644 drivers/net/qede/base/ecore_vf.h
 create mode 100644 drivers/net/qede/base/ecore_vf_api.h
 create mode 100644 drivers/net/qede/base/ecore_vfpf_if.h

diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index eb08635..8970921 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -78,6 +78,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_init_ops.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_mcp.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_int.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/bcm_osal.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_sriov.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_vf.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 00b27ba..e7720c0 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -14,8 +14,9 @@
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_hw.h"
+#include "ecore_iov_api.h"
 
-unsigned long log2_align(unsigned long n)
+unsigned long qede_log2_align(unsigned long n)
 {
 	unsigned long ret = n ? 1 : 0;
 	unsigned long _n = n >> 1;
@@ -31,7 +32,7 @@ unsigned long log2_align(unsigned long n)
 	return ret;
 }
 
-u32 osal_log2(u32 val)
+u32 qede_osal_log2(u32 val)
 {
 	u32 log = 0;
 
@@ -41,6 +42,54 @@ u32 osal_log2(u32 val)
 	return log;
 }
 
+inline void qede_set_bit(u32 nr, unsigned long *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+inline void qede_clr_bit(u32 nr, unsigned long *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+inline bool qede_test_bit(u32 nr, unsigned long *addr)
+{
+	bool res;
+
+	rte_mb();
+	res = ((*addr) & (1UL << nr)) != 0;
+	rte_mb();
+	return res;
+}
+
+static inline u32 qede_ffz(unsigned long word)
+{
+	unsigned long first_zero;
+
+	first_zero = __builtin_ffsl(~word);
+	return first_zero ? (first_zero - 1) : OSAL_BITS_PER_UL;
+}
+
+inline u32 qede_find_first_zero_bit(unsigned long *addr, u32 limit)
+{
+	u32 i;
+	u32 nwords = 0;
+	OSAL_BUILD_BUG_ON(!limit);
+	nwords = (limit - 1) / OSAL_BITS_PER_UL + 1;
+	for (i = 0; i < nwords; i++)
+		if (~addr[i] != 0)
+			break;
+	return (i == nwords) ? limit : i * OSAL_BITS_PER_UL + qede_ffz(addr[i]);
+}
+
+void qede_vf_fill_driver_data(struct ecore_hwfn *hwfn,
+			      __rte_unused struct vf_pf_resc_request *resc_req,
+			      struct ecore_vf_acquire_sw_info *vf_sw_info)
+{
+	vf_sw_info->os_type = VFPF_ACQUIRE_OS_LINUX_USERSPACE;
+	vf_sw_info->override_fw_version = 1;
+}
+
 void *osal_dma_alloc_coherent(struct ecore_dev *p_dev,
 			      dma_addr_t *phys, size_t size)
 {
@@ -97,8 +146,8 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *p_dev,
 	return mz->addr;
 }
 
-u32 qed_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
-		   u8 *input_buf, u32 max_size, u8 *unzip_buf)
+u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
+		    u8 *input_buf, u32 max_size, u8 *unzip_buf)
 {
 	int rc;
 
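
The bit helpers added above give ecore an atomic bitmap primitive built on
GCC __sync builtins. A small usage sketch (hypothetical bitmap; the size
comment assumes a 64-bit unsigned long):

    static unsigned long demo_map[2];   /* 128-bit bitmap on LP64 hosts */

    static void demo_bitmap_ops(void)
    {
        qede_set_bit(5, demo_map);          /* atomic fetch-and-or */
        if (qede_test_bit(5, demo_map))     /* fenced read */
            qede_clr_bit(5, demo_map);      /* atomic fetch-and-and */

        /* index of the first free slot, or 128 if the map is full */
        (void)qede_find_first_zero_bit(demo_map, 128);
    }
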
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 2cd7a94..942aaee 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -50,6 +50,7 @@ enum ecore_nvm_cmd {
 #ifndef LINUX_REMOVE
 #if !defined(CONFIG_ECORE_L2)
 #define CONFIG_ECORE_L2
+#define CONFIG_ECORE_SRIOV
 #endif
 #endif
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 734d36e..f84266f 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -21,6 +21,8 @@
 #include "ecore_init_fw_funcs.h"
 #include "ecore_sp_commands.h"
 #include "ecore_dev_api.h"
+#include "ecore_sriov.h"
+#include "ecore_vf.h"
 #include "ecore_mcp.h"
 #include "ecore_hw_defs.h"
 #include "mcp_public.h"
@@ -126,6 +128,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
+	if (IS_VF(p_dev))
+		return;
+
 	OSAL_FREE(p_dev, p_dev->fw_data);
 	p_dev->fw_data = OSAL_NULL;
 
@@ -149,6 +154,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
 		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
 		ecore_int_free(p_hwfn);
+		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		/* @@@TBD Flush work-queue ? */
 	}
@@ -161,7 +167,11 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 	struct init_qm_port_params *p_qm_port;
 	u16 num_pqs, multi_cos_tcs = 1;
+#ifdef CONFIG_ECORE_SRIOV
+	u16 num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
+#else
 	u16 num_vfs = 0;
+#endif
 
 	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
 
@@ -363,6 +373,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	struct ecore_eq *p_eq;
 	int i;
 
+	if (IS_VF(p_dev))
+		return rc;
+
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(struct ecore_fw_data));
 	if (!p_dev->fw_data)
@@ -440,6 +453,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_iov_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
+
 		/* EQ */
 		p_eq = ecore_eq_alloc(p_hwfn, 256);
 		if (!p_eq)
@@ -481,6 +498,9 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
+	if (IS_VF(p_dev))
+		return;
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
@@ -496,6 +516,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 			    p_hwfn->mcp_info->mfw_mb_length);
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
+
+		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
 
@@ -1141,23 +1163,6 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 	/* Pure runtime initializations - directly to the HW  */
 	ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, true, true);
 
-	/* PCI relaxed ordering causes a decrease in the performance on some
-	 * systems. Till a root cause is found, disable this attribute in the
-	 * PCI config space.
-	 */
-#if 0				/* @DPDK */
-	pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP);
-	if (!pos) {
-		DP_NOTICE(p_hwfn, true,
-			  "Failed to find the PCI Express"
-			  " Capability structure in the PCI config space\n");
-		return ECORE_IO;
-	}
-	OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
-	ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
-	OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, ctrl);
-#endif /* @DPDK */
-
 	rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
 	if (rc)
 		return rc;
@@ -1248,13 +1253,22 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	u32 load_code, param;
 	int i, j;
 
-	rc = ecore_init_fw_data(p_dev, bin_fw_data);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (IS_PF(p_dev)) {
+		rc = ecore_init_fw_data(p_dev, bin_fw_data);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		if (IS_VF(p_dev)) {
+			rc = ecore_vf_pf_init(p_hwfn);
+			if (rc)
+				return rc;
+			continue;
+		}
+
 		/* Enable DMAE in PXP */
 		rc = ecore_change_pci_hwfn(p_hwfn, p_hwfn->p_main_ptt, true);
 
@@ -1414,6 +1428,11 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
+		if (IS_VF(p_dev)) {
+			ecore_vf_pf_int_cleanup(p_hwfn);
+			continue;
+		}
+
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
@@ -1452,14 +1471,16 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		OSAL_MSLEEP(1);
 	}
 
-	/* Disable DMAE in PXP - in CMT, this should only be done for
-	 * first hw-function, and only after all transactions have
-	 * stopped for all active hw-functions.
-	 */
-	t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-				     p_dev->hwfns[0].p_main_ptt, false);
-	if (t_rc != ECORE_SUCCESS)
-		rc = t_rc;
+	if (IS_PF(p_dev)) {
+		/* Disable DMAE in PXP - in CMT, this should only be done for
+		 * first hw-function, and only after all transactions have
+		 * stopped for all active hw-functions.
+		 */
+		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
+					     p_dev->hwfns[0].p_main_ptt, false);
+		if (t_rc != ECORE_SUCCESS)
+			rc = t_rc;
+	}
 
 	return rc;
 }
@@ -1472,6 +1493,11 @@ void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
 		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
 
+		if (IS_VF(p_dev)) {
+			ecore_vf_pf_int_cleanup(p_hwfn);
+			continue;
+		}
+
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
 			   "Shutting down the fastpath\n");
 
@@ -1497,6 +1523,9 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return;
+
 	/* Re-open incoming traffic */
 	ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
@@ -1526,6 +1555,13 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		if (IS_VF(p_dev)) {
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc)
+				return rc;
+			continue;
+		}
+
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
 
 		/* Check for incorrect states */
@@ -1655,7 +1691,11 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
 
 	OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 
+#ifdef CONFIG_ECORE_SRIOV
+	max_vf_vlan_filters = ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS;
+#else
 	max_vf_vlan_filters = 0;
+#endif
 
 	ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
 	resc_num[ECORE_SB] = OSAL_MIN_T(u32,
@@ -2018,6 +2058,10 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn,
 {
 	enum _ecore_status_t rc;
 
+	rc = ecore_iov_hw_info(p_hwfn, p_hwfn->p_main_ptt);
+	if (rc)
+		return rc;
+
 	/* TODO In get_hw_info, amongst others:
 	 * Get MCP FW revision and determine according to it the supported
 	 * features (e.g. DCB)
@@ -2175,6 +2219,9 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 {
 	int j;
 
+	if (IS_VF(p_dev))
+		return;
+
 	for_each_hwfn(p_dev, j) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
 
@@ -2274,6 +2321,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, int personality)
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	enum _ecore_status_t rc;
 
+	if (IS_VF(p_dev))
+		return ecore_vf_hw_prepare(p_dev);
+
 	/* Store the precompiled init data ptrs */
 	ecore_init_iro_array(p_dev);
 
@@ -2325,6 +2375,11 @@ void ecore_hw_remove(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		if (IS_VF(p_dev)) {
+			ecore_vf_pf_release(p_hwfn);
+			continue;
+		}
+
 		ecore_init_free(p_hwfn);
 		ecore_hw_hwfn_free(p_hwfn);
 		ecore_mcp_free(p_hwfn);
@@ -2952,6 +3007,11 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coalesce_timeset;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
+		return ECORE_INVAL;
+	}
+
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 5a1d173..f21783f 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -13,6 +13,7 @@
 #include "ecore_hw.h"
 #include "reg_addr.h"
 #include "ecore_utils.h"
+#include "ecore_iov_api.h"
 
 #ifndef ASIC_ONLY
 #define ECORE_EMUL_FACTOR 2000
@@ -243,8 +244,12 @@ static void ecore_memcpy_hw(struct ecore_hwfn *p_hwfn,
 		quota = OSAL_MIN_T(osal_size_t, n - done,
 				   PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE);
 
-		ecore_ptt_set_win(p_hwfn, p_ptt, hw_addr + done);
-		hw_offset = ecore_ptt_get_bar_addr(p_ptt);
+		if (IS_PF(p_hwfn->p_dev)) {
+			ecore_ptt_set_win(p_hwfn, p_ptt, hw_addr + done);
+			hw_offset = ecore_ptt_get_bar_addr(p_ptt);
+		} else {
+			hw_offset = hw_addr + done;
+		}
 
 		dw_count = quota / 4;
 		host_addr = (u32 *) ((u8 *) addr + done);
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index eeaabb6..326eb92 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -16,6 +16,7 @@
 #include "ecore_init_fw_funcs.h"
 
 #include "ecore_iro_values.h"
+#include "ecore_sriov.h"
 #include "ecore_gtt_values.h"
 #include "reg_addr.h"
 #include "ecore_init_ops.h"
@@ -102,6 +103,9 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_rt_data *rt_data = &p_hwfn->rt_data;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
 	rt_data->b_valid = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 				       sizeof(bool) * RUNTIME_ARRAY_SIZE);
 	if (!rt_data->b_valid)
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 91e8ad2..f1cc538 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -16,6 +16,8 @@
 #include "ecore_int.h"
 #include "reg_addr.h"
 #include "ecore_hw.h"
+#include "ecore_sriov.h"
+#include "ecore_vf.h"
 #include "ecore_hw_defs.h"
 #include "ecore_hsi_common.h"
 #include "ecore_mcp.h"
@@ -373,6 +375,9 @@ void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	struct cau_pi_entry pi_entry;
 	u32 sb_offset, pi_offset;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return;		/* @@@TBD MichalK- VF CAU... */
+
 	sb_offset = igu_sb_id * PIS_PER_SB;
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
@@ -401,7 +406,8 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 	sb_info->sb_ack = 0;
 	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
 
-	ecore_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
+	if (IS_PF(p_hwfn->p_dev))
+		ecore_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
 				      sb_info->igu_sb_id, 0, 0);
 }
 
@@ -421,8 +427,10 @@ static u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
 	/* Assuming continuous set of IGU SBs dedicated for given PF */
 	if (sb_id == ECORE_SP_SB_ID)
 		igu_sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
-	else
+	else if (IS_PF(p_hwfn->p_dev))
 		igu_sb_id = sb_id + p_hwfn->hw_info.p_igu_info->igu_base_sb;
+	else
+		igu_sb_id = ecore_vf_get_igu_sb_id(p_hwfn, sb_id);
 
 	if (sb_id == ECORE_SP_SB_ID)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
@@ -457,9 +465,17 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 	/* The igu address will hold the absolute address that needs to be
 	 * written to for a specific status block
 	 */
-	sb_info->igu_addr = (u8 OSAL_IOMEM *) p_hwfn->regview +
+	if (IS_PF(p_hwfn->p_dev)) {
+		sb_info->igu_addr = (u8 OSAL_IOMEM *) p_hwfn->regview +
 		    GTT_BAR0_MAP_REG_IGU_CMD + (sb_info->igu_sb_id << 3);
 
+	} else {
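+		/* VF BAR0 has a fixed layout; the IGU command registers
+		 * start at PXP_VF_BAR0_START_IGU.
+		 */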
+		sb_info->igu_addr =
+		    (u8 OSAL_IOMEM *) p_hwfn->regview +
+		    PXP_VF_BAR0_START_IGU +
+		    ((IGU_CMD_INT_ACK_BASE + sb_info->igu_sb_id) << 3);
+	}
+
 	sb_info->flags |= ECORE_SB_INFO_INIT;
 
 	ecore_int_sb_setup(p_hwfn, p_ptt, sb_info);
@@ -687,6 +703,9 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
 {
 	p_hwfn->b_int_enabled = 0;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return;
+
 	ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, 0);
 }
 
@@ -853,8 +872,14 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
 	p_igu_info->igu_dsb_id = 0xffff;
 	p_igu_info->igu_base_sb_iov = 0xffff;
 
+#ifdef CONFIG_ECORE_SRIOV
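+	/* Range of VF function IDs behind this PF; CAM entries owned by
+	 * these IDs are accounted as VF status blocks in the scan below.
+	 */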
+	min_vf = p_hwfn->hw_info.first_vf_in_pf;
+	max_vf = p_hwfn->hw_info.first_vf_in_pf +
+	    p_hwfn->p_dev->sriov_info.total_vfs;
+#else
 	min_vf = 0;
 	max_vf = 0;
+#endif
 
 	for (sb_id = 0; sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
 	     sb_id++) {
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
new file mode 100644
index 0000000..6e446f6
--- /dev/null
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -0,0 +1,933 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_SRIOV_API_H__
+#define __ECORE_SRIOV_API_H__
+
+#include "ecore_status.h"
+
+#define ECORE_VF_ARRAY_LENGTH (3)
+
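+/* PF/VF personality helpers - a device acts as either PF or VF.
+ * IS_PF_SRIOV() is true on a PF whose SR-IOV capability exposes at least
+ * one VF; IS_PF_SRIOV_ALLOC() once the per-VF database has been allocated.
+ */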
+#define IS_VF(p_dev)		((p_dev)->b_is_vf)
+#define IS_PF(p_dev)		(!((p_dev)->b_is_vf))
+#ifdef CONFIG_ECORE_SRIOV
+#define IS_PF_SRIOV(p_hwfn)	(!!((p_hwfn)->p_dev->sriov_info.total_vfs))
+#else
+#define IS_PF_SRIOV(p_hwfn)	(0)
+#endif
+#define IS_PF_SRIOV_ALLOC(p_hwfn)	(!!((p_hwfn)->pf_iov_info))
+#define IS_PF_PDA(p_hwfn)	0	/* @@TBD Michalk */
+
+/* @@@ TBD MichalK - what should this number be*/
+#define ECORE_MAX_VF_CHAINS_PER_PF 16
+
+/* vport update extended feature tlvs flags */
+enum ecore_iov_vport_update_flag {
+	ECORE_IOV_VP_UPDATE_ACTIVATE = 0,
+	ECORE_IOV_VP_UPDATE_VLAN_STRIP = 1,
+	ECORE_IOV_VP_UPDATE_TX_SWITCH = 2,
+	ECORE_IOV_VP_UPDATE_MCAST = 3,
+	ECORE_IOV_VP_UPDATE_ACCEPT_PARAM = 4,
+	ECORE_IOV_VP_UPDATE_RSS = 5,
+	ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN = 6,
+	ECORE_IOV_VP_UPDATE_SGE_TPA = 7,
+	ECORE_IOV_VP_UPDATE_MAX = 8,
+};
+
+struct ecore_mcp_link_params;
+struct ecore_mcp_link_state;
+struct ecore_mcp_link_capabilities;
+
+/* These defines are used by the hw-channel; should never change order */
+#define VFPF_ACQUIRE_OS_LINUX (0)
+#define VFPF_ACQUIRE_OS_WINDOWS (1)
+#define VFPF_ACQUIRE_OS_ESX (2)
+#define VFPF_ACQUIRE_OS_SOLARIS (3)
+#define VFPF_ACQUIRE_OS_LINUX_USERSPACE (4)
+
+struct ecore_vf_acquire_sw_info {
+	u32 driver_version;
+	u8 os_type;
+	bool override_fw_version;
+};
+
+struct ecore_public_vf_info {
+	/* These copies will later be reflected in the bulletin board,
+	 * but this copy should be newer.
+	 */
+	u8 forced_mac[ETH_ALEN];
+	u16 forced_vlan;
+};
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+/* This is SW channel related only... */
+enum mbx_state {
+	VF_PF_UNKNOWN_STATE = 0,
+	VF_PF_WAIT_FOR_START_REQUEST = 1,
+	VF_PF_WAIT_FOR_NEXT_CHUNK_OF_REQUEST = 2,
+	VF_PF_REQUEST_IN_PROCESSING = 3,
+	VF_PF_RESPONSE_READY = 4,
+};
+
+struct ecore_iov_sw_mbx {
+	enum mbx_state mbx_state;
+
+	u32 request_size;
+	u32 request_offset;
+
+	u32 response_size;
+	u32 response_offset;
+};
+
+/**
+ * @brief Get the vf sw mailbox params
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return struct ecore_iov_sw_mbx*
+ */
+struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
+						 u16 rel_vf_id);
+#endif
+
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief mark/clear all VFs before/after an incoming PCIe sriov
+ *        disable.
+ *
+ * @param p_hwfn
+ * @param to_disable
+ */
+void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn, u8 to_disable);
+
+/**
+ * @brief mark/clear chosen VFs before/after an incoming PCIe
+ *        sriov disable.
+ *
+ * @param p_hwfn
+ * @param to_disable
+ */
+void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
+				 u16 rel_vf_id, u8 to_disable);
+
+/**
+ *
+ * @brief ecore_iov_init_hw_for_vf - initialize the HW for
+ *        enabling access of a VF. Also includes preparing the
+ *        IGU for VF access. This needs to be called AFTER hw is
+ *        initialized and BEFORE VF is loaded inside the VM.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param rel_vf_id
+ * @param num_rx_queues
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u16 rel_vf_id, u16 num_rx_queues);
+
+/**
+ * @brief ecore_iov_process_mbx_req - process a request received
+ *        from the VF
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vfid
+ */
+void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt, int vfid);
+
+/**
+ * @brief ecore_iov_release_hw_for_vf - called once the upper layer
+ *        knows the VF is done with; any resources allocated for the VF
+ *        can be released at this point. This must be done once we know
+ *        the VF is no longer loaded in the VM.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param rel_vf_id
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u16 rel_vf_id);
+
+#ifndef LINUX_REMOVE
+/**
+ * @brief ecore_iov_set_vf_ctx - set a context for a given VF
+ *
+ * @param p_hwfn
+ * @param vf_id
+ * @param ctx
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
+					  u16 vf_id, void *ctx);
+#endif
+
+/**
+ * @brief FLR cleanup for all VFs
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt);
+
+/**
+ * @brief FLR cleanup for single VF
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param rel_vf_id
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt, u16 rel_vf_id);
+
+/**
+ * @brief Update the bulletin with link information. Notice this does NOT
+ *        send a bulletin update, only updates the PF's bulletin.
+ *
+ * @param p_hwfn
+ * @param p_vf
+ * @param params - the link params to use for the VF link configuration
+ * @param link - the link output to use for the VF link configuration
+ * @param p_caps - the link default capabilities.
+ */
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps);
+
+/**
+ * @brief Returns link information as perceived by VF.
+ *
+ * @param p_hwfn
+ * @param p_vf
+ * @param p_params - the link params visible to vf.
+ * @param p_link - the link state visible to vf.
+ * @param p_caps - the link default capabilities visible to vf.
+ */
+void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps);
+
+/**
+ * @brief return if the VF is pending FLR
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return bool
+ */
+bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief Check whether the given VF ID @rel_vf_id is valid
+ *        w.r.t. the @b_enabled_only value:
+ *        if b_enabled_only is true, only an enabled VF ID is valid;
+ *        otherwise any VF ID less than max_vfs is valid.
+ *
+ * @param p_hwfn
+ * @param rel_vf_id - Relative VF ID
+ * @param b_enabled_only - consider only enabled VF
+ *
+ * @return bool - true for valid VF ID
+ */
+bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
+			     int rel_vf_id, bool b_enabled_only);
+
+/**
+ * @brief Get VF's public info structure
+ *
+ * @param p_hwfn
+ * @param vfid - Relative VF ID
+ * @param b_enabled_only - false if want to access even if vf is disabled
+ *
+ * @return struct ecore_public_vf_info *
+ */
+struct ecore_public_vf_info *ecore_iov_get_public_vf_info(struct ecore_hwfn
+							  *p_hwfn, u16 vfid,
+							  bool b_enabled_only);
+
+/**
+ * @brief Set pending events bitmap for given @vfid
+ *
+ * @param p_hwfn
+ * @param vfid
+ */
+void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid);
+
+/**
+ * @brief Copy the pending events bitmap into @events and clear
+ *	  the original copy of the events
+ *
+ * @param p_hwfn
+ */
+void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
+					       u64 *events);
+
+/**
+ * @brief Copy VF's message to PF's buffer
+ *
+ * @param p_hwfn
+ * @param ptt
+ * @param vfid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *ptt, int vfid);
+/**
+ * @brief Set the forced MAC address in the PF's copy of the bulletin
+ *        board and configure FW/HW to support the configuration.
+ *
+ * @param p_hwfn
+ * @param mac
+ * @param vfid
+ */
+void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
+				       u8 *mac, int vfid);
+
+/**
+ * @brief Set MAC address in PFs copy of bulletin board without
+ *        configuring FW/HW.
+ *
+ * @param p_hwfn
+ * @param mac
+ * @param vfid
+ */
+enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
+						u8 *mac, int vfid);
+
+/**
+ * @brief Set the forced VLAN [pvid] in the PF's copy of the bulletin
+ *        board and configure FW/HW to support the configuration.
+ *        Setting pvid 0 clears the feature.
+ * @param p_hwfn
+ * @param pvid
+ * @param vfid
+ */
+void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
+					u16 pvid, int vfid);
+
+/**
+ * @brief Set the default behaviour of a VF when no VLANs are configured
+ *        for it: whether to accept only untagged traffic or all traffic.
+ *        Must be called prior to the VF vport-start.
+ *
+ * @param p_hwfn
+ * @param b_untagged_only
+ * @param vfid
+ *
+ * @return ECORE_SUCCESS if configuration would stick.
+ */
+enum _ecore_status_t
+ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
+					       bool b_untagged_only, int vfid);
+/**
+ * @brief Get VFs opaque fid.
+ *
+ * @param p_hwfn
+ * @param vfid
+ * @param opaque_fid
+ */
+void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
+				  u16 *opaque_fid);
+
+/**
+ * @brief Get VFs VPORT id.
+ *
+ * @param p_hwfn
+ * @param vfid
+ * @param vport id
+ */
+void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
+				u8 *p_vport_id);
+
+/**
+ * @brief Check if VF has VPORT instance. This can be used
+ *	  to check if VPORT is active.
+ *
+ * @param p_hwfn
+ */
+bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid);
+
+/**
+ * @brief PF posts the bulletin to the VF
+ *
+ * @param p_hwfn
+ * @param p_vf
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
+						int vfid,
+						struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Check if given VF (@vfid) is marked as stopped
+ *
+ * @param p_hwfn
+ * @param vfid
+ *
+ * @return bool : true if stopped
+ */
+bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid);
+
+/**
+ * @brief Configure VF anti spoofing
+ *
+ * @param p_hwfn
+ * @param vfid
+ * @param val - spoofchk value - true/false
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
+					    int vfid, bool val);
+
+/**
+ * @brief Get VF's configured spoof value.
+ *
+ * @param p_hwfn
+ * @param vfid
+ *
+ * @return bool - spoofchk value - true/false
+ */
+bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid);
+
+/**
+ * @brief Check for SRIOV sanity by PF.
+ *
+ * @param p_hwfn
+ * @param vfid
+ *
+ * @return bool - true if sanity checks pass, else false
+ */
+bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid);
+
+/**
+ * @brief Get the num of VF chains.
+ *
+ * @param p_hwfn
+ *
+ * @return u8
+ */
+u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief Get vf request mailbox params
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ * @param pp_req_virt_addr
+ * @param p_req_virt_size
+ */
+void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
+					  u16 rel_vf_id,
+					  void **pp_req_virt_addr,
+					  u16 *p_req_virt_size);
+
+/**
+ * @brief Get vf mailbox params
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ * @param pp_reply_virt_addr
+ * @param p_reply_virt_size
+ */
+void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
+					    u16 rel_vf_id,
+					    void **pp_reply_virt_addr,
+					    u16 *p_reply_virt_size);
+
+/**
+ * @brief Validate if the given length is a valid vfpf message
+ *        length
+ *
+ * @param length
+ *
+ * @return bool
+ */
+bool ecore_iov_is_valid_vfpf_msg_length(u32 length);
+
+/**
+ * @brief Return the max pfvf message length
+ *
+ * @return u32
+ */
+u32 ecore_iov_pfvf_msg_length(void);
+
+/**
+ * @brief Returns forced MAC address if one is configured
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return OSAL_NULL if the MAC isn't forced; otherwise, returns the MAC.
+ */
+u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief Returns pvid if one is configured
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return 0 if no pvid is configured, otherwise the pvid.
+ */
+u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
+				       u16 rel_vf_id);
+/**
+ * @brief Configure VFs tx rate
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vfid
+ * @param val - tx rate value in Mb/sec.
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 int vfid, int val);
+
+/**
+ * @brief - Retrieves the statistics associated with a VF
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vfid
+ * @param p_stats - this will be filled with the VF statistics
+ *
+ * @return ECORE_SUCCESS iff statistics were retrieved. Error otherwise.
+ */
+enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    int vfid,
+					    struct ecore_eth_stats *p_stats);
+
+/**
+ * @brief - Retrieves num of rxqs chains
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return num of rxqs chains.
+ */
+u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Retrieves num of active rxqs chains
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Retrieves ctx pointer
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Retrieves VF`s num sbs
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Return true if VF is waiting for acquire
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Return true if VF is acquired but not initialized
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
+					      u16 rel_vf_id);
+
+/**
+ * @brief - Return true if VF is acquired and initialized
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return
+ */
+bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief - Get VF's vport min rate configured.
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return - rate in Mbps
+ */
+int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
+
+/**
+ * @brief - Configure min rate for VF's vport.
+ * @param p_dev
+ * @param vfid
+ * @param - rate in Mbps
+ *
+ * @return
+ */
+enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
+						     int vfid, u32 rate);
+#else
+static OSAL_INLINE void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn,
+						     u8 to_disable)
+{
+}
+
+static OSAL_INLINE void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
+						    u16 rel_vf_id,
+						    u8 to_disable)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_init_hw_for_vf(struct
+								 ecore_hwfn
+								 *p_hwfn,
+								 struct
+								 ecore_ptt
+								 *p_ptt,
+								 u16 rel_vf_id,
+								 u16
+								 num_rx_queues)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
+						  struct ecore_ptt *p_ptt,
+						  int vfid)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_release_hw_for_vf(struct
+								    ecore_hwfn
+								    *p_hwfn,
+								    struct
+								    ecore_ptt
+								    *p_ptt,
+								    u16
+								    rel_vf_id)
+{
+	return ECORE_SUCCESS;
+}
+
+#ifndef LINUX_REMOVE
+static OSAL_INLINE enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn
+							     *p_hwfn, u16 vf_id,
+							     void *ctx)
+{
+	return ECORE_INVAL;
+}
+#endif
+static OSAL_INLINE enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct
+								 ecore_hwfn
+								 *p_hwfn,
+								 struct
+								 ecore_ptt
+								 *p_ptt)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_single_vf_flr_cleanup(
+	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 rel_vf_id)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_iov_set_link(struct ecore_hwfn *p_hwfn, u16 vfid,
+					   struct ecore_mcp_link_params *params,
+					   struct ecore_mcp_link_state *link,
+					   struct ecore_mcp_link_capabilities
+					   *p_caps)
+{
+}
+
+static OSAL_INLINE void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, u16 vfid,
+					   struct ecore_mcp_link_params *params,
+					   struct ecore_mcp_link_state *link,
+					   struct ecore_mcp_link_capabilities
+					   *p_caps)
+{
+}
+
+static OSAL_INLINE bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
+						    u16 rel_vf_id)
+{
+	return false;
+}
+
+static OSAL_INLINE bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
+						int rel_vf_id,
+						bool b_enabled_only)
+{
+	return false;
+}
+
+static OSAL_INLINE struct ecore_public_vf_info *
+	ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn, u16 vfid,
+				  bool b_enabled_only)
+{
+	return OSAL_NULL;
+}
+
+static OSAL_INLINE void ecore_iov_pf_add_pending_events(struct ecore_hwfn
+							*p_hwfn, u8 vfid)
+{
+}
+
+static OSAL_INLINE void ecore_iov_pf_get_and_clear_pending_events(struct
+								  ecore_hwfn
+								  *p_hwfn,
+								  u64 *events)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn
+							      *p_hwfn,
+							      struct ecore_ptt
+							      *ptt, int vfid)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn
+							  *p_hwfn, u8 *mac,
+							  int vfid)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_bulletin_set_mac(struct
+								   ecore_hwfn
+								   *p_hwfn,
+								   u8 *mac,
+								   int vfid)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn
+							   *p_hwfn, u16 pvid,
+							   int vfid)
+{
+}
+
+static OSAL_INLINE void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn,
+						     int vfid, u16 *opaque_fid)
+{
+}
+
+static OSAL_INLINE void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn,
+						   int vfid, u8 *p_vport_id)
+{
+}
+
+static OSAL_INLINE bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn
+							*p_hwfn, int vfid)
+{
+	return false;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_post_vf_bulletin(struct
+								   ecore_hwfn
+								   *p_hwfn,
+								   int vfid,
+								   struct
+								   ecore_ptt
+								   *p_ptt)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn,
+						int vfid)
+{
+	return false;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn
+							       *p_hwfn,
+							       int vfid,
+							       bool val)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn,
+					       int vfid)
+{
+	return false;
+}
+
+static OSAL_INLINE bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn,
+						  int vfid)
+{
+	return false;
+}
+
+static OSAL_INLINE u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
+{
+	return 0;
+}
+
+static OSAL_INLINE void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn
+							     *p_hwfn,
+							     u16 rel_vf_id,
+							     void
+							     **pp_req_virt_addr,
+							     u16 *
+							     p_req_virt_size)
+{
+}
+
+static OSAL_INLINE void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn
+							       *p_hwfn,
+							       u16 rel_vf_id,
+							       void
+						       **pp_reply_virt_addr,
+							       u16 *
+						       p_reply_virt_size)
+{
+}
+
+static OSAL_INLINE bool ecore_iov_is_valid_vfpf_msg_length(u32 length)
+{
+	return false;
+}
+
+static OSAL_INLINE u32 ecore_iov_pfvf_msg_length(void)
+{
+	return 0;
+}
+
+static OSAL_INLINE u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn
+							 *p_hwfn, u16 rel_vf_id)
+{
+	return OSAL_NULL;
+}
+
+static OSAL_INLINE u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn
+							  *p_hwfn,
+							  u16 rel_vf_id)
+{
+	return 0;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_configure_tx_rate(struct
+								    ecore_hwfn
+								    *p_hwfn,
+								    struct
+								    ecore_ptt
+								    *p_ptt,
+								    int vfid,
+								    int val)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
+						u16 rel_vf_id)
+{
+	return 0;
+}
+
+static OSAL_INLINE u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn
+						       *p_hwfn, u16 rel_vf_id)
+{
+	return 0;
+}
+
+static OSAL_INLINE void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
+					      u16 rel_vf_id)
+{
+	return OSAL_NULL;
+}
+
+static OSAL_INLINE u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
+					       u16 rel_vf_id)
+{
+	return 0;
+}
+
+static OSAL_INLINE bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn
+							 *p_hwfn, u16 rel_vf_id)
+{
+	return false;
+}
+
+static OSAL_INLINE bool ecore_iov_is_vf_acquired_not_initialized(struct
+								 ecore_hwfn
+								 *p_hwfn,
+								 u16 rel_vf_id)
+{
+	return false;
+}
+
+static OSAL_INLINE bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
+						    u16 rel_vf_id)
+{
+	return false;
+}
+
+static OSAL_INLINE int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn,
+						 int vfid)
+{
+	return 0;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_configure_min_tx_rate(
+	struct ecore_dev *p_dev, int vfid, u32 rate)
+{
+	return ECORE_INVAL;
+}
+#endif
+#endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 8d713e7..23ea426 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -22,6 +22,8 @@
 #include "reg_addr.h"
 #include "ecore_int.h"
 #include "ecore_hw.h"
+#include "ecore_vf.h"
+#include "ecore_sriov.h"
 #include "ecore_mcp.h"
 
 #define ECORE_MAX_SGES_NUM 16
@@ -106,6 +108,14 @@ enum _ecore_status_t
 ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
 		     struct ecore_sp_vport_start_params *p_params)
 {
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_vport_start(p_hwfn, p_params->vport_id,
+					       p_params->mtu,
+					       p_params->remove_inner_vlan,
+					       p_params->tpa_mode,
+					       p_params->max_buffers_per_cqe,
+					       p_params->only_untagged);
+
 	return ecore_sp_eth_vport_start(p_hwfn, p_params);
 }
 
@@ -339,6 +349,11 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
 	u8 abs_vport_id = 0, val;
 	u16 wordval;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		rc = ecore_vf_pf_vport_update(p_hwfn, p_params);
+		return rc;
+	}
+
 	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -428,6 +443,9 @@ enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 	u8 abs_vport_id = 0;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_vport_stop(p_hwfn);
+
 	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -450,6 +468,19 @@ enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
+static enum _ecore_status_t
+ecore_vf_pf_accept_flags(struct ecore_hwfn *p_hwfn,
+			 struct ecore_filter_accept_flags *p_accept_flags)
+{
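+	/* VFs cannot program accept flags directly; wrap them in a
+	 * vport-update request sent over the VF->PF channel.
+	 */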
+	struct ecore_sp_vport_update_params s_params;
+
+	OSAL_MEMSET(&s_params, 0, sizeof(s_params));
+	OSAL_MEMCPY(&s_params.accept_flags, p_accept_flags,
+		    sizeof(struct ecore_filter_accept_flags));
+
+	return ecore_vf_pf_vport_update(p_hwfn, &s_params);
+}
+
 enum _ecore_status_t
 ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 			u8 vport,
@@ -474,6 +505,13 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 
 		update_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
 
+		if (IS_VF(p_dev)) {
+			rc = ecore_vf_pf_accept_flags(p_hwfn, &accept_flags);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+			continue;
+		}
+
 		rc = ecore_sp_vport_update(p_hwfn, &update_params,
 					   comp_mode, p_comp_data);
 		if (rc != ECORE_SUCCESS) {
@@ -593,6 +631,17 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 	u64 init_prod_val = 0;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		return ecore_vf_pf_rxq_start(p_hwfn,
+					     rx_queue_id,
+					     sb,
+					     sb_index,
+					     bd_max_bytes,
+					     bd_chain_phys_addr,
+					     cqe_pbl_addr,
+					     cqe_pbl_size, pp_prod);
+	}
+
 	rc = ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_l2_queue);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -651,6 +700,13 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	u16 qid, abs_rx_q_id = 0;
 	u8 i;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_rxqs_update(p_hwfn,
+					       rx_queue_id,
+					       num_rxqs,
+					       complete_cqe_flg,
+					       complete_event_flg);
+
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.comp_mode = comp_mode;
 	init_data.p_comp_data = p_comp_data;
@@ -697,6 +753,10 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	u16 abs_rx_q_id = 0;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
+					    cqe_completion);
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = p_rx_cid->cid;
@@ -814,6 +874,14 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 	u8 abs_stats_id = 0;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		return ecore_vf_pf_txq_start(p_hwfn,
+					     tx_queue_id,
+					     sb,
+					     sb_index,
+					     pbl_addr, pbl_size, pp_doorbell);
+	}
+
 	rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -867,6 +935,9 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	struct ecore_sp_init_data init_data;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = p_tx_cid->cid;
@@ -1274,6 +1345,11 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		if (IS_VF(p_dev)) {
+			ecore_vf_pf_filter_mcast(p_hwfn, p_filter_cmd);
+			continue;
+		}
+
 		rc = ecore_sp_eth_filter_mcast(p_hwfn,
 					       p_hwfn->hw_info.opaque_fid,
 					       p_filter_cmd,
@@ -1297,6 +1373,11 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		if (IS_VF(p_dev)) {
+			rc = ecore_vf_pf_filter_ucast(p_hwfn, p_filter_cmd);
+			continue;
+		}
+
 		rc = ecore_sp_eth_filter_ucast(p_hwfn,
 					       p_hwfn->hw_info.opaque_fid,
 					       p_filter_cmd,
@@ -1308,14 +1389,96 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
 	return rc;
 }
 
+/* IOV related */
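+/* PF-side ramrods covering the VF lifecycle: VF_START is posted on the
+ * slow path queue when a VF loads, VF_STOP when it is torn down. Both
+ * address the function through the VF's opaque FID.
+ */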
+enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
+				       u32 concrete_vfid, u16 opaque_vfid)
+{
+	struct vf_start_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct ecore_sp_init_data init_data;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = opaque_vfid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   COMMON_RAMROD_VF_START,
+				   PROTOCOLID_COMMON, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.vf_start;
+
+	p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
+	p_ramrod->opaque_fid = OSAL_CPU_TO_LE16(opaque_vfid);
+
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		p_ramrod->personality = PERSONALITY_ETH;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true, "Unknown VF personality %d\n",
+			  p_hwfn->hw_info.personality);
+		return ECORE_INVAL;
+	}
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_sp_vf_update(struct ecore_hwfn *p_hwfn)
+{
+	return ECORE_NOTIMPL;
+}
+
+enum _ecore_status_t ecore_sp_vf_stop(struct ecore_hwfn *p_hwfn,
+				      u32 concrete_vfid, u16 opaque_vfid)
+{
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+	struct vf_stop_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = opaque_vfid;
+	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   COMMON_RAMROD_VF_STOP,
+				   PROTOCOLID_COMMON, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.vf_stop;
+
+	p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
 /* Statistics related code */
 static void __ecore_get_vport_pstats_addrlen(struct ecore_hwfn *p_hwfn,
 					     u32 *p_addr, u32 *p_len,
 					     u16 statistics_bin)
 {
-	*p_addr = BAR0_MAP_REG_PSDM_RAM +
+	if (IS_PF(p_hwfn->p_dev)) {
+		*p_addr = BAR0_MAP_REG_PSDM_RAM +
 		    PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
-	*p_len = sizeof(struct eth_pstorm_per_queue_stat);
+		*p_len = sizeof(struct eth_pstorm_per_queue_stat);
+	} else {
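+		/* VFs learn the statistics RAM location from the PF's
+		 * ACQUIRE response instead of computing BAR0 offsets.
+		 */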
+		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+		struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+		*p_addr = p_resp->pfdev_info.stats_info.pstats.address;
+		*p_len = p_resp->pfdev_info.stats_info.pstats.len;
+	}
 }
 
 static void __ecore_get_vport_pstats(struct ecore_hwfn *p_hwfn,
@@ -1349,9 +1512,17 @@ static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn,
 	struct tstorm_per_port_stat tstats;
 	u32 tstats_addr, tstats_len;
 
-	tstats_addr = BAR0_MAP_REG_TSDM_RAM +
+	if (IS_PF(p_hwfn->p_dev)) {
+		tstats_addr = BAR0_MAP_REG_TSDM_RAM +
 		    TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
-	tstats_len = sizeof(struct tstorm_per_port_stat);
+		tstats_len = sizeof(struct tstorm_per_port_stat);
+	} else {
+		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+		struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+		tstats_addr = p_resp->pfdev_info.stats_info.tstats.address;
+		tstats_len = p_resp->pfdev_info.stats_info.tstats.len;
+	}
 
 	OSAL_MEMSET(&tstats, 0, sizeof(tstats));
 	ecore_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, tstats_len);
@@ -1366,9 +1537,17 @@ static void __ecore_get_vport_ustats_addrlen(struct ecore_hwfn *p_hwfn,
 					     u32 *p_addr, u32 *p_len,
 					     u16 statistics_bin)
 {
-	*p_addr = BAR0_MAP_REG_USDM_RAM +
+	if (IS_PF(p_hwfn->p_dev)) {
+		*p_addr = BAR0_MAP_REG_USDM_RAM +
 		    USTORM_QUEUE_STAT_OFFSET(statistics_bin);
-	*p_len = sizeof(struct eth_ustorm_per_queue_stat);
+		*p_len = sizeof(struct eth_ustorm_per_queue_stat);
+	} else {
+		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+		struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+		*p_addr = p_resp->pfdev_info.stats_info.ustats.address;
+		*p_len = p_resp->pfdev_info.stats_info.ustats.len;
+	}
 }
 
 static void __ecore_get_vport_ustats(struct ecore_hwfn *p_hwfn,
@@ -1397,9 +1576,17 @@ static void __ecore_get_vport_mstats_addrlen(struct ecore_hwfn *p_hwfn,
 					     u32 *p_addr, u32 *p_len,
 					     u16 statistics_bin)
 {
-	*p_addr = BAR0_MAP_REG_MSDM_RAM +
+	if (IS_PF(p_hwfn->p_dev)) {
+		*p_addr = BAR0_MAP_REG_MSDM_RAM +
 		    MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
-	*p_len = sizeof(struct eth_mstorm_per_queue_stat);
+		*p_len = sizeof(struct eth_mstorm_per_queue_stat);
+	} else {
+		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+		struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+		*p_addr = p_resp->pfdev_info.stats_info.mstats.address;
+		*p_len = p_resp->pfdev_info.stats_info.mstats.len;
+	}
 }
 
 static void __ecore_get_vport_mstats(struct ecore_hwfn *p_hwfn,
@@ -1524,24 +1711,28 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-
-		/* The main vport index is relative first */
-		if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) {
-			DP_ERR(p_hwfn, "No vport available!\n");
-			goto out;
+		struct ecore_ptt *p_ptt = IS_PF(p_dev) ?
+		    ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
+
+		if (IS_PF(p_dev)) {
+			/* The main vport index is relative first */
+			if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) {
+				DP_ERR(p_hwfn, "No vport available!\n");
+				goto out;
+			}
 		}
 
-		if (!p_ptt) {
+		if (IS_PF(p_dev) && !p_ptt) {
 			DP_ERR(p_hwfn, "Failed to acquire ptt\n");
 			continue;
 		}
 
 		__ecore_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport,
-					true);
+					IS_PF(p_dev) ? true : false);
 
 out:
-		ecore_ptt_release(p_hwfn, p_ptt);
+		if (IS_PF(p_dev))
+			ecore_ptt_release(p_hwfn, p_ptt);
 	}
 }
 
@@ -1575,10 +1766,11 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 		struct eth_mstorm_per_queue_stat mstats;
 		struct eth_ustorm_per_queue_stat ustats;
 		struct eth_pstorm_per_queue_stat pstats;
-		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+		struct ecore_ptt *p_ptt = IS_PF(p_dev) ?
+		    ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
 		u32 addr = 0, len = 0;
 
-		if (!p_ptt) {
+		if (IS_PF(p_dev) && !p_ptt) {
 			DP_ERR(p_hwfn, "Failed to acquire ptt\n");
 			continue;
 		}
@@ -1595,7 +1787,8 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 		__ecore_get_vport_pstats_addrlen(p_hwfn, &addr, &len, 0);
 		ecore_memcpy_to(p_hwfn, p_ptt, addr, &pstats, len);
 
-		ecore_ptt_release(p_hwfn, p_ptt);
+		if (IS_PF(p_dev))
+			ecore_ptt_release(p_hwfn, p_ptt);
 	}
 
 	/* PORT statistics are not necessarily reset, so we need to
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 658af45..b0850ca 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,56 @@
 #include "ecore_l2_api.h"
 
 /**
+ * @brief ecore_sp_vf_start - VF Function Start
+ *
+ * This ramrod is sent when a virtual function (VF) is loaded, in order
+ * to initialize it. It configures the function-related parameters.
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ * @param concrete_vfid				VF ID
+ * @param opaque_vfid
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
+				       u32 concrete_vfid, u16 opaque_vfid);
+
+/**
+ * @brief ecore_sp_vf_update - VF Function Update Ramrod
+ *
+ * This ramrod performs updates of a virtual function (VF).
+ * It currently contains no functionality.
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t ecore_sp_vf_update(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief ecore_sp_vf_stop - VF Function Stop Ramrod
+ *
+ * This ramrod is sent to unload a virtual function (VF).
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ * @param concrete_vfid
+ * @param opaque_vfid
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t ecore_sp_vf_stop(struct ecore_hwfn *p_hwfn,
+				      u32 concrete_vfid, u16 opaque_vfid);
+
+/**
  * @brief ecore_sp_eth_tx_queue_update -
  *
  * This ramrod updates a TX queue. It is used for setting the active
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e51de24..7dff695 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -14,6 +14,8 @@
 #include "reg_addr.h"
 #include "ecore_hw.h"
 #include "ecore_init_fw_funcs.h"
+#include "ecore_sriov.h"
+#include "ecore_iov_api.h"
 #include "ecore_gtt_reg_addr.h"
 #include "ecore_iro.h"
 
@@ -517,6 +519,9 @@ static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 			   "FLR-ed VFs [%08x,...,%08x] - %08x\n",
 			   i * 32, (i + 1) * 32 - 1, disabled_vfs[i]);
 	}
+
+	if (ecore_iov_mark_vf_flr(p_hwfn, disabled_vfs))
+		OSAL_VF_FLR_UPDATE(p_hwfn);
 }
 
 enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
@@ -793,6 +798,10 @@ u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn,
 {
 	u32 path_offsize_addr, path_offsize, path_addr, proc_kill_cnt;
 
+	/* TODO - Add support for VFs */
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
 	path_offsize_addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
 						 PUBLIC_PATH);
 	path_offsize = ecore_rd(p_hwfn, p_ptt, path_offsize_addr);
@@ -1050,6 +1059,20 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_dev *p_dev,
 	}
 #endif
 
+	if (IS_VF(p_dev)) {
+		if (p_hwfn->vf_iov_info) {
+			struct pfvf_acquire_resp_tlv *p_resp;
+
+			p_resp = &p_hwfn->vf_iov_info->acquire_resp;
+			*p_mfw_ver = p_resp->pfdev_info.mfw_ver;
+			return ECORE_SUCCESS;
+		} else {
+			DP_VERBOSE(p_dev, ECORE_MSG_IOV,
+				   "VF requested MFW vers prior to ACQUIRE\n");
+			return ECORE_INVAL;
+		}
+	}
+
 	global_offsize = ecore_rd(p_hwfn, p_ptt,
 				  SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
 						       public_base,
@@ -1076,6 +1099,10 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	struct ecore_hwfn *p_hwfn = &p_dev->hwfns[0];
 	struct ecore_ptt *p_ptt;
 
+	/* TODO - Add support for VFs */
+	if (IS_VF(p_dev))
+		return ECORE_INVAL;
+
 	if (!ecore_mcp_is_init(p_hwfn)) {
 		DP_NOTICE(p_hwfn, true, "MFW is not initialized !\n");
 		return ECORE_BUSY;
@@ -1291,6 +1318,9 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
 	flash_size = ecore_rd(p_hwfn, p_ptt, MCP_REG_NVM_CFG4);
 	flash_size = (flash_size & MCP_REG_NVM_CFG4_FLASH_SIZE) >>
 	    MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT;
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index e7743cd..80d234f 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -20,6 +20,7 @@
 #include "ecore_dev_api.h"
 #include "ecore_mcp.h"
 #include "ecore_hw.h"
+#include "ecore_sriov.h"
 
 /***************************************************************************
  * Structures & Definitions
@@ -250,7 +251,9 @@ ecore_async_event_completion(struct ecore_hwfn *p_hwfn,
 {
 	switch (p_eqe->protocol_id) {
 	case PROTOCOLID_COMMON:
-		return ECORE_SUCCESS;
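+		/* VF->PF channel messages arrive as COMMON protocol async
+		 * events; route them to the SR-IOV event handler.
+		 */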
+		return ecore_sriov_eqe_event(p_hwfn,
+					     p_eqe->opcode,
+					     p_eqe->echo, &p_eqe->data);
 	default:
 		DP_NOTICE(p_hwfn,
 			  true, "Unknown Async completion for protocol: %d\n",
@@ -386,6 +389,9 @@ static enum _ecore_status_t ecore_cqe_completion(struct ecore_hwfn *p_hwfn,
 						 *cqe,
 						 enum protocol_type protocol)
 {
+	if (IS_VF(p_hwfn->p_dev))
+		return OSAL_VF_CQE_COMPLETION(p_hwfn, cqe, protocol);
+
 	/* @@@tmp - it's possible we'll eventually want to handle some
 	 * actual commands that can arrive here, but for now this is only
 	 * used to complete the ramrod using the echo value on the cqe
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
new file mode 100644
index 0000000..eb74080
--- /dev/null
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -0,0 +1,3422 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "reg_addr.h"
+#include "ecore_sriov.h"
+#include "ecore_status.h"
+#include "ecore_hw.h"
+#include "ecore_hw_defs.h"
+#include "ecore_int.h"
+#include "ecore_hsi_eth.h"
+#include "ecore_l2.h"
+#include "ecore_vfpf_if.h"
+#include "ecore_rt_defs.h"
+#include "ecore_init_ops.h"
+#include "ecore_gtt_reg_addr.h"
+#include "ecore_iro.h"
+#include "ecore_mcp.h"
+#include "ecore_cxt.h"
+#include "ecore_vf.h"
+#include "ecore_init_fw_funcs.h"
+
+/* TEMPORARY until we implement print_enums... */
+const char *ecore_channel_tlvs_string[] = {
+	"CHANNEL_TLV_NONE",	/* ends tlv sequence */
+	"CHANNEL_TLV_ACQUIRE",
+	"CHANNEL_TLV_VPORT_START",
+	"CHANNEL_TLV_VPORT_UPDATE",
+	"CHANNEL_TLV_VPORT_TEARDOWN",
+	"CHANNEL_TLV_START_RXQ",
+	"CHANNEL_TLV_START_TXQ",
+	"CHANNEL_TLV_STOP_RXQ",
+	"CHANNEL_TLV_STOP_TXQ",
+	"CHANNEL_TLV_UPDATE_RXQ",
+	"CHANNEL_TLV_INT_CLEANUP",
+	"CHANNEL_TLV_CLOSE",
+	"CHANNEL_TLV_RELEASE",
+	"CHANNEL_TLV_LIST_END",
+	"CHANNEL_TLV_UCAST_FILTER",
+	"CHANNEL_TLV_VPORT_UPDATE_ACTIVATE",
+	"CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH",
+	"CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP",
+	"CHANNEL_TLV_VPORT_UPDATE_MCAST",
+	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM",
+	"CHANNEL_TLV_VPORT_UPDATE_RSS",
+	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
+	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_MAX"
+};
+
+/* TODO - this is linux crc32; Need a way to ifdef it out for linux */
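+/* Bit-serial (LSB-first) CRC-32 using the reflected polynomial 0xEDB88320,
+ * one byte per outer iteration; used to checksum the VF bulletin board.
+ */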
+u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
+{
+	int i;
+
+	while (length--) {
+		crc ^= *ptr++;
+		for (i = 0; i < 8; i++)
+			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
+	}
+	return crc;
+}
+
+enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
+						int vfid,
+						struct ecore_ptt *p_ptt)
+{
+	struct ecore_bulletin_content *p_bulletin;
+	struct ecore_dmae_params params;
+	struct ecore_vf_info *p_vf;
+	int crc_size = sizeof(p_bulletin->crc);
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!p_vf)
+		return ECORE_INVAL;
+
+	/* TODO - check VF is in a state where it can accept message */
+	if (!p_vf->vf_bulletin)
+		return ECORE_INVAL;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+
+	/* Increment bulletin board version and compute crc */
+	p_bulletin->version++;
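+	/* The checksum covers everything after the crc field itself, hence
+	 * the crc_size offset into the buffer and the reduced length.
+	 */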
+	p_bulletin->crc = ecore_crc32(0, (u8 *)p_bulletin + crc_size,
+				      p_vf->bulletin.size - crc_size);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Posting Bulletin 0x%08x to VF[%d] (CRC 0x%08x)\n",
+		   p_bulletin->version, p_vf->relative_vf_id, p_bulletin->crc);
+
+	/* propagate bulletin board via dmae to vm memory */
+	OSAL_MEMSET(&params, 0, sizeof(params));
+	params.flags = ECORE_DMAE_FLAG_VF_DST;
+	params.dst_vfid = p_vf->abs_vf_id;
+	return ecore_dmae_host2host(p_hwfn, p_ptt, p_vf->bulletin.phys,
+				    p_vf->vf_bulletin, p_vf->bulletin.size / 4,
+				    &params);
+}
+
+static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
+{
+	struct ecore_hw_sriov_info *iov = &p_dev->sriov_info;
+	int pos = iov->pos;
+
+	DP_VERBOSE(p_dev, ECORE_MSG_IOV, "sriov ext pos %d\n", pos);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + PCI_SRIOV_CTRL, &iov->ctrl);
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev,
+				  pos + PCI_SRIOV_TOTAL_VF, &iov->total_vfs);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev,
+				  pos + PCI_SRIOV_INITIAL_VF,
+				  &iov->initial_vfs);
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + PCI_SRIOV_NUM_VF, &iov->num_vfs);
+	if (iov->num_vfs) {
+		/* @@@TODO - in future we might want to add an OSAL here to
+		 * allow each OS to decide on its own how to act.
+		 */
+		DP_VERBOSE(p_dev, ECORE_MSG_IOV,
+			   "Number of VFs is already set to a non-zero value."
+			   " Ignoring PCI configuration value\n");
+		iov->num_vfs = 0;
+	}
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev,
+				  pos + PCI_SRIOV_VF_OFFSET, &iov->offset);
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev,
+				  pos + PCI_SRIOV_VF_STRIDE, &iov->stride);
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev,
+				  pos + PCI_SRIOV_VF_DID, &iov->vf_device_id);
+
+	OSAL_PCI_READ_CONFIG_DWORD(p_dev,
+				   pos + PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
+
+	OSAL_PCI_READ_CONFIG_DWORD(p_dev, pos + PCI_SRIOV_CAP, &iov->cap);
+
+	OSAL_PCI_READ_CONFIG_BYTE(p_dev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
+
+	DP_VERBOSE(p_dev, ECORE_MSG_IOV, "IOV info[%d]: nres %d, cap 0x%x,"
+		   " ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d,"
+		   " stride %d, page size 0x%x\n", 0,
+		   iov->nres, iov->cap, iov->ctrl,
+		   iov->total_vfs, iov->initial_vfs, iov->nr_virtfn,
+		   iov->offset, iov->stride, iov->pgsz);
+
+	/* Some sanity checks */
+	if (iov->num_vfs > NUM_OF_VFS(p_dev) ||
+	    iov->total_vfs > NUM_OF_VFS(p_dev)) {
+		/* This can happen only due to a bug. In this case we set
+		 * num_vfs to zero to avoid memory corruption in the code that
+		 * assumes max number of vfs
+		 */
+		DP_NOTICE(p_dev, false,
+			  "IOV: Unexpected number of vfs set: %d"
+			  " setting num_vf to zero\n",
+			  iov->num_vfs);
+
+		iov->num_vfs = 0;
+		iov->total_vfs = 0;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static void ecore_iov_clear_vf_igu_blocks(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	struct ecore_igu_block *p_sb;
+	u16 sb_id;
+	u32 val;
+
+	if (!p_hwfn->hw_info.p_igu_info) {
+		DP_ERR(p_hwfn,
+		       "ecore_iov_clear_vf_igu_blocks IGU Info not inited\n");
+		return;
+	}
+
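+	/* Clear the 'valid' bit on every IGU mapping line that is free and
+	 * not owned by the PF.
+	 */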
+	for (sb_id = 0;
+	     sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev); sb_id++) {
+		p_sb = &p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks[sb_id];
+		if ((p_sb->status & ECORE_IGU_STATUS_FREE) &&
+		    !(p_sb->status & ECORE_IGU_STATUS_PF)) {
+			val = ecore_rd(p_hwfn, p_ptt,
+				       IGU_REG_MAPPING_MEMORY + sb_id * 4);
+			SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
+			ecore_wr(p_hwfn, p_ptt,
+				 IGU_REG_MAPPING_MEMORY + 4 * sb_id, val);
+		}
+	}
+}
+
+static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
+	union pfvf_tlvs *p_reply_virt_addr;
+	union vfpf_tlvs *p_req_virt_addr;
+	struct ecore_bulletin_content *p_bulletin_virt;
+	struct ecore_pf_iov *p_iov_info;
+	dma_addr_t req_p, rply_p, bulletin_p;
+	u8 idx = 0;
+
+	p_iov_info = p_hwfn->pf_iov_info;
+
+	OSAL_MEMSET(p_iov_info->vfs_array, 0, sizeof(p_iov_info->vfs_array));
+
+	p_req_virt_addr = p_iov_info->mbx_msg_virt_addr;
+	req_p = p_iov_info->mbx_msg_phys_addr;
+	p_reply_virt_addr = p_iov_info->mbx_reply_virt_addr;
+	rply_p = p_iov_info->mbx_reply_phys_addr;
+	p_bulletin_virt = p_iov_info->p_bulletins;
+	bulletin_p = p_iov_info->bulletins_phys;
+	if (!p_req_virt_addr || !p_reply_virt_addr || !p_bulletin_virt) {
+		DP_ERR(p_hwfn,
+		       "ecore_iov_setup_vfdb called without alloc mem first\n");
+		return;
+	}
+
+	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
+
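+	/* Slice the contiguous DMA regions allocated in
+	 * ecore_iov_allocate_vfdb(): each VF gets one request TLV buffer,
+	 * one reply TLV buffer and one bulletin board entry.
+	 */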
+	for (idx = 0; idx < num_vfs; idx++) {
+		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
+		u32 concrete;
+
+		vf->vf_mbx.req_virt = p_req_virt_addr + idx;
+		vf->vf_mbx.req_phys = req_p + idx * sizeof(union vfpf_tlvs);
+		vf->vf_mbx.reply_virt = p_reply_virt_addr + idx;
+		vf->vf_mbx.reply_phys = rply_p + idx * sizeof(union pfvf_tlvs);
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+		vf->vf_mbx.sw_mbx.request_size = sizeof(union vfpf_tlvs);
+		vf->vf_mbx.sw_mbx.mbx_state = VF_PF_WAIT_FOR_START_REQUEST;
+#endif
+		vf->state = VF_STOPPED;
+
+		vf->bulletin.phys = idx *
+		    sizeof(struct ecore_bulletin_content) + bulletin_p;
+		vf->bulletin.p_virt = p_bulletin_virt + idx;
+		vf->bulletin.size = sizeof(struct ecore_bulletin_content);
+
+		vf->relative_vf_id = idx;
+		vf->abs_vf_id = idx + p_hwfn->hw_info.first_vf_in_pf;
+		concrete = ecore_vfid_to_concrete(p_hwfn, vf->abs_vf_id);
+		vf->concrete_fid = concrete;
+		/* TODO - need to devise a better way of getting opaque */
+		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
+		    (vf->abs_vf_id << 8);
+		/* @@TBD MichalK - add base vport_id of VFs to equation */
+		vf->vport_id = p_iov_info->base_vport_id + idx;
+	}
+}
+
+static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+	void **p_v_addr;
+	u16 num_vfs = 0;
+
+	num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "ecore_iov_allocate_vfdb for %d VFs\n", num_vfs);
+
+	/* Allocate PF Mailbox buffer (per-VF) */
+	p_iov_info->mbx_msg_size = sizeof(union vfpf_tlvs) * num_vfs;
+	p_v_addr = &p_iov_info->mbx_msg_virt_addr;
+	*p_v_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+					    &p_iov_info->mbx_msg_phys_addr,
+					    p_iov_info->mbx_msg_size);
+	if (!*p_v_addr)
+		return ECORE_NOMEM;
+
+	/* Allocate PF Mailbox Reply buffer (per-VF) */
+	p_iov_info->mbx_reply_size = sizeof(union pfvf_tlvs) * num_vfs;
+	p_v_addr = &p_iov_info->mbx_reply_virt_addr;
+	*p_v_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+					    &p_iov_info->mbx_reply_phys_addr,
+					    p_iov_info->mbx_reply_size);
+	if (!*p_v_addr)
+		return ECORE_NOMEM;
+
+	p_iov_info->bulletins_size = sizeof(struct ecore_bulletin_content) *
+	    num_vfs;
+	p_v_addr = &p_iov_info->p_bulletins;
+	*p_v_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+					    &p_iov_info->bulletins_phys,
+					    p_iov_info->bulletins_size);
+	if (!*p_v_addr)
+		return ECORE_NOMEM;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "PF's Requests mailbox [%p virt 0x%lx phys], Response"
+		   " mailbox [%p virt 0x%lx phys] Bulletins"
+		   " [%p virt 0x%lx phys]\n",
+		   p_iov_info->mbx_msg_virt_addr,
+		   (u64)p_iov_info->mbx_msg_phys_addr,
+		   p_iov_info->mbx_reply_virt_addr,
+		   (u64)p_iov_info->mbx_reply_phys_addr,
+		   p_iov_info->p_bulletins, (u64)p_iov_info->bulletins_phys);
+
+	/* @@@TBD MichalK - statistics / RSS */
+
+	return ECORE_SUCCESS;
+}
+
+static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+
+	if (p_iov_info->mbx_msg_virt_addr)
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov_info->mbx_msg_virt_addr,
+				       p_iov_info->mbx_msg_phys_addr,
+				       p_iov_info->mbx_msg_size);
+
+	if (p_iov_info->mbx_reply_virt_addr)
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov_info->mbx_reply_virt_addr,
+				       p_iov_info->mbx_reply_phys_addr,
+				       p_iov_info->mbx_reply_size);
+
+	if (p_iov_info->p_bulletins)
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov_info->p_bulletins,
+				       p_iov_info->bulletins_phys,
+				       p_iov_info->bulletins_size);
+
+	/* @@@TBD MichalK - statistics / RSS */
+}
+
+enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_pf_iov *p_sriov;
+
+	if (!IS_PF_SRIOV(p_hwfn)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No SR-IOV - no need for IOV db\n");
+		return rc;
+	}
+
+	p_sriov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sriov));
+	if (!p_sriov) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to allocate `struct ecore_pf_iov'\n");
+		return ECORE_NOMEM;
+	}
+
+	p_hwfn->pf_iov_info = p_sriov;
+
+	rc = ecore_iov_allocate_vfdb(p_hwfn);
+
+	return rc;
+}
+
+void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	if (!IS_PF_SRIOV(p_hwfn) || !p_hwfn->pf_iov_info)
+		return;
+
+	ecore_iov_setup_vfdb(p_hwfn);
+	ecore_iov_clear_vf_igu_blocks(p_hwfn, p_ptt);
+}
+
+void ecore_iov_free(struct ecore_hwfn *p_hwfn)
+{
+	if (p_hwfn->pf_iov_info) {
+		ecore_iov_free_vfdb(p_hwfn);
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->pf_iov_info);
+	}
+}
+
+enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	enum _ecore_status_t rc;
+
+	/* @@@ TBD get this information from shmem / pci cfg */
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	/* First hwfn should learn the PCI configuration */
+	if (IS_LEAD_HWFN(p_hwfn)) {
+		struct ecore_dev *p_dev = p_hwfn->p_dev;
+		int *pos = &p_hwfn->p_dev->sriov_info.pos;
+
+		*pos = OSAL_PCI_FIND_EXT_CAPABILITY(p_hwfn->p_dev,
+						    PCI_EXT_CAP_ID_SRIOV);
+		if (!*pos) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "No PCIe IOV support\n");
+			return ECORE_SUCCESS;
+		}
+
+		rc = ecore_iov_pci_cfg_info(p_dev);
+		if (rc)
+			return rc;
+	} else if (!p_hwfn->p_dev->sriov_info.pos) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "No PCIe IOV support\n");
+		return ECORE_SUCCESS;
+	}
+
+	/* Calculate the first VF index - this is a bit tricky; basically,
+	 * VFs start at offset 16 relative to PF0, and 2nd-engine VFs begin
+	 * after the first engine's VFs.
+	 */
+	p_hwfn->hw_info.first_vf_in_pf = p_hwfn->p_dev->sriov_info.offset +
+	    p_hwfn->abs_pf_id - 16;
+	if (ECORE_PATH_ID(p_hwfn))
+		p_hwfn->hw_info.first_vf_in_pf -= MAX_NUM_VFS_BB;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "First VF in hwfn 0x%08x\n", p_hwfn->hw_info.first_vf_in_pf);
+
+	return ECORE_SUCCESS;
+}
+
+struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
+					    u16 relative_vf_id,
+					    bool b_enabled_only)
+{
+	struct ecore_vf_info *vf = OSAL_NULL;
+
+	if (!p_hwfn->pf_iov_info) {
+		DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
+		return OSAL_NULL;
+	}
+
+	if (ecore_iov_is_valid_vfid(p_hwfn, relative_vf_id, b_enabled_only))
+		vf = &p_hwfn->pf_iov_info->vfs_array[relative_vf_id];
+	else
+		DP_ERR(p_hwfn, "ecore_iov_get_vf_info: VF[%d] is not enabled\n",
+		       relative_vf_id);
+
+	return vf;
+}
+
+void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
+				 u16 rel_vf_id, u8 to_disable)
+{
+	struct ecore_vf_info *vf;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	if (!vf)
+		return;
+
+	vf->to_disable = to_disable;
+}
+
+void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn, u8 to_disable)
+{
+	u16 i;
+
+	for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++)
+		ecore_iov_set_vf_to_disable(p_hwfn, i, to_disable);
+}
+
+#ifndef LINUX_REMOVE
+/* @@@TBD Consider taking outside of ecore... */
+enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
+					  u16 vf_id, void *ctx)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_info *vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true);
+
+	if (vf != OSAL_NULL) {
+		vf->ctx = ctx;
+#ifdef CONFIG_ECORE_SW_CHANNEL
+		vf->vf_mbx.sw_mbx.mbx_state = VF_PF_WAIT_FOR_START_REQUEST;
+#endif
+	} else {
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+	return rc;
+}
+#endif
+
+/**
+ * VF enable primitives
+ *
+ * When pretend is required, the caller is responsible
+ * for calling pretend prior to calling these routines.
+ */
+
+/* clears vf error in all semi blocks
+ * Assumption: called under VF pretend...
+ */
+static OSAL_INLINE void ecore_iov_vf_semi_clear_err(struct ecore_hwfn *p_hwfn,
+						    struct ecore_ptt *p_ptt)
+{
+	ecore_wr(p_hwfn, p_ptt, TSEM_REG_VF_ERROR, 1);
+	ecore_wr(p_hwfn, p_ptt, USEM_REG_VF_ERROR, 1);
+	ecore_wr(p_hwfn, p_ptt, MSEM_REG_VF_ERROR, 1);
+	ecore_wr(p_hwfn, p_ptt, XSEM_REG_VF_ERROR, 1);
+	ecore_wr(p_hwfn, p_ptt, YSEM_REG_VF_ERROR, 1);
+	ecore_wr(p_hwfn, p_ptt, PSEM_REG_VF_ERROR, 1);
+}
+
+static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt, u8 abs_vfid)
+{
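+	/* WAS_ERROR is a bitmap spanning several 32-bit registers, 32 VFs
+	 * per register: (abs_vfid >> 5) selects the register and
+	 * (abs_vfid & 0x1f) the bit within it.
+	 */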
+	ecore_wr(p_hwfn, p_ptt,
+		 PGLUE_B_REG_WAS_ERROR_VF_31_0_CLR + (abs_vfid >> 5) * 4,
+		 1 << (abs_vfid & 0x1f));
+}
+
+static void ecore_iov_vf_igu_reset(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt,
+				   struct ecore_vf_info *vf)
+{
+	int i;
+	u16 igu_sb_id;
+
+	/* Set VF masks and configuration - pretend */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
+
+	ecore_wr(p_hwfn, p_ptt, IGU_REG_STATISTIC_NUM_VF_MSG_SENT, 0);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "value in VF_CONFIGURATION of vf %d after write %x\n",
+		   vf->abs_vf_id,
+		   ecore_rd(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION));
+
+	/* unpretend */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
+	/* iterate over all queues, clear sb consumer */
+	for (i = 0; i < vf->num_sbs; i++) {
+		igu_sb_id = vf->igu_sbs[i];
+		/* Set then clear... */
+		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1,
+					 vf->opaque_fid);
+		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 0,
+					 vf->opaque_fid);
+	}
+}
+
+static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_vf_info *vf, bool enable)
+{
+	u32 igu_vf_conf;
+
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
+
+	igu_vf_conf = ecore_rd(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION);
+
+	if (enable)
+		igu_vf_conf |= IGU_VF_CONF_MSI_MSIX_EN;
+	else
+		igu_vf_conf &= ~IGU_VF_CONF_MSI_MSIX_EN;
+
+	ecore_wr(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION, igu_vf_conf);
+
+	/* unpretend */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+}
+
+static enum _ecore_status_t
+ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt, struct ecore_vf_info *vf)
+{
+	u32 igu_vf_conf = IGU_VF_CONF_FUNC_EN;
+	enum _ecore_status_t rc;
+
+	if (vf->to_disable)
+		return ECORE_SUCCESS;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Enable internal access for vf %x [abs %x]\n", vf->abs_vf_id,
+		   ECORE_VF_ABS_ID(p_hwfn, vf));
+
+	ecore_iov_vf_pglue_clear_err(p_hwfn, p_ptt,
+				     ECORE_VF_ABS_ID(p_hwfn, vf));
+
+	rc = ecore_mcp_config_vf_msix(p_hwfn, p_ptt,
+				      vf->abs_vf_id, vf->num_sbs);
+	if (rc)
+		return rc;
+
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
+
+	SET_FIELD(igu_vf_conf, IGU_VF_CONF_PARENT, p_hwfn->rel_pf_id);
+	STORE_RT_REG(p_hwfn, IGU_REG_VF_CONFIGURATION_RT_OFFSET, igu_vf_conf);
+
+	ecore_init_run(p_hwfn, p_ptt, PHASE_VF, vf->abs_vf_id,
+		       p_hwfn->hw_info.hw_mode);
+
+	/* unpretend */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
+	if (vf->state != VF_STOPPED) {
+		DP_NOTICE(p_hwfn, true, "VF[%02x] is already started\n",
+			  vf->abs_vf_id);
+		return ECORE_INVAL;
+	}
+
+	/* Start VF */
+	rc = ecore_sp_vf_start(p_hwfn, vf->concrete_fid, vf->opaque_fid);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, true, "Failed to start VF[%02x]\n",
+			  vf->abs_vf_id);
+
+	vf->state = VF_FREE;
+
+	return rc;
+}
+
+/**
+ *
+ * @brief ecore_iov_config_perm_table - configure the permission
+ *      zone table.
+ *      In E4, queue zone permission table size is 320x9. There
+ *      are 320 VF queues for single engine device (256 for dual
+ *      engine device), and each entry has the following format:
+ *      {Valid, VF[7:0]}
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vf
+ * @param enable
+ */
+static void ecore_iov_config_perm_table(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt *p_ptt,
+					struct ecore_vf_info *vf, u8 enable)
+{
+	u32 reg_addr;
+	u32 val;
+	u16 qzone_id = 0;
+	int qid;
+
+	for (qid = 0; qid < vf->num_rxqs; qid++) {
+		ecore_fw_l2_queue(p_hwfn, vf->vf_queues[qid].fw_rx_qid,
+				  &qzone_id);
+
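+		/* Each entry is {valid, vf[7:0]}; bit 8 is the valid bit,
+		 * so enabling writes the VF id with the valid bit set and
+		 * disabling clears the whole entry.
+		 */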
+		reg_addr = PSWHST_REG_ZONE_PERMISSION_TABLE + qzone_id * 4;
+		val = enable ? (vf->abs_vf_id | (1 << 8)) : 0;
+		ecore_wr(p_hwfn, p_ptt, reg_addr, val);
+	}
+}
+
+static void ecore_iov_enable_vf_traffic(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt *p_ptt,
+					struct ecore_vf_info *vf)
+{
+	/* Reset vf in IGU - interrupts are still disabled */
+	ecore_iov_vf_igu_reset(p_hwfn, p_ptt, vf);
+
+	ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 1 /* enable */);
+
+	/* Permission Table */
+	ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, true /* enable */);
+}
+
+static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_vf_info *vf,
+				     u16 num_rx_queues)
+{
+	int igu_id = 0;
+	int qid = 0;
+	u32 val = 0;
+	struct ecore_igu_block *igu_blocks =
+	    p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks;
+
+	if (num_rx_queues > p_hwfn->hw_info.p_igu_info->free_blks)
+		num_rx_queues = p_hwfn->hw_info.p_igu_info->free_blks;
+
+	p_hwfn->hw_info.p_igu_info->free_blks -= num_rx_queues;
+
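+	/* Prepare a template CAM line owned by this VF; the per-queue
+	 * vector number is filled in inside the loop below.
+	 */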
+	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, vf->abs_vf_id);
+	SET_FIELD(val, IGU_MAPPING_LINE_VALID, 1);
+	SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, 0);
+
+	while ((qid < num_rx_queues) &&
+	       (igu_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev))) {
+		if (igu_blocks[igu_id].status & ECORE_IGU_STATUS_FREE) {
+			struct cau_sb_entry sb_entry;
+
+			vf->igu_sbs[qid] = (u16)igu_id;
+			igu_blocks[igu_id].status &= ~ECORE_IGU_STATUS_FREE;
+
+			SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
+
+			ecore_wr(p_hwfn, p_ptt,
+				 IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id,
+				 val);
+
+			/* Configure the igu sbs in CAU that were marked valid */
+			ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
+						p_hwfn->rel_pf_id,
+						vf->abs_vf_id, 1);
+			ecore_dmae_host2grc(p_hwfn, p_ptt,
+					    (u64)(osal_uintptr_t)&sb_entry,
+					    CAU_REG_SB_VAR_MEMORY +
+					    igu_id * sizeof(u64), 2, 0);
+			qid++;
+		}
+		igu_id++;
+	}
+
+	vf->num_sbs = (u8)num_rx_queues;
+
+	return vf->num_sbs;
+}
+
+/**
+ *
+ * @brief The function invalidates all the VF entries;
+ *        technically this isn't required, but it is added for
+ *        cleanness and ease of debugging in case a VF attempts to
+ *        produce an interrupt after it has been taken down.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vf
+ */
+static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt,
+				      struct ecore_vf_info *vf)
+{
+	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+	int idx, igu_id;
+	u32 addr, val;
+
+	/* Invalidate igu CAM lines and mark them as free */
+	for (idx = 0; idx < vf->num_sbs; idx++) {
+		igu_id = vf->igu_sbs[idx];
+		addr = IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id;
+
+		val = ecore_rd(p_hwfn, p_ptt, addr);
+		SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
+		ecore_wr(p_hwfn, p_ptt, addr, val);
+
+		p_info->igu_map.igu_blocks[igu_id].status |=
+		    ECORE_IGU_STATUS_FREE;
+
+		p_hwfn->hw_info.p_igu_info->free_blks++;
+	}
+
+	vf->num_sbs = 0;
+}
+
+enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u16 rel_vf_id, u16 num_rx_queues)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_info *vf = OSAL_NULL;
+	u8 num_of_vf_available_chains = 0;
+	u32 cids;
+	u8 i;
+
+	if (ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
+			  rel_vf_id);
+		return ECORE_INVAL;
+	}
+
+	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	if (!vf) {
+		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf: vf is OSAL_NULL\n");
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	/* Limit number of queues according to number of CIDs */
+	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d] - requesting to initialize for 0x%04x queues"
+		   " [0x%04x CIDs available]\n",
+		   vf->relative_vf_id, num_rx_queues, (u16)cids);
+	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+
+	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
+								 p_ptt,
+								 vf,
+								 num_rx_queues);
+	if (num_of_vf_available_chains == 0) {
+		DP_ERR(p_hwfn, "no available igu sbs\n");
+		return ECORE_NOMEM;
+	}
+
+	/* Choose queue number and index ranges */
+	vf->num_rxqs = num_of_vf_available_chains;
+	vf->num_txqs = num_of_vf_available_chains;
+
+	for (i = 0; i < vf->num_rxqs; i++) {
+		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
+							     vf->igu_sbs[i]);
+
+		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
+			DP_NOTICE(p_hwfn, true,
+				  "VF[%d] would require using"
+				  " out-of-bounds queues - %04x\n",
+				  vf->relative_vf_id, queue_id);
+			/* TODO - clean up the already allocated SBs */
+			return ECORE_INVAL;
+		}
+
+		/* CIDs are per-VF, so no problem having them 0-based. */
+		vf->vf_queues[i].fw_rx_qid = queue_id;
+		vf->vf_queues[i].fw_tx_qid = queue_id;
+		vf->vf_queues[i].fw_cid = i;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+	}
+
+	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
+
+	if (rc == ECORE_SUCCESS) {
+		struct ecore_hw_sriov_info *p_iov = &p_hwfn->p_dev->sriov_info;
+		u16 vf_id = vf->relative_vf_id;
+
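+		/* active_vfs is a bitmap of 64-bit words - word vf_id / 64,
+		 * bit vf_id % 64.
+		 */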
+		p_iov->num_vfs++;
+		p_iov->active_vfs[vf_id / 64] |= (1ULL << (vf_id % 64));
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u16 rel_vf_id)
+{
+	struct ecore_vf_info *vf = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!vf) {
+		DP_ERR(p_hwfn, "ecore_iov_release_hw_for_vf: vf is NULL\n");
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	if (vf->state != VF_STOPPED) {
+		/* Stopping the VF */
+		rc = ecore_sp_vf_stop(p_hwfn, vf->concrete_fid, vf->opaque_fid);
+
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "ecore_sp_vf_stop returned error %d\n",
+			       rc);
+			return rc;
+		}
+
+		vf->state = VF_STOPPED;
+	}
+
+	/* Disabling interrupts and resetting the permission table were done
+	 * during vf-close; however, we could get here without going through
+	 * vf_close.
+	 */
+	/* Disable Interrupts for VF */
+	ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0 /* disable */);
+
+	/* Reset Permission table */
+	ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0 /* disable */);
+
+	vf->num_rxqs = 0;
+	vf->num_txqs = 0;
+	ecore_iov_free_vf_igu_sbs(p_hwfn, p_ptt, vf);
+
+	if (ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id)) {
+		struct ecore_hw_sriov_info *p_iov = &p_hwfn->p_dev->sriov_info;
+		u16 vf_id = vf->relative_vf_id;
+
+		p_iov->num_vfs--;
+		p_iov->active_vfs[vf_id / 64] &= ~(1ULL << (vf_id % 64));
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static bool ecore_iov_tlv_supported(u16 tlvtype)
+{
+	return tlvtype > CHANNEL_TLV_NONE && tlvtype < CHANNEL_TLV_MAX;
+}
+
+static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
+					 struct ecore_vf_info *vf, u16 tlv)
+{
+	/* we don't lock the channel for unsupported tlvs */
+	if (!ecore_iov_tlv_supported(tlv))
+		return;
+
+	/* lock the channel */
+	/* mutex_lock(&vf->op_mutex); @@@TBD MichalK - add lock... */
+
+	/* record the locking op */
+	/* vf->op_current = tlv; @@@TBD MichalK */
+
+	/* log the lock */
+	DP_VERBOSE(p_hwfn,
+		   ECORE_MSG_IOV,
+		   "VF[%d]: vf pf channel locked by %s\n",
+		   vf->abs_vf_id, ecore_channel_tlvs_string[tlv]);
+}
+
+static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
+					   struct ecore_vf_info *vf,
+					   u16 expected_tlv)
+{
+	/* we don't unlock the channel for unsupported tlvs */
+	if (!ecore_iov_tlv_supported(expected_tlv))
+		return;
+
+	/* WARN(expected_tlv != vf->op_current,
+	 * "lock mismatch: expected %s found %s",
+	 * channel_tlvs_string[expected_tlv],
+	 * channel_tlvs_string[vf->op_current]);
+	 * @@@TBD MichalK
+	 */
+
+	/* unlock the channel */
+	/* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */
+
+	/* log the unlock */
+	DP_VERBOSE(p_hwfn,
+		   ECORE_MSG_IOV,
+		   "VF[%d]: vf pf channel unlocked by %s\n",
+		   vf->abs_vf_id, ecore_channel_tlvs_string[expected_tlv]);
+
+	/* clear the locking op */
+	/* vf->op_current = CHANNEL_TLV_NONE; */
+}
+
+/* place a given tlv on the tlv buffer, continuing current tlv list */
+void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
+		    u8 **offset, u16 type, u16 length)
+{
+	struct channel_tlv *tl = (struct channel_tlv *)*offset;
+
+	tl->type = type;
+	tl->length = length;
+
+	/* Offset should keep pointing to next TLV (the end of the last) */
+	*offset += length;
+
+	/* Return a pointer to the start of the added tlv */
+	return *offset - length;
+}
+
+/* list the types and lengths of the tlvs on the buffer */
+void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list)
+{
+	u16 i = 1, total_length = 0;
+	struct channel_tlv *tlv;
+
+	do {
+		/* cast current tlv list entry to channel tlv header */
+		tlv = (struct channel_tlv *)((u8 *)tlvs_list + total_length);
+
+		/* output tlv */
+		if (ecore_iov_tlv_supported(tlv->type))
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "TLV number %d: type %s, length %d\n",
+				   i, ecore_channel_tlvs_string[tlv->type],
+				   tlv->length);
+		else
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "TLV number %d: type %d, length %d\n",
+				   i, tlv->type, tlv->length);
+
+		if (tlv->type == CHANNEL_TLV_LIST_END)
+			return;
+
+		/* Validate entry - protect against malicious VFs */
+		if (!tlv->length) {
+			DP_NOTICE(p_hwfn, false, "TLV of length 0 found\n");
+			return;
+		}
+		total_length += tlv->length;
+		if (total_length >= sizeof(struct tlv_buffer_size)) {
+			DP_NOTICE(p_hwfn, false, "TLV ==> Buffer overflow\n");
+			return;
+		}
+
+		i++;
+	} while (1);
+}
+
+static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt,
+				    struct ecore_vf_info *p_vf,
+				    u16 length, u8 status)
+{
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct ecore_dmae_params params;
+	u8 eng_vf_id;
+
+	mbx->reply_virt->default_resp.hdr.status = status;
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+	mbx->sw_mbx.response_size =
+	    length + sizeof(struct channel_list_end_tlv);
+#endif
+
+	ecore_dp_tlv_list(p_hwfn, mbx->reply_virt);
+
+	if (!p_hwfn->p_dev->sriov_info.b_hw_channel)
+		return;
+
+	eng_vf_id = p_vf->abs_vf_id;
+
+	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
+	params.flags = ECORE_DMAE_FLAG_VF_DST;
+	params.dst_vfid = eng_vf_id;
+
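+	/* Copy the reply body first and the leading 8-byte header word
+	 * last, so the VF cannot observe a partially written response
+	 * before the channel-ready flag is raised below.
+	 */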
+	ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys + sizeof(u64),
+			     mbx->req_virt->first_tlv.reply_address +
+			     sizeof(u64),
+			     (sizeof(union pfvf_tlvs) - sizeof(u64)) / 4,
+			     &params);
+
+	ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys,
+			     mbx->req_virt->first_tlv.reply_address,
+			     sizeof(u64) / 4, &params);
+
+	REG_WR(p_hwfn,
+	       GTT_BAR0_MAP_REG_USDM_RAM +
+	       USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), 1);
+}
+
+static u16 ecore_iov_vport_to_tlv(struct ecore_hwfn *p_hwfn,
+				  enum ecore_iov_vport_update_flag flag)
+{
+	switch (flag) {
+	case ECORE_IOV_VP_UPDATE_ACTIVATE:
+		return CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+	case ECORE_IOV_VP_UPDATE_VLAN_STRIP:
+		return CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
+	case ECORE_IOV_VP_UPDATE_TX_SWITCH:
+		return CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+	case ECORE_IOV_VP_UPDATE_MCAST:
+		return CHANNEL_TLV_VPORT_UPDATE_MCAST;
+	case ECORE_IOV_VP_UPDATE_ACCEPT_PARAM:
+		return CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+	case ECORE_IOV_VP_UPDATE_RSS:
+		return CHANNEL_TLV_VPORT_UPDATE_RSS;
+	case ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN:
+		return CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+	case ECORE_IOV_VP_UPDATE_SGE_TPA:
+		return CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
+	default:
+		return 0;
+	}
+}
+
+static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn,
+					      struct ecore_vf_info *p_vf,
+					      struct ecore_iov_vf_mbx *p_mbx,
+					      u8 status, u16 tlvs_mask,
+					      u16 tlvs_accepted)
+{
+	struct pfvf_def_resp_tlv *resp;
+	u16 size, total_len, i;
+
+	OSAL_MEMSET(p_mbx->reply_virt, 0, sizeof(union pfvf_tlvs));
+	p_mbx->offset = (u8 *)(p_mbx->reply_virt);
+	size = sizeof(struct pfvf_def_resp_tlv);
+	total_len = size;
+
+	ecore_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_VPORT_UPDATE, size);
+
+	/* Prepare response for all extended tlvs if they are found by PF */
+	for (i = 0; i < ECORE_IOV_VP_UPDATE_MAX; i++) {
+		if (!(tlvs_mask & (1 << i)))
+			continue;
+
+		resp = ecore_add_tlv(p_hwfn, &p_mbx->offset,
+				     ecore_iov_vport_to_tlv(p_hwfn, i), size);
+
+		if (tlvs_accepted & (1 << i))
+			resp->hdr.status = status;
+		else
+			resp->hdr.status = PFVF_STATUS_NOT_SUPPORTED;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] - vport_update resp: TLV %d, status %02x\n",
+			   p_vf->relative_vf_id,
+			   ecore_iov_vport_to_tlv(p_hwfn, i), resp->hdr.status);
+
+		total_len += size;
+	}
+
+	ecore_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	return total_len;
+}
+
+static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt,
+				   struct ecore_vf_info *vf_info,
+				   u16 type, u16 length, u8 status)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf_info->vf_mbx;
+
+	mbx->offset = (u8 *)(mbx->reply_virt);
+
+	ecore_add_tlv(p_hwfn, &mbx->offset, type, length);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
+}
+
+static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
+				 struct ecore_vf_info *p_vf)
+{
+	p_vf->vf_bulletin = 0;
+	p_vf->vport_instance = 0;
+	p_vf->num_mac_filters = 0;
+	p_vf->num_vlan_filters = 0;
+	p_vf->num_mc_filters = 0;
+	p_vf->configured_features = 0;
+
+	/* If VF previously requested less resources, go back to default */
+	p_vf->num_rxqs = p_vf->num_sbs;
+	p_vf->num_txqs = p_vf->num_sbs;
+
+	p_vf->num_active_rxqs = 0;
+
+	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
+	OSAL_IOV_VF_CLEANUP(p_hwfn, p_vf->relative_vf_id);
+}
+
+static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_acquire_tlv *req = &mbx->req_virt->acquire;
+	struct pfvf_acquire_resp_tlv *resp = &mbx->reply_virt->acquire_resp;
+	struct pf_vf_resc *resc = &resp->resc;
+	struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
+	u16 length;
+	u8 i, vfpf_status = PFVF_STATUS_SUCCESS;
+
+	/* Validate FW compatibility */
+	if (req->vfdev_info.fw_major != FW_MAJOR_VERSION ||
+	    req->vfdev_info.fw_minor != FW_MINOR_VERSION ||
+	    req->vfdev_info.fw_revision != FW_REVISION_VERSION ||
+	    req->vfdev_info.fw_engineering != FW_ENGINEERING_VERSION) {
+		DP_INFO(p_hwfn,
+			"VF[%d] is running an incompatible driver [VF needs"
+			" FW %02x:%02x:%02x:%02x but Hypervisor is"
+			" using %02x:%02x:%02x:%02x]\n",
+			vf->abs_vf_id, req->vfdev_info.fw_major,
+			req->vfdev_info.fw_minor, req->vfdev_info.fw_revision,
+			req->vfdev_info.fw_engineering, FW_MAJOR_VERSION,
+			FW_MINOR_VERSION, FW_REVISION_VERSION,
+			FW_ENGINEERING_VERSION);
+		vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
+#ifndef __EXTRACT__LINUX__
+	if (OSAL_IOV_VF_ACQUIRE(p_hwfn, vf->relative_vf_id) != ECORE_SUCCESS) {
+		vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
+#endif
+
+	OSAL_MEMSET(resp, 0, sizeof(*resp));
+
+	/* Fill in vf info stuff : @@@TBD MichalK Hard Coded for now... */
+	vf->opaque_fid = req->vfdev_info.opaque_fid;
+	vf->num_mac_filters = 1;
+	vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
+	vf->num_mc_filters = ECORE_MAX_MC_ADDRS;
+
+	vf->vf_bulletin = req->bulletin_addr;
+	vf->bulletin.size = (vf->bulletin.size < req->bulletin_size) ?
+	    vf->bulletin.size : req->bulletin_size;
+
+	/* fill in pfdev info */
+	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
+	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
+	pfdev_info->indices_per_sb = PIS_PER_SB;
+	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED;
+
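+	/* Point the VF at the per-queue statistics areas inside the
+	 * Mstorm/Ustorm/Pstorm VF zones of its BAR0 view; no Tstorm
+	 * area is exposed, hence the zeroed address/length below.
+	 */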
+	pfdev_info->stats_info.mstats.address =
+	    PXP_VF_BAR0_START_MSDM_ZONE_B +
+	    OFFSETOF(struct mstorm_vf_zone, non_trigger.eth_queue_stat);
+	pfdev_info->stats_info.mstats.len =
+	    sizeof(struct eth_mstorm_per_queue_stat);
+
+	pfdev_info->stats_info.ustats.address =
+	    PXP_VF_BAR0_START_USDM_ZONE_B +
+	    OFFSETOF(struct ustorm_vf_zone, non_trigger.eth_queue_stat);
+	pfdev_info->stats_info.ustats.len =
+	    sizeof(struct eth_ustorm_per_queue_stat);
+
+	pfdev_info->stats_info.pstats.address =
+	    PXP_VF_BAR0_START_PSDM_ZONE_B +
+	    OFFSETOF(struct pstorm_vf_zone, non_trigger.eth_queue_stat);
+	pfdev_info->stats_info.pstats.len =
+	    sizeof(struct eth_pstorm_per_queue_stat);
+
+	pfdev_info->stats_info.tstats.address = 0;
+	pfdev_info->stats_info.tstats.len = 0;
+
+	OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr,
+		    ETH_ALEN);
+
+	pfdev_info->fw_major = FW_MAJOR_VERSION;
+	pfdev_info->fw_minor = FW_MINOR_VERSION;
+	pfdev_info->fw_rev = FW_REVISION_VERSION;
+	pfdev_info->fw_eng = FW_ENGINEERING_VERSION;
+	pfdev_info->os_type = OSAL_IOV_GET_OS_TYPE();
+	ecore_mcp_get_mfw_ver(p_hwfn->p_dev, p_ptt, &pfdev_info->mfw_ver,
+			      OSAL_NULL);
+
+	pfdev_info->dev_type = p_hwfn->p_dev->type;
+	pfdev_info->chip_rev = p_hwfn->p_dev->chip_rev;
+
+	/* Fill in resc : @@@TBD MichalK Hard Coded for now... */
+	resc->num_rxqs = vf->num_rxqs;
+	resc->num_txqs = vf->num_txqs;
+	resc->num_sbs = vf->num_sbs;
+	for (i = 0; i < resc->num_sbs; i++) {
+		resc->hw_sbs[i].hw_sb_id = vf->igu_sbs[i];
+		resc->hw_sbs[i].sb_qid = 0;
+	}
+
+	for (i = 0; i < resc->num_rxqs; i++) {
+		ecore_fw_l2_queue(p_hwfn, vf->vf_queues[i].fw_rx_qid,
+				  (u16 *)&resc->hw_qid[i]);
+		resc->cid[i] = vf->vf_queues[i].fw_cid;
+	}
+
+	resc->num_mac_filters = OSAL_MIN_T(u8, vf->num_mac_filters,
+					   req->resc_request.num_mac_filters);
+	resc->num_vlan_filters = OSAL_MIN_T(u8, vf->num_vlan_filters,
+					    req->resc_request.num_vlan_filters);
+	resc->num_mc_filters = OSAL_MIN_T(u8, vf->num_mc_filters,
+					  req->resc_request.num_mc_filters);
+
+	/* Fill agreed size of bulletin board in response, and post
+	 * an initial image to the bulletin board.
+	 */
+	resp->bulletin_size = vf->bulletin.size;
+	ecore_iov_post_vf_bulletin(p_hwfn, vf->relative_vf_id, p_ptt);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x,"
+		   " db_size=%d, idx_per_sb=%d, pf_cap=0x%lx\n"
+		   "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d,"
+		   " n_vlans-%d, n_mcs-%d\n",
+		   vf->abs_vf_id, resp->pfdev_info.chip_num,
+		   resp->pfdev_info.db_size, resp->pfdev_info.indices_per_sb,
+		   resp->pfdev_info.capabilities, resc->num_rxqs,
+		   resc->num_txqs, resc->num_sbs, resc->num_mac_filters,
+		   resc->num_vlan_filters, resc->num_mc_filters);
+
+	vf->state = VF_ACQUIRED;
+
+	/* Prepare Response */
+	length = sizeof(struct pfvf_acquire_resp_tlv);
+
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_ACQUIRE,
+			       length, vfpf_status);
+
+	/* @@@TBD Bulletin */
+}
+
+static enum _ecore_status_t
+__ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
+			 struct ecore_vf_info *p_vf, bool val)
+{
+	struct ecore_sp_vport_update_params params;
+	enum _ecore_status_t rc;
+
+	if (val == p_vf->spoof_chk) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Spoofchk value[%d] is already configured\n", val);
+		return ECORE_SUCCESS;
+	}
+
+	OSAL_MEMSET(&params, 0, sizeof(struct ecore_sp_vport_update_params));
+	params.opaque_fid = p_vf->opaque_fid;
+	params.vport_id = p_vf->vport_id;
+	params.update_anti_spoofing_en_flg = 1;
+	params.anti_spoofing_en = val;
+
+	rc = ecore_sp_vport_update(p_hwfn, &params, ECORE_SPQ_MODE_EBLOCK,
+				   OSAL_NULL);
+	if (rc == ECORE_SUCCESS) {
+		p_vf->spoof_chk = val;
+		p_vf->req_spoofchk_val = p_vf->spoof_chk;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Spoofchk val[%d] configured\n", val);
+	} else {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Spoofchk configuration[val:%d] failed for VF[%d]\n",
+			   val, p_vf->relative_vf_id);
+	}
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
+				   struct ecore_vf_info *p_vf)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_filter_ucast filter;
+	int i;
+
+	OSAL_MEMSET(&filter, 0, sizeof(filter));
+	filter.is_rx_filter = 1;
+	filter.is_tx_filter = 1;
+	filter.vport_to_add_to = p_vf->vport_id;
+	filter.opcode = ECORE_FILTER_ADD;
+
+	/* Reconfigure vlans */
+	for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
+		if (p_vf->shadow_config.vlans[i].used) {
+			filter.type = ECORE_FILTER_VLAN;
+			filter.vlan = p_vf->shadow_config.vlans[i].vid;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Reconfig VLAN [0x%04x] for VF [%04x]\n",
+				   filter.vlan, p_vf->relative_vf_id);
+			rc = ecore_sp_eth_filter_ucast(p_hwfn,
+						       p_vf->opaque_fid,
+						       &filter,
+						       ECORE_SPQ_MODE_CB,
+						       OSAL_NULL);
+			if (rc) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed to configure VLAN [%04x]"
+					  " to VF [%04x]\n",
+					  filter.vlan, p_vf->relative_vf_id);
+				break;
+			}
+		}
+	}
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_iov_reconfigure_unicast_shadow(struct ecore_hwfn *p_hwfn,
+				     struct ecore_vf_info *p_vf, u64 events)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	/*TODO - what about MACs? */
+
+	if ((events & (1 << VLAN_ADDR_FORCED)) &&
+	    !(p_vf->configured_features & (1 << VLAN_ADDR_FORCED)))
+		rc = ecore_iov_reconfigure_unicast_vlan(p_hwfn, p_vf);
+
+	return rc;
+}
+
+static int ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
+					    struct ecore_vf_info *p_vf,
+					    u64 events)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_filter_ucast filter;
+
+	if (!p_vf->vport_instance)
+		return ECORE_INVAL;
+
+	if (events & (1 << MAC_ADDR_FORCED)) {
+		/* Since there's no way [currently] of removing the MAC,
+		 * we can always assume this means we need to force it.
+		 */
+		OSAL_MEMSET(&filter, 0, sizeof(filter));
+		filter.type = ECORE_FILTER_MAC;
+		filter.opcode = ECORE_FILTER_REPLACE;
+		filter.is_rx_filter = 1;
+		filter.is_tx_filter = 1;
+		filter.vport_to_add_to = p_vf->vport_id;
+		OSAL_MEMCPY(filter.mac, p_vf->bulletin.p_virt->mac, ETH_ALEN);
+
+		rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
+					       &filter,
+					       ECORE_SPQ_MODE_CB, OSAL_NULL);
+		if (rc) {
+			DP_NOTICE(p_hwfn, true,
+				  "PF failed to configure MAC for VF\n");
+			return rc;
+		}
+
+		p_vf->configured_features |= 1 << MAC_ADDR_FORCED;
+	}
+
+	if (events & (1 << VLAN_ADDR_FORCED)) {
+		struct ecore_sp_vport_update_params vport_update;
+		u8 removal;
+		int i;
+
+		OSAL_MEMSET(&filter, 0, sizeof(filter));
+		filter.type = ECORE_FILTER_VLAN;
+		filter.is_rx_filter = 1;
+		filter.is_tx_filter = 1;
+		filter.vport_to_add_to = p_vf->vport_id;
+		filter.vlan = p_vf->bulletin.p_virt->pvid;
+		filter.opcode = filter.vlan ? ECORE_FILTER_REPLACE :
+		    ECORE_FILTER_FLUSH;
+
+		/* Send the ramrod */
+		rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
+					       &filter,
+					       ECORE_SPQ_MODE_CB, OSAL_NULL);
+		if (rc) {
+			DP_NOTICE(p_hwfn, true,
+				  "PF failed to configure VLAN for VF\n");
+			return rc;
+		}
+
+		/* Update the default-vlan & silent vlan stripping */
+		OSAL_MEMSET(&vport_update, 0, sizeof(vport_update));
+		vport_update.opaque_fid = p_vf->opaque_fid;
+		vport_update.vport_id = p_vf->vport_id;
+		vport_update.update_default_vlan_enable_flg = 1;
+		vport_update.default_vlan_enable_flg = filter.vlan ? 1 : 0;
+		vport_update.update_default_vlan_flg = 1;
+		vport_update.default_vlan = filter.vlan;
+
+		vport_update.update_inner_vlan_removal_flg = 1;
+		removal = filter.vlan ?
+		    1 : p_vf->shadow_config.inner_vlan_removal;
+		vport_update.inner_vlan_removal_flg = removal;
+		vport_update.silent_vlan_removal_flg = filter.vlan ? 1 : 0;
+		rc = ecore_sp_vport_update(p_hwfn, &vport_update,
+					   ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);
+		if (rc) {
+			DP_NOTICE(p_hwfn, true,
+				  "PF failed to configure VF vport for vlan\n");
+			return rc;
+		}
+
+		/* Update all the Rx queues */
+		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
+			u16 qid;
+
+			if (!p_vf->vf_queues[i].rxq_active)
+				continue;
+
+			qid = p_vf->vf_queues[i].fw_rx_qid;
+
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+						   1, 0, 1,
+						   ECORE_SPQ_MODE_EBLOCK,
+						   OSAL_NULL);
+			if (rc) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed to send Rx update"
+					  " queue[0x%04x]\n",
+					  qid);
+				return rc;
+			}
+		}
+
+		if (filter.vlan)
+			p_vf->configured_features |= 1 << VLAN_ADDR_FORCED;
+		else
+			p_vf->configured_features &= ~(1 << VLAN_ADDR_FORCED);
+	}
+
+	/* If forced features are terminated, we need to configure the shadow
+	 * configuration back again.
+	 */
+	if (events)
+		ecore_iov_reconfigure_unicast_shadow(p_hwfn, p_vf, events);
+
+	return rc;
+}
+
+static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_vport_start_tlv *start = &mbx->req_virt->start_vport;
+	struct ecore_sp_vport_start_params params = { 0 };
+	u8 status = PFVF_STATUS_SUCCESS;
+	struct ecore_vf_info *vf_info;
+	enum _ecore_status_t rc;
+	u64 *p_bitmap;
+	int sb_id;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vf->relative_vf_id, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Failed to get VF info, invalid vfid [%d]\n",
+			  vf->relative_vf_id);
+		return;
+	}
+
+	vf->state = VF_ENABLED;
+
+	/* Initialize Status block in CAU */
+	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
+		if (!start->sb_addr[sb_id]) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] did not fill the address of SB %d\n",
+				   vf->relative_vf_id, sb_id);
+			break;
+		}
+
+		ecore_int_cau_conf_sb(p_hwfn, p_ptt,
+				      start->sb_addr[sb_id],
+				      vf->igu_sbs[sb_id],
+				      vf->abs_vf_id, 1 /* VF Valid */);
+	}
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
+	vf->mtu = start->mtu;
+	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
+
+	/* Take into consideration configuration forced by hypervisor;
+	 * if none is configured, use the supplied VF values [for old
+	 * vfs that would still be fine, since they passed '0' as padding].
+	 */
+	p_bitmap = &vf_info->bulletin.p_virt->valid_bitmap;
+	if (!(*p_bitmap & (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED))) {
+		u8 vf_req = start->only_untagged;
+
+		vf_info->bulletin.p_virt->default_only_untagged = vf_req;
+		*p_bitmap |= 1 << VFPF_BULLETIN_UNTAGGED_DEFAULT;
+	}
+
+	params.tpa_mode = start->tpa_mode;
+	params.remove_inner_vlan = start->inner_vlan_removal;
+	params.tx_switching = true;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, false,
+			  "FPGA: Don't config VF for Tx-switching [no pVFC]\n");
+		params.tx_switching = false;
+	}
+#endif
+
+	params.only_untagged = vf_info->bulletin.p_virt->default_only_untagged;
+	params.drop_ttl0 = false;
+	params.concrete_fid = vf->concrete_fid;
+	params.opaque_fid = vf->opaque_fid;
+	params.vport_id = vf->vport_id;
+	params.max_buffers_per_cqe = start->max_buffers_per_cqe;
+	params.mtu = vf->mtu;
+
+	rc = ecore_sp_eth_vport_start(p_hwfn, &params);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn,
+		       "ecore_iov_vf_mbx_start_vport returned error %d\n", rc);
+		status = PFVF_STATUS_FAILURE;
+	} else {
+		vf->vport_instance++;
+
+		/* Force configuration if needed on the newly opened vport */
+		ecore_iov_configure_vport_forced(p_hwfn, vf, *p_bitmap);
+		OSAL_IOV_POST_START_VPORT(p_hwfn, vf->relative_vf_id,
+					  vf->vport_id, vf->opaque_fid);
+		__ecore_iov_spoofchk_set(p_hwfn, vf, vf->req_spoofchk_val);
+	}
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_START,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
+static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt *p_ptt,
+					struct ecore_vf_info *vf)
+{
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
+	vf->vport_instance--;
+	vf->spoof_chk = false;
+
+	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn,
+		       "ecore_iov_vf_mbx_stop_vport returned error %d\n", rc);
+		status = PFVF_STATUS_FAILURE;
+	}
+
+	/* Forget the configuration on the vport */
+	vf->configured_features = 0;
+	OSAL_MEMSET(&vf->shadow_config, 0, sizeof(vf->shadow_config));
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_TEARDOWN,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
+static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
+				       struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_start_rxq_tlv *req = &mbx->req_virt->start_rxq;
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
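+	/* abs_vf_id + 0x10: absolute VF ids appear to start after the 16
+	 * PFs (see the first_vf_in_pf calculation above).
+	 */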
+	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
+					   vf->vf_queues[req->rx_qid].fw_cid,
+					   vf->vf_queues[req->rx_qid].fw_rx_qid,
+					   vf->vport_id,
+					   vf->abs_vf_id + 0x10,
+					   req->hw_sb,
+					   req->sb_index,
+					   req->bd_max_bytes,
+					   req->rxq_addr,
+					   req->cqe_pbl_addr,
+					   req->cqe_pbl_size);
+
+	if (rc) {
+		status = PFVF_STATUS_FAILURE;
+	} else {
+		vf->vf_queues[req->rx_qid].rxq_active = true;
+		vf->num_active_rxqs++;
+	}
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_START_RXQ,
+			       length, status);
+}
+
+static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
+				       struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_start_txq_tlv *req = &mbx->req_virt->start_txq;
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	union ecore_qm_pq_params pq_params;
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
+	/* Prepare the parameters which would choose the right PQ */
+	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
+	pq_params.eth.is_vf = 1;
+	pq_params.eth.vf_id = vf->relative_vf_id;
+
+	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
+					   vf->opaque_fid,
+					   vf->vf_queues[req->tx_qid].fw_tx_qid,
+					   vf->vf_queues[req->tx_qid].fw_cid,
+					   vf->vport_id,
+					   vf->abs_vf_id + 0x10,
+					   req->hw_sb,
+					   req->sb_index,
+					   req->pbl_addr,
+					   req->pbl_size, &pq_params);
+
+	if (rc)
+		status = PFVF_STATUS_FAILURE;
+	else
+		vf->vf_queues[req->tx_qid].txq_active = true;
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_START_TXQ,
+			       length, status);
+}
+
+static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
+						   struct ecore_vf_info *vf,
+						   u16 rxq_id,
+						   u8 num_rxqs,
+						   bool cqe_completion)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	int qid;
+
+	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+		return ECORE_INVAL;
+
+	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
+		if (vf->vf_queues[qid].rxq_active) {
+			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
+							vf->vf_queues[qid].
+							fw_rx_qid, false,
+							cqe_completion);
+
+			if (rc)
+				return rc;
+
+			/* Only account for queues that were actually active */
+			vf->num_active_rxqs--;
+		}
+		vf->vf_queues[qid].rxq_active = false;
+	}
+
+	return rc;
+}
+
+static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
+						   struct ecore_vf_info *vf,
+						   u16 txq_id, u8 num_txqs)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	int qid;
+
+	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+		return ECORE_INVAL;
+
+	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
+		if (vf->vf_queues[qid].txq_active) {
+			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
+							vf->vf_queues[qid].
+							fw_tx_qid);
+
+			if (rc)
+				return rc;
+		}
+		vf->vf_queues[qid].txq_active = false;
+	}
+	return rc;
+}
+
+static void ecore_iov_vf_mbx_stop_rxqs(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
+				       struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_stop_rxqs_tlv *req = &mbx->req_virt->stop_rxqs;
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
+	/* We give the option of starting from qid != 0; in this case we
+	 * need to make sure that qid + num_qs doesn't exceed the actual
+	 * number of queues that exist.
+	 */
+	rc = ecore_iov_vf_stop_rxqs(p_hwfn, vf, req->rx_qid,
+				    req->num_rxqs, req->cqe_completion);
+	if (rc)
+		status = PFVF_STATUS_FAILURE;
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_RXQS,
+			       length, status);
+}
+
+static void ecore_iov_vf_mbx_stop_txqs(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
+				       struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_stop_txqs_tlv *req = &mbx->req_virt->stop_txqs;
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
+	/* We give the option of starting from qid != 0; in this case we
+	 * need to make sure that qid + num_qs doesn't exceed the actual
+	 * number of queues that exist.
+	 */
+	rc = ecore_iov_vf_stop_txqs(p_hwfn, vf, req->tx_qid, req->num_txqs);
+	if (rc)
+		status = PFVF_STATUS_FAILURE;
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_TXQS,
+			       length, status);
+}
+
+static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_update_rxq_tlv *req = &mbx->req_virt->update_rxq;
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	u8 status = PFVF_STATUS_SUCCESS;
+	u8 complete_event_flg;
+	u8 complete_cqe_flg;
+	enum _ecore_status_t rc;
+	u16 qid;
+	u8 i;
+
+	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
+	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
+
+	for (i = 0; i < req->num_rxqs; i++) {
+		qid = req->rx_qid + i;
+
+		if (!vf->vf_queues[qid].rxq_active) {
+			DP_NOTICE(p_hwfn, true,
+				  "VF rx_qid = %d isn't active!\n", qid);
+			status = PFVF_STATUS_FAILURE;
+			break;
+		}
+
+		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+						   vf->vf_queues[qid].fw_rx_qid,
+						   1,
+						   complete_cqe_flg,
+						   complete_event_flg,
+						   ECORE_SPQ_MODE_EBLOCK,
+						   OSAL_NULL);
+
+		if (rc) {
+			status = PFVF_STATUS_FAILURE;
+			break;
+		}
+	}
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
+			       length, status);
+}
+
+void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
+				 void *p_tlvs_list, u16 req_type)
+{
+	struct channel_tlv *p_tlv = (struct channel_tlv *)p_tlvs_list;
+	int len = 0;
+
+	do {
+		if (!p_tlv->length) {
+			DP_NOTICE(p_hwfn, true, "Zero length TLV found\n");
+			return OSAL_NULL;
+		}
+
+		if (p_tlv->type == req_type) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Extended tlv type %s, length %d found\n",
+				   ecore_channel_tlvs_string[p_tlv->type],
+				   p_tlv->length);
+			return p_tlv;
+		}
+
+		len += p_tlv->length;
+		p_tlv = (struct channel_tlv *)((u8 *)p_tlv + p_tlv->length);
+
+		if ((len + p_tlv->length) > TLV_BUFFER_SIZE) {
+			DP_NOTICE(p_hwfn, true,
+				  "TLVs have overrun the buffer size\n");
+			return OSAL_NULL;
+		}
+	} while (p_tlv->type != CHANNEL_TLV_LIST_END);
+
+	return OSAL_NULL;
+}
+
+static void
+ecore_iov_vp_update_act_param(struct ecore_hwfn *p_hwfn,
+			      struct ecore_sp_vport_update_params *p_data,
+			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_activate_tlv *p_act_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+
+	p_act_tlv = (struct vfpf_vport_update_activate_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+	if (p_act_tlv) {
+		p_data->update_vport_active_rx_flg = p_act_tlv->update_rx;
+		p_data->vport_active_rx_flg = p_act_tlv->active_rx;
+		p_data->update_vport_active_tx_flg = p_act_tlv->update_tx;
+		p_data->vport_active_tx_flg = p_act_tlv->active_tx;
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACTIVATE;
+	}
+}
+
+static void
+ecore_iov_vp_update_vlan_param(struct ecore_hwfn *p_hwfn,
+			       struct ecore_sp_vport_update_params *p_data,
+			       struct ecore_vf_info *p_vf,
+			       struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
+
+	p_vlan_tlv = (struct vfpf_vport_update_vlan_strip_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+	if (!p_vlan_tlv)
+		return;
+
+	p_vf->shadow_config.inner_vlan_removal = p_vlan_tlv->remove_vlan;
+
+	/* Ignore the VF request if we're forcing a vlan */
+	if (!(p_vf->configured_features & (1 << VLAN_ADDR_FORCED))) {
+		p_data->update_inner_vlan_removal_flg = 1;
+		p_data->inner_vlan_removal_flg = p_vlan_tlv->remove_vlan;
+	}
+
+	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_VLAN_STRIP;
+}
+
+static void
+ecore_iov_vp_update_tx_switch(struct ecore_hwfn *p_hwfn,
+			      struct ecore_sp_vport_update_params *p_data,
+			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+
+	p_tx_switch_tlv = (struct vfpf_vport_update_tx_switch_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, false,
+			  "FPGA: Ignore tx-switching configuration originating from VFs\n");
+		return;
+	}
+#endif
+
+	if (p_tx_switch_tlv) {
+		p_data->update_tx_switching_flg = 1;
+		p_data->tx_switching_flg = p_tx_switch_tlv->tx_switching;
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_TX_SWITCH;
+	}
+}
+
+static void
+ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn,
+				    struct ecore_sp_vport_update_params *p_data,
+				    struct ecore_iov_vf_mbx *p_mbx,
+				    u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST;
+
+	p_mcast_tlv = (struct vfpf_vport_update_mcast_bin_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+	if (p_mcast_tlv) {
+		p_data->update_approx_mcast_flg = 1;
+		OSAL_MEMCPY(p_data->bins, p_mcast_tlv->bins,
+			    sizeof(unsigned long) *
+			    ETH_MULTICAST_MAC_BINS_IN_REGS);
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_MCAST;
+	}
+}
+
+static void
+ecore_iov_vp_update_accept_flag(struct ecore_hwfn *p_hwfn,
+				struct ecore_sp_vport_update_params *p_data,
+				struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+
+	p_accept_tlv = (struct vfpf_vport_update_accept_param_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+	if (p_accept_tlv) {
+		p_data->accept_flags.update_rx_mode_config =
+		    p_accept_tlv->update_rx_mode;
+		p_data->accept_flags.rx_accept_filter =
+		    p_accept_tlv->rx_accept_filter;
+		p_data->accept_flags.update_tx_mode_config =
+		    p_accept_tlv->update_tx_mode;
+		p_data->accept_flags.tx_accept_filter =
+		    p_accept_tlv->tx_accept_filter;
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM;
+	}
+}
+
+static void
+ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn,
+				    struct ecore_sp_vport_update_params *p_data,
+				    struct ecore_iov_vf_mbx *p_mbx,
+				    u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_accept_any_vlan_tlv *p_accept_any_vlan;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+
+	p_accept_any_vlan = (struct vfpf_vport_update_accept_any_vlan_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+	if (p_accept_any_vlan) {
+		p_data->accept_any_vlan = p_accept_any_vlan->accept_any_vlan;
+		p_data->update_accept_any_vlan_flg =
+		    p_accept_any_vlan->update_accept_any_vlan_flg;
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN;
+	}
+}
+
+static void
+ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *vf,
+			      struct ecore_sp_vport_update_params *p_data,
+			      struct ecore_rss_params *p_rss,
+			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
+	u16 table_size;
+	u16 i, q_idx, max_q_idx;
+
+	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+	if (p_rss_tlv) {
+		OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params));
+
+		p_rss->update_rss_config =
+		    !!(p_rss_tlv->update_rss_flags &
+			VFPF_UPDATE_RSS_CONFIG_FLAG);
+		p_rss->update_rss_capabilities =
+		    !!(p_rss_tlv->update_rss_flags &
+			VFPF_UPDATE_RSS_CAPS_FLAG);
+		p_rss->update_rss_ind_table =
+		    !!(p_rss_tlv->update_rss_flags &
+			VFPF_UPDATE_RSS_IND_TABLE_FLAG);
+		p_rss->update_rss_key =
+		    !!(p_rss_tlv->update_rss_flags & VFPF_UPDATE_RSS_KEY_FLAG);
+
+		p_rss->rss_enable = p_rss_tlv->rss_enable;
+		p_rss->rss_eng_id = vf->relative_vf_id + 1;
+		p_rss->rss_caps = p_rss_tlv->rss_caps;
+		p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
+		OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
+			    sizeof(p_rss->rss_ind_table));
+		OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
+			    sizeof(p_rss->rss_key));
+
+		table_size = OSAL_MIN_T(u16,
+					OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
+					(1 << p_rss_tlv->rss_table_size_log));
+
+		max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
+
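+		/* Translate the VF's queue-relative indirection entries
+		 * into FW rx queue ids, falling back to queue 0 for any
+		 * out-of-range or inactive entry.
+		 */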
+		for (i = 0; i < table_size; i++) {
+			q_idx = p_rss->rss_ind_table[i];
+			if (q_idx >= max_q_idx) {
+				DP_NOTICE(p_hwfn, true,
+					  "rss_ind_table[%d] = %d, rxq is out of range\n",
+					  i, q_idx);
+				/* TBD: fail the request and mark the VF as malicious */
+				p_rss->rss_ind_table[i] =
+				    vf->vf_queues[0].fw_rx_qid;
+			} else if (!vf->vf_queues[q_idx].rxq_active) {
+				DP_NOTICE(p_hwfn, true,
+					  "rss_ind_table[%d] = %d, rxq is not active\n",
+					  i, q_idx);
+				/* TBD: fail the request and mark the VF as malicious */
+				p_rss->rss_ind_table[i] =
+				    vf->vf_queues[0].fw_rx_qid;
+			} else {
+				p_rss->rss_ind_table[i] =
+				    vf->vf_queues[q_idx].fw_rx_qid;
+			}
+		}
+
+		p_data->rss_params = p_rss;
+		*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	} else {
+		p_data->rss_params = OSAL_NULL;
+	}
+}
+
+static void
+ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn,
+				  struct ecore_vf_info *vf,
+				  struct ecore_sp_vport_update_params *p_data,
+				  struct ecore_sge_tpa_params *p_sge_tpa,
+				  struct ecore_iov_vf_mbx *p_mbx,
+				  u16 *tlvs_mask)
+{
+	struct vfpf_vport_update_sge_tpa_tlv *p_sge_tpa_tlv;
+	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
+
+	p_sge_tpa_tlv = (struct vfpf_vport_update_sge_tpa_tlv *)
+	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+	if (!p_sge_tpa_tlv) {
+		p_data->sge_tpa_params = OSAL_NULL;
+		return;
+	}
+
+	OSAL_MEMSET(p_sge_tpa, 0, sizeof(struct ecore_sge_tpa_params));
+
+	p_sge_tpa->update_tpa_en_flg =
+	    !!(p_sge_tpa_tlv->update_sge_tpa_flags & VFPF_UPDATE_TPA_EN_FLAG);
+	p_sge_tpa->update_tpa_param_flg =
+	    !!(p_sge_tpa_tlv->update_sge_tpa_flags &
+		VFPF_UPDATE_TPA_PARAM_FLAG);
+
+	p_sge_tpa->tpa_ipv4_en_flg =
+	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV4_EN_FLAG);
+	p_sge_tpa->tpa_ipv6_en_flg =
+	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV6_EN_FLAG);
+	p_sge_tpa->tpa_pkt_split_flg =
+	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_PKT_SPLIT_FLAG);
+	p_sge_tpa->tpa_hdr_data_split_flg =
+	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_HDR_DATA_SPLIT_FLAG);
+	p_sge_tpa->tpa_gro_consistent_flg =
+	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_GRO_CONSIST_FLAG);
+
+	p_sge_tpa->tpa_max_aggs_num = p_sge_tpa_tlv->tpa_max_aggs_num;
+	p_sge_tpa->tpa_max_size = p_sge_tpa_tlv->tpa_max_size;
+	p_sge_tpa->tpa_min_size_to_start = p_sge_tpa_tlv->tpa_min_size_to_start;
+	p_sge_tpa->tpa_min_size_to_cont = p_sge_tpa_tlv->tpa_min_size_to_cont;
+	p_sge_tpa->max_buffers_per_cqe = p_sge_tpa_tlv->max_buffers_per_cqe;
+
+	p_data->sge_tpa_params = p_sge_tpa;
+
+	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_SGE_TPA;
+}
+
+static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_vf_info *vf)
+{
+	struct ecore_sp_vport_update_params params;
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct ecore_sge_tpa_params sge_tpa_params;
+	struct ecore_rss_params rss_params;
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+	u16 tlvs_mask = 0, tlvs_accepted;
+	u16 length;
+
+	OSAL_MEMSET(&params, 0, sizeof(params));
+	params.opaque_fid = vf->opaque_fid;
+	params.vport_id = vf->vport_id;
+	params.rss_params = OSAL_NULL;
+
+	/* Search for extended tlvs list and update values
+	 * from VF in struct ecore_sp_vport_update_params.
+	 */
+	ecore_iov_vp_update_act_param(p_hwfn, &params, mbx, &tlvs_mask);
+	ecore_iov_vp_update_vlan_param(p_hwfn, &params, vf, mbx, &tlvs_mask);
+	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
+	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
+	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
+				      mbx, &tlvs_mask);
+	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
+	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
+					  &sge_tpa_params, mbx, &tlvs_mask);
+
+	/* Just log a message if no extended tlv is present in the buffer.
+	 * Once every vport-update ramrod feature is requested by the VF
+	 * as an extended TLV, an error can be returned in the response
+	 * instead.
+	 */
+	tlvs_accepted = tlvs_mask;
+
+#ifndef __EXTRACT__LINUX__
+	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
+				     &params, &tlvs_accepted) !=
+	    ECORE_SUCCESS) {
+		tlvs_accepted = 0;
+		status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
+#endif
+
+	if (!tlvs_accepted) {
+		if (tlvs_mask)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Upper-layer prevents said VF configuration\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "No feature tlvs found for vport update\n");
+		status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
+
+	rc = ecore_sp_vport_update(p_hwfn, &params, ECORE_SPQ_MODE_EBLOCK,
+				   OSAL_NULL);
+
+	if (rc)
+		status = PFVF_STATUS_FAILURE;
+
+out:
+	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
+						    tlvs_mask, tlvs_accepted);
+	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
+}
+
+static enum _ecore_status_t
+ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
+				   struct ecore_vf_info *p_vf,
+				   struct ecore_filter_ucast *p_params)
+{
+	int i;
+
+	/* TODO - do we need a MAC shadow registry? */
+	if (p_params->type == ECORE_FILTER_MAC)
+		return ECORE_SUCCESS;
+
+	/* First remove entries and then add new ones */
+	if (p_params->opcode == ECORE_FILTER_REMOVE) {
+		for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
+			if (p_vf->shadow_config.vlans[i].used &&
+			    p_vf->shadow_config.vlans[i].vid ==
+			    p_params->vlan) {
+				p_vf->shadow_config.vlans[i].used = false;
+				break;
+			}
+		if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF [%d] - Tries to remove a non-existent vlan\n",
+				   p_vf->relative_vf_id);
+			return ECORE_INVAL;
+		}
+	} else if (p_params->opcode == ECORE_FILTER_REPLACE ||
+		   p_params->opcode == ECORE_FILTER_FLUSH) {
+		for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
+			p_vf->shadow_config.vlans[i].used = false;
+	}
+
+	/* In forced mode, we're willing to remove entries - but we don't add
+	 * new ones.
+	 */
+	if (p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED))
+		return ECORE_SUCCESS;
+
+	if (p_params->opcode == ECORE_FILTER_ADD ||
+	    p_params->opcode == ECORE_FILTER_REPLACE) {
+		for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
+			if (!p_vf->shadow_config.vlans[i].used) {
+				p_vf->shadow_config.vlans[i].used = true;
+				p_vf->shadow_config.vlans[i].vid =
+				    p_params->vlan;
+				break;
+			}
+		if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF [%d] - Tries to configure more than %d vlan filters\n",
+				   p_vf->relative_vf_id,
+				   ECORE_ETH_VF_NUM_VLAN_FILTERS + 1);
+			return ECORE_INVAL;
+		}
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	struct vfpf_ucast_filter_tlv *req = &mbx->req_virt->ucast_filter;
+	struct ecore_bulletin_content *p_bulletin = vf->bulletin.p_virt;
+	struct ecore_filter_ucast params;
+	u8 status = PFVF_STATUS_SUCCESS;
+	enum _ecore_status_t rc;
+
+	/* Prepare the unicast filter params */
+	OSAL_MEMSET(&params, 0, sizeof(struct ecore_filter_ucast));
+	params.opcode = (enum ecore_filter_opcode)req->opcode;
+	params.type = (enum ecore_filter_ucast_type)req->type;
+
+	/* @@@TBD - We might need logic on HV side in determining this */
+	params.is_rx_filter = 1;
+	params.is_tx_filter = 1;
+	params.vport_to_remove_from = vf->vport_id;
+	params.vport_to_add_to = vf->vport_id;
+	OSAL_MEMCPY(params.mac, req->mac, ETH_ALEN);
+	params.vlan = req->vlan;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x] MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
+		   vf->abs_vf_id, params.opcode, params.type,
+		   params.is_rx_filter ? "RX" : "",
+		   params.is_tx_filter ? "TX" : "",
+		   params.vport_to_add_to,
+		   params.mac[0], params.mac[1], params.mac[2],
+		   params.mac[3], params.mac[4], params.mac[5], params.vlan);
+
+	if (!vf->vport_instance) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No VPORT instance available for VF[%d], failing ucast MAC configuration\n",
+			   vf->abs_vf_id);
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
+	/* Update shadow copy of the VF configuration */
+	if (ecore_iov_vf_update_unicast_shadow(p_hwfn, vf, &params) !=
+	    ECORE_SUCCESS) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
+	/* Determine if the unicast filtering is acceptable to the PF */
+	if ((p_bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)) &&
+	    (params.type == ECORE_FILTER_VLAN ||
+	     params.type == ECORE_FILTER_MAC_VLAN)) {
+		/* Once VLAN is forced or PVID is set, do not allow
+		 * to add/replace any further VLANs.
+		 */
+		if (params.opcode == ECORE_FILTER_ADD ||
+		    params.opcode == ECORE_FILTER_REPLACE)
+			status = PFVF_STATUS_FORCED;
+		goto out;
+	}
+
+	if ((p_bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) &&
+	    (params.type == ECORE_FILTER_MAC ||
+	     params.type == ECORE_FILTER_MAC_VLAN)) {
+		if (OSAL_MEMCMP(p_bulletin->mac, params.mac, ETH_ALEN) ||
+		    (params.opcode != ECORE_FILTER_ADD &&
+		     params.opcode != ECORE_FILTER_REPLACE))
+			status = PFVF_STATUS_FORCED;
+		goto out;
+	}
+
+	rc = OSAL_IOV_CHK_UCAST(p_hwfn, vf->relative_vf_id, &params);
+	if (rc == ECORE_EXISTS) {
+		goto out;
+	} else if (rc == ECORE_INVAL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
+	rc = ecore_sp_eth_filter_ucast(p_hwfn, vf->opaque_fid, &params,
+				       ECORE_SPQ_MODE_CB, OSAL_NULL);
+	if (rc)
+		status = PFVF_STATUS_FAILURE;
+
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UCAST_FILTER,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
+static void ecore_iov_vf_mbx_int_cleanup(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	int i;
+
+	/* Reset the SBs */
+	for (i = 0; i < vf->num_sbs; i++)
+		ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt,
+						  vf->igu_sbs[i],
+						  vf->opaque_fid, false);
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_INT_CLEANUP,
+			       sizeof(struct pfvf_def_resp_tlv),
+			       PFVF_STATUS_SUCCESS);
+}
+
+static void ecore_iov_vf_mbx_close(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt,
+				   struct ecore_vf_info *vf)
+{
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+	u8 status = PFVF_STATUS_SUCCESS;
+
+	/* Disable Interrupts for VF */
+	ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0 /* disable */);
+
+	/* Reset Permission table */
+	ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0 /* disable */);
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_CLOSE,
+			       length, status);
+}
+
+static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt,
+				     struct ecore_vf_info *p_vf)
+{
+	u16 length = sizeof(struct pfvf_def_resp_tlv);
+
+	ecore_iov_vf_cleanup(p_hwfn, p_vf);
+
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, p_vf, CHANNEL_TLV_RELEASE,
+			       length, PFVF_STATUS_SUCCESS);
+}
+
+static enum _ecore_status_t
+ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
+			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
+{
+	int cnt;
+	u32 val;
+
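+	/* Pretend to the VF's concrete FID so DORQ_REG_VF_USAGE_CNT reflects
+	 * the VF's doorbell usage, then poll until it drains [50 x 20msec].
+	 */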
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid);
+
+	for (cnt = 0; cnt < 50; cnt++) {
+		val = ecore_rd(p_hwfn, p_ptt, DORQ_REG_VF_USAGE_CNT);
+		if (!val)
+			break;
+		OSAL_MSLEEP(20);
+	}
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
+	if (cnt == 50) {
+		DP_ERR(p_hwfn,
+		       "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n",
+		       p_vf->abs_vf_id, val);
+		return ECORE_TIMEOUT;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
+			  struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
+{
+	u32 cons[MAX_NUM_VOQS], distance[MAX_NUM_VOQS];
+	int i, cnt;
+
+	/* Read initial consumers & producers */
+	for (i = 0; i < MAX_NUM_VOQS; i++) {
+		u32 prod;
+
+		cons[i] = ecore_rd(p_hwfn, p_ptt,
+				   PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
+				   i * 0x40);
+		prod = ecore_rd(p_hwfn, p_ptt,
+				PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0 +
+				i * 0x40);
+		distance[i] = prod - cons[i];
+	}
+
+	/* Wait for consumers to pass the producers */
+	i = 0;
+	for (cnt = 0; cnt < 50; cnt++) {
+		for (; i < MAX_NUM_VOQS; i++) {
+			u32 tmp;
+
+			tmp = ecore_rd(p_hwfn, p_ptt,
+				       PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
+				       i * 0x40);
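+			/* VOQ is drained once its consumer has advanced by
+			 * the initial producer-consumer distance; unsigned
+			 * arithmetic copes with counter wrap-around.
+			 */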
+			if (distance[i] > tmp - cons[i])
+				break;
+		}
+
+		if (i == MAX_NUM_VOQS)
+			break;
+
+		OSAL_MSLEEP(20);
+	}
+
+	if (cnt == 50) {
+		DP_ERR(p_hwfn, "VF[%d] - pbf polling failed on VOQ %d\n",
+		       p_vf->abs_vf_id, i);
+		return ECORE_TIMEOUT;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_iov_vf_flr_poll_prs(struct ecore_hwfn *p_hwfn,
+			  struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
+{
+	u16 tc_cons[NUM_OF_TCS], tc_lb_cons[NUM_OF_TCS];
+	u16 prod[NUM_OF_TCS];
+	int i, cnt;
+
+	/* Read initial consumers & producers */
+	for (i = 0; i < NUM_OF_TCS; i++) {
+		tc_cons[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					   PRS_REG_MSG_CT_MAIN_0 + i * 0x4);
+		tc_lb_cons[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      PRS_REG_MSG_CT_LB_0 + i * 0x4);
+		prod[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					BRB_REG_PER_TC_COUNTERS +
+					p_hwfn->port_id * 0x20 + i * 0x4);
+	}
+
+	/* Wait for consumers to pass the producers */
+	i = 0;
+	for (cnt = 0; cnt < 50; cnt++) {
+		for (; i < NUM_OF_TCS; i++) {
+			u16 cons;
+
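+			/* Both the main and loopback consumers must advance
+			 * past the initial producer before the TC is
+			 * considered drained.
+			 */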
+			cons = (u16)ecore_rd(p_hwfn, p_ptt,
+					     PRS_REG_MSG_CT_MAIN_0 + i * 0x4);
+			if (prod[i] - tc_cons[i] > cons - tc_cons[i])
+				break;
+
+			cons = (u16)ecore_rd(p_hwfn, p_ptt,
+					     PRS_REG_MSG_CT_LB_0 + i * 0x4);
+			if (prod[i] - tc_lb_cons[i] > cons - tc_lb_cons[i])
+				break;
+		}
+
+		if (i == NUM_OF_TCS)
+			break;
+
+		/* 16-bit counters; Delay instead of sleep... */
+		OSAL_UDELAY(10);
+	}
+
+	/* This is only optional polling for BB, since registers are only
+	 * 16-bit wide and guarantee is not good enough. Don't fail things
+	 * if polling didn't return the expected results.
+	 */
+	if (cnt == 50)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] - prs polling failed on TC %d\n",
+			   p_vf->abs_vf_id, i);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
+						  struct ecore_vf_info *p_vf,
+						  struct ecore_ptt *p_ptt)
+{
+	enum _ecore_status_t rc;
+
+	/* TODO - add SRC and TM polling once we add storage IOV */
+
+	rc = ecore_iov_vf_flr_poll_dorq(p_hwfn, p_vf, p_ptt);
+	if (rc)
+		return rc;
+
+	rc = ecore_iov_vf_flr_poll_pbf(p_hwfn, p_vf, p_ptt);
+	if (rc)
+		return rc;
+
+	rc = ecore_iov_vf_flr_poll_prs(p_hwfn, p_vf, p_ptt);
+	if (rc)
+		return rc;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 u16 rel_vf_id, u32 *ack_vfs)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	if (!p_vf)
+		return ECORE_SUCCESS;
+
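+	/* pending_flr is a bitmap with one bit per relative VF id,
+	 * 64 bits per word.
+	 */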
+	if (p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
+	    (1ULL << (rel_vf_id % 64))) {
+		u16 vfid = p_vf->abs_vf_id;
+
+		/* TODO - should we lock channel? */
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] - Handling FLR\n", vfid);
+
+		ecore_iov_vf_cleanup(p_hwfn, p_vf);
+
+		/* If VF isn't active, no need for anything but SW */
+		if (!ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, p_vf->relative_vf_id))
+			goto cleanup;
+
+		/* TODO - what to do in case of failure? */
+		rc = ecore_iov_vf_flr_poll(p_hwfn, p_vf, p_ptt);
+		if (rc != ECORE_SUCCESS)
+			goto cleanup;
+
+		rc = ecore_final_cleanup(p_hwfn, p_ptt, vfid, true);
+		if (rc) {
+			/* TODO - what now? What a mess... */
+			DP_ERR(p_hwfn, "Failed to handle FLR of VF[%d]\n", vfid);
+			return rc;
+		}
+
+		/* VF_STOPPED has to be set only after final cleanup
+		 * but prior to re-enabling the VF.
+		 */
+		p_vf->state = VF_STOPPED;
+
+		rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, p_vf);
+		if (rc) {
+			/* TODO - again, a mess... */
+			DP_ERR(p_hwfn, "Failed to re-enable VF[%d] access\n",
+			       vfid);
+			return rc;
+		}
+cleanup:
+		/* Mark VF for ack and clean pending state */
+		if (p_vf->state == VF_RESET)
+			p_vf->state = VF_STOPPED;
+		ack_vfs[vfid / 32] |= (1 << (vfid % 32));
+		p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &=
+		    ~(1ULL << (rel_vf_id % 64));
+		p_hwfn->pf_iov_info->pending_events[rel_vf_id / 64] &=
+		    ~(1ULL << (rel_vf_id % 64));
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt)
+{
+	u32 ack_vfs[VF_MAX_STATIC / 32];
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u16 i;
+
+	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+
+	for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++)
+		ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, i, ack_vfs);
+
+	rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt, u16 rel_vf_id)
+{
+	u32 ack_vfs[VF_MAX_STATIC / 32];
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+
+	ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, rel_vf_id, ack_vfs);
+
+	rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
+	return rc;
+}
+
+int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
+{
+	u16 i, found = 0;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
+	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "[%08x,...,%08x]: %08x\n",
+			   i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
+
+	/* Mark VFs */
+	for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++) {
+		struct ecore_vf_info *p_vf;
+		u8 vfid;
+
+		p_vf = ecore_iov_get_vf_info(p_hwfn, i, false);
+		if (!p_vf)
+			continue;
+
+		vfid = p_vf->abs_vf_id;
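+		/* The disabled-VFs bitmap is indexed by absolute VF id,
+		 * 32 bits per dword.
+		 */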
+		if ((1 << (vfid % 32)) & p_disabled_vfs[vfid / 32]) {
+			u64 *p_flr = p_hwfn->pf_iov_info->pending_flr;
+			u16 rel_vf_id = p_vf->relative_vf_id;
+
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] [rel %d] got FLR-ed\n",
+				   vfid, rel_vf_id);
+
+			p_vf->state = VF_RESET;
+
+			/* No need to lock here, since pending_flr should
+			 * only change here and when ACKing the MFW. Since
+			 * the MFW will not trigger an additional attention
+			 * for VF FLR until we ACK, we're safe.
+			 */
+			p_flr[rel_vf_id / 64] |= 1ULL << (rel_vf_id % 64);
+			found = 1;
+		}
+	}
+
+	return found;
+}
+
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
+void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *p_params,
+			struct ecore_mcp_link_state *p_link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+
+	if (p_params)
+		__ecore_vf_get_link_params(p_hwfn, p_params, p_bulletin);
+	if (p_link)
+		__ecore_vf_get_link_state(p_hwfn, p_link, p_bulletin);
+	if (p_caps)
+		__ecore_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
+}
+
+void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt, int vfid)
+{
+	struct ecore_iov_vf_mbx *mbx;
+	struct ecore_vf_info *p_vf;
+	int i;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!p_vf)
+		return;
+
+	mbx = &p_vf->vf_mbx;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "ecore_iov_process_mbx_req vfid %d\n", p_vf->abs_vf_id);
+
+	mbx->first_tlv = mbx->req_virt->first_tlv;
+
+	/* check if tlv type is known */
+	if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
+		/* Lock the per vf op mutex and note the locker's identity.
+		 * The unlock will take place in mbx response.
+		 */
+		ecore_iov_lock_vf_pf_channel(p_hwfn,
+					     p_vf, mbx->first_tlv.tl.type);
+
+		/* switch on the opcode */
+		switch (mbx->first_tlv.tl.type) {
+		case CHANNEL_TLV_ACQUIRE:
+			ecore_iov_vf_mbx_acquire(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_VPORT_START:
+			ecore_iov_vf_mbx_start_vport(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_VPORT_TEARDOWN:
+			ecore_iov_vf_mbx_stop_vport(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_START_RXQ:
+			ecore_iov_vf_mbx_start_rxq(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_START_TXQ:
+			ecore_iov_vf_mbx_start_txq(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_STOP_RXQS:
+			ecore_iov_vf_mbx_stop_rxqs(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_STOP_TXQS:
+			ecore_iov_vf_mbx_stop_txqs(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_UPDATE_RXQ:
+			ecore_iov_vf_mbx_update_rxqs(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_VPORT_UPDATE:
+			ecore_iov_vf_mbx_vport_update(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_UCAST_FILTER:
+			ecore_iov_vf_mbx_ucast_filter(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_CLOSE:
+			ecore_iov_vf_mbx_close(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_INT_CLEANUP:
+			ecore_iov_vf_mbx_int_cleanup(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_RELEASE:
+			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
+			break;
+		}
+
+		ecore_iov_unlock_vf_pf_channel(p_hwfn,
+					       p_vf, mbx->first_tlv.tl.type);
+
+	} else {
+		/* unknown TLV - this may belong to a VF driver from the future
+		 * - a version written after this PF driver was written, which
+		 * supports features unknown as of yet. Too bad since we don't
+		 * support them. Or this may be because someone wrote a buggy
+		 * VF driver and is sending garbage over the channel.
+		 */
+		DP_ERR(p_hwfn,
+		       "unknown TLV. type %d length %d. first 20 bytes of mailbox buffer:\n",
+		       mbx->first_tlv.tl.type, mbx->first_tlv.tl.length);
+
+		for (i = 0; i < 20; i++) {
+			DP_VERBOSE(p_hwfn,
+				   ECORE_MSG_IOV,
+				   "%x ",
+				   mbx->req_virt->tlv_buf_size.tlv_buffer[i]);
+		}
+
+		/* test whether we can respond to the VF (do we have an address
+		 * for it?)
+		 */
+		if (p_vf->state == VF_ACQUIRED)
+			DP_ERR(p_hwfn, "UNKNOWN TLV Not supported yet\n");
+	}
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+	mbx->sw_mbx.mbx_state = VF_PF_RESPONSE_READY;
+	mbx->sw_mbx.response_offset = 0;
+#endif
+}
+
+static enum _ecore_status_t ecore_sriov_vfpf_msg(struct ecore_hwfn *p_hwfn,
+						 __le16 vfid,
+						 struct regpair *vf_msg)
+{
+	struct ecore_vf_info *p_vf;
+	u8 min, max;
+
+	if (!p_hwfn->pf_iov_info || !p_hwfn->pf_iov_info->vfs_array) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Got a message from VF while PF is not initialized for IOV support\n");
+		return ECORE_SUCCESS;
+	}
+
+	/* Find the VF record - message comes with relative [engine] vfid */
+	min = (u8)p_hwfn->hw_info.first_vf_in_pf;
+	max = min + p_hwfn->p_dev->sriov_info.total_vfs;
+	/* @@@TBD - for BE machines, should echo field be reversed? */
+	if ((u8)vfid < min || (u8)vfid >= max) {
+		DP_INFO(p_hwfn,
+			"Got a message from VF with relative id 0x%08x, but PF's range is [0x%02x,...,0x%02x)\n",
+			(u8)vfid, min, max);
+		return ECORE_INVAL;
+	}
+	p_vf = &p_hwfn->pf_iov_info->vfs_array[(u8)vfid - min];
+
+	/* Record the physical address of the request so that the handler
+	 * can later copy the message from it.
+	 */
+	p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | vf_msg->lo;
+
+	return OSAL_PF_VF_MSG(p_hwfn, p_vf->relative_vf_id);
+}
+
+enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
+					   u8 opcode,
+					   __le16 echo,
+					   union event_ring_data *data)
+{
+	switch (opcode) {
+	case COMMON_EVENT_VF_PF_CHANNEL:
+		return ecore_sriov_vfpf_msg(p_hwfn, echo,
+					    &data->vf_pf_channel.msg_addr);
+	case COMMON_EVENT_VF_FLR:
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF-FLR is still not supported\n");
+		return ECORE_SUCCESS;
+	default:
+		DP_INFO(p_hwfn->p_dev, "Unknown sriov eqe event 0x%02x\n",
+			opcode);
+		return ECORE_INVAL;
+	}
+}
+
+bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	return !!(p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
+		   (1ULL << (rel_vf_id % 64)));
+}
+
+bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn, int rel_vf_id,
+			     bool b_enabled_only)
+{
+	if (!p_hwfn->pf_iov_info) {
+		DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
+		return false;
+	}
+
+	return b_enabled_only ? ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id) :
+	    (rel_vf_id < p_hwfn->p_dev->sriov_info.total_vfs);
+}
+
+struct ecore_public_vf_info *ecore_iov_get_public_vf_info(struct ecore_hwfn
+							  *p_hwfn,
+							  u16 relative_vf_id,
+							  bool b_enabled_only)
+{
+	struct ecore_vf_info *vf = OSAL_NULL;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
+	if (!vf)
+		return OSAL_NULL;
+
+	return &vf->p_vf_info;
+}
+
+void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid)
+{
+	u64 add_bit = 1ULL << (vfid % 64);
+
+	/* TODO - add locking mechanisms [no atomics in ecore, so we can't
+	 * add the lock inside the ecore_pf_iov struct].
+	 */
+	p_hwfn->pf_iov_info->pending_events[vfid / 64] |= add_bit;
+}
+
+void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
+					       u64 *events)
+{
+	u64 *p_pending_events = p_hwfn->pf_iov_info->pending_events;
+
+	/* TODO - Take a lock */
+	OSAL_MEMCPY(events, p_pending_events,
+		    sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+	OSAL_MEMSET(p_pending_events, 0, sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+}
+
+enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *ptt, int vfid)
+{
+	struct ecore_dmae_params params;
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return ECORE_INVAL;
+
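+	/* DMA the request from the VF's mailbox [GPA saved in pending_req]
+	 * into the PF-side request buffer.
+	 */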
+	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
+	params.flags = ECORE_DMAE_FLAG_VF_SRC | ECORE_DMAE_FLAG_COMPLETION_DST;
+	params.src_vfid = vf_info->abs_vf_id;
+
+	if (ecore_dmae_host2host(p_hwfn, ptt,
+				 vf_info->vf_mbx.pending_req,
+				 vf_info->vf_mbx.req_phys,
+				 sizeof(union vfpf_tlvs) / 4, &params)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to copy message from VF 0x%02x\n", vfid);
+
+		return ECORE_IO;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
+				       u8 *mac, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+	u64 feature;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set forced MAC, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	feature = 1 << MAC_ADDR_FORCED;
+	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
+
+	vf_info->bulletin.p_virt->valid_bitmap |= feature;
+	/* Forced MAC will disable MAC_ADDR */
+	vf_info->bulletin.p_virt->valid_bitmap &=
+	    ~(1 << VFPF_BULLETIN_MAC_ADDR);
+
+	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
+}
+
+enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
+						u8 *mac, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+	u64 feature;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set MAC, invalid vfid [%d]\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	if (vf_info->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set MAC, Forced MAC is configured\n");
+		return ECORE_INVAL;
+	}
+
+	feature = 1 << VFPF_BULLETIN_MAC_ADDR;
+	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
+
+	vf_info->bulletin.p_virt->valid_bitmap |= feature;
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
+					u16 pvid, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+	u64 feature;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set forced vlan, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	feature = 1 << VLAN_ADDR_FORCED;
+	vf_info->bulletin.p_virt->pvid = pvid;
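+	/* A pvid of 0 removes the forced-vlan configuration */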
+	if (pvid)
+		vf_info->bulletin.p_virt->valid_bitmap |= feature;
+	else
+		vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
+
+	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
+}
+
+enum _ecore_status_t
+ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
+					       bool b_untagged_only, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+	u64 feature;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set untagged default, invalid vfid [%d]\n",
+			  vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Since this is configurable only during vport-start, don't take it
+	 * if we're past that point.
+	 */
+	if (vf_info->state == VF_ENABLED) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can't support untagged change for vfid[%d] - VF is already active\n",
+			   vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Set configuration; This will later be taken into account during the
+	 * VF initialization.
+	 */
+	feature = (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT) |
+	    (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED);
+	vf_info->bulletin.p_virt->valid_bitmap |= feature;
+
+	vf_info->bulletin.p_virt->default_only_untagged = b_untagged_only;
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
+				  u16 *opaque_fid)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return;
+
+	*opaque_fid = vf_info->opaque_fid;
+}
+
+void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
+				u8 *p_vort_id)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return;
+
+	*p_vort_id = vf_info->vport_id;
+}
+
+bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
+{
+	struct ecore_vf_info *p_vf_info;
+
+	p_vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!p_vf_info)
+		return false;
+
+	return !!p_vf_info->vport_instance;
+}
+
+bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
+{
+	struct ecore_vf_info *p_vf_info;
+
+	p_vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+
+	return p_vf_info->state == VF_STOPPED;
+}
+
+bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return false;
+
+	return vf_info->spoof_chk;
+}
+
+bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid)
+{
+	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
+	    !IS_PF_SRIOV_ALLOC(p_hwfn) ||
+	    !ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, vfid))
+		return false;
+	else
+		return true;
+}
+
+enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
+					    int vfid, bool val)
+{
+	enum _ecore_status_t rc = ECORE_INVAL;
+	struct ecore_vf_info *vf;
+
+	if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
+		DP_NOTICE(p_hwfn, true,
+			  "SR-IOV sanity check failed, can't set spoofchk\n");
+		goto out;
+	}
+
+	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf)
+		goto out;
+
+	if (!ecore_iov_vf_has_vport_instance(p_hwfn, vfid)) {
+		/* After VF VPORT start PF will configure spoof check */
+		vf->req_spoofchk_val = val;
+		rc = ECORE_SUCCESS;
+		goto out;
+	}
+
+	rc = __ecore_iov_spoofchk_set(p_hwfn, vf, val);
+
+out:
+	return rc;
+}
+
+u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
+{
+	u8 max_chains_per_vf = p_hwfn->hw_info.max_chains_per_vf;
+
+	max_chains_per_vf = (max_chains_per_vf) ? max_chains_per_vf
+	    : ECORE_MAX_VF_CHAINS_PER_PF;
+
+	return max_chains_per_vf;
+}
+
+void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
+					  u16 rel_vf_id,
+					  void **pp_req_virt_addr,
+					  u16 *p_req_virt_size)
+{
+	struct ecore_vf_info *vf_info =
+	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+
+	if (!vf_info)
+		return;
+
+	if (pp_req_virt_addr)
+		*pp_req_virt_addr = vf_info->vf_mbx.req_virt;
+
+	if (p_req_virt_size)
+		*p_req_virt_size = sizeof(*vf_info->vf_mbx.req_virt);
+}
+
+void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
+					    u16 rel_vf_id,
+					    void **pp_reply_virt_addr,
+					    u16 *p_reply_virt_size)
+{
+	struct ecore_vf_info *vf_info =
+	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+
+	if (!vf_info)
+		return;
+
+	if (pp_reply_virt_addr)
+		*pp_reply_virt_addr = vf_info->vf_mbx.reply_virt;
+
+	if (p_reply_virt_size)
+		*p_reply_virt_size = sizeof(*vf_info->vf_mbx.reply_virt);
+}
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
+						 u16 rel_vf_id)
+{
+	struct ecore_vf_info *vf_info =
+	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+
+	if (!vf_info)
+		return OSAL_NULL;
+
+	return &vf_info->vf_mbx.sw_mbx;
+}
+#endif
+
+bool ecore_iov_is_valid_vfpf_msg_length(u32 length)
+{
+	return (length >= sizeof(struct vfpf_first_tlv) &&
+		(length <= sizeof(union vfpf_tlvs)));
+}
+
+u32 ecore_iov_pfvf_msg_length(void)
+{
+	return sizeof(union pfvf_tlvs);
+}
+
+u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf || !p_vf->bulletin.p_virt)
+		return OSAL_NULL;
+
+	if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)))
+		return OSAL_NULL;
+
+	return p_vf->bulletin.p_virt->mac;
+}
+
+u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
+				       u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf || !p_vf->bulletin.p_virt)
+		return 0;
+
+	if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
+		return 0;
+
+	return p_vf->bulletin.p_virt->pvid;
+}
+
+enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 int vfid, int val)
+{
+	struct ecore_vf_info *vf;
+	enum _ecore_status_t rc;
+	u8 abs_vp_id = 0;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+
+	if (!vf)
+		return ECORE_INVAL;
+
+	rc = ecore_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
+						     int vfid, u32 rate)
+{
+	struct ecore_vf_info *vf;
+	enum _ecore_status_t rc;
+	u8 vport_id;
+	int i;
+
+	for_each_hwfn(p_dev, i) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+		if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
+			DP_NOTICE(p_hwfn, true,
+				  "SR-IOV sanity check failed, can't set min rate\n");
+			return ECORE_INVAL;
+		}
+	}
+
+	vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
+	vport_id = vf->vport_id;
+
+	rc = ecore_configure_vport_wfq(p_dev, vport_id, rate);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    int vfid,
+					    struct ecore_eth_stats *p_stats)
+{
+	struct ecore_vf_info *vf;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf)
+		return ECORE_INVAL;
+
+	if (vf->state != VF_ENABLED)
+		return ECORE_INVAL;
+
+	__ecore_get_vport_stats(p_hwfn, p_ptt, p_stats,
+				vf->abs_vf_id + 0x10, false);
+
+	return ECORE_SUCCESS;
+}
+
+u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return 0;
+
+	return p_vf->num_rxqs;
+}
+
+u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return 0;
+
+	return p_vf->num_active_rxqs;
+}
+
+void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return OSAL_NULL;
+
+	return p_vf->ctx;
+}
+
+u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return 0;
+
+	return p_vf->num_sbs;
+}
+
+bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return false;
+
+	return (p_vf->state == VF_FREE);
+}
+
+bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
+					      u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return false;
+
+	return (p_vf->state == VF_ACQUIRED);
+}
+
+bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+{
+	struct ecore_vf_info *p_vf;
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+	if (!p_vf)
+		return false;
+
+	return (p_vf->state == VF_ENABLED);
+}
+
+int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
+{
+	struct ecore_wfq_data *vf_vp_wfq;
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return 0;
+
+	vf_vp_wfq = &p_hwfn->qm_info.wfq_data[vf_info->vport_id];
+
+	if (vf_vp_wfq->configured)
+		return vf_vp_wfq->min_speed;
+	else
+		return 0;
+}
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
new file mode 100644
index 0000000..9ddc9aa
--- /dev/null
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -0,0 +1,390 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_SRIOV_H__
+#define __ECORE_SRIOV_H__
+
+#include "ecore_status.h"
+#include "ecore_vfpf_if.h"
+#include "ecore_iov_api.h"
+#include "ecore_hsi_common.h"
+
+#define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
+
+#define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
+	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+
+/* Represents a full message: both the request filled by the VF
+ * and the response filled by the PF. The VF needs one copy of
+ * this message - it fills the request part and sends it to the
+ * PF, and the PF copies its response into the response part for
+ * the VF to read later. The PF needs to hold one such message
+ * per VF - the request copied from the VF is placed in the
+ * request part, and the response is filled in by the PF before
+ * being sent to the VF.
+ */
+struct ecore_vf_mbx_msg {
+	union vfpf_tlvs req;
+	union pfvf_tlvs resp;
+};
+
+/* This data is held in the ecore_hwfn structure for VFs only. */
+struct ecore_vf_iov {
+	union vfpf_tlvs *vf2pf_request;
+	dma_addr_t vf2pf_request_phys;
+	union pfvf_tlvs *pf2vf_reply;
+	dma_addr_t pf2vf_reply_phys;
+
+	/* Should be taken whenever the mailbox buffers are accessed */
+	osal_mutex_t mutex;
+	u8 *offset;
+
+	/* Bulletin Board */
+	struct ecore_bulletin bulletin;
+	struct ecore_bulletin_content bulletin_shadow;
+
+	/* we set aside a copy of the acquire response */
+	struct pfvf_acquire_resp_tlv acquire_resp;
+};
+
+/* This mailbox is maintained per VF in its PF and contains all
+ * information required for sending / receiving a message.
+ */
+struct ecore_iov_vf_mbx {
+	union vfpf_tlvs *req_virt;
+	dma_addr_t req_phys;
+	union pfvf_tlvs *reply_virt;
+	dma_addr_t reply_phys;
+
+	/* Address in VF where a pending message is located */
+	dma_addr_t pending_req;
+
+	u8 *offset;
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+	struct ecore_iov_sw_mbx sw_mbx;
+#endif
+
+	/* VF GPA address */
+	u32 vf_addr_lo;
+	u32 vf_addr_hi;
+
+	struct vfpf_first_tlv first_tlv;	/* saved VF request header */
+
+	u8 flags;
+#define VF_MSG_INPROCESS	0x1	/* failsafe - the FW should prevent
+					 * more than one pending msg
+					 */
+};
+
+struct ecore_vf_q_info {
+	u16 fw_rx_qid;
+	u16 fw_tx_qid;
+	u8 fw_cid;
+	u8 rxq_active;
+	u8 txq_active;
+};
+
+enum int_mod {
+	VPORT_INT_MOD_UNDEFINED = 0,
+	VPORT_INT_MOD_ADAPTIVE = 1,
+	VPORT_INT_MOD_OFF = 2,
+	VPORT_INT_MOD_LOW = 100,
+	VPORT_INT_MOD_MEDIUM = 200,
+	VPORT_INT_MOD_HIGH = 300
+};
+
+enum vf_state {
+	VF_FREE = 0,		/* VF ready to be acquired; holds no resc */
+	VF_ACQUIRED = 1,	/* VF, acquired, but not initialized */
+	VF_ENABLED = 2,		/* VF, Enabled */
+	VF_RESET = 3,		/* VF, FLR'd, pending cleanup */
+	VF_STOPPED = 4		/* VF, Stopped */
+};
+
+struct ecore_vf_vlan_shadow {
+	bool used;
+	u16 vid;
+};
+
+struct ecore_vf_shadow_config {
+	/* Shadow copy of all guest vlans */
+	struct ecore_vf_vlan_shadow vlans[ECORE_ETH_VF_NUM_VLAN_FILTERS + 1];
+
+	u8 inner_vlan_removal;
+};
+
+/* PFs maintain an array of this structure, per VF */
+struct ecore_vf_info {
+	struct ecore_iov_vf_mbx vf_mbx;
+	enum vf_state state;
+	u8 to_disable;
+
+	struct ecore_bulletin bulletin;
+	dma_addr_t vf_bulletin;
+
+	u32 concrete_fid;
+	u16 opaque_fid;
+	u16 mtu;
+
+	u8 vport_id;
+	u8 relative_vf_id;
+	u8 abs_vf_id;
+#define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
+					 (p_vf)->abs_vf_id + MAX_NUM_VFS_BB : \
+					 (p_vf)->abs_vf_id)
+
+	u8 vport_instance;	/* Number of active vports */
+	u8 num_rxqs;
+	u8 num_txqs;
+
+	u8 num_sbs;
+
+	u8 num_mac_filters;
+	u8 num_vlan_filters;
+	u8 num_mc_filters;
+
+	struct ecore_vf_q_info vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	/* TODO - Only windows is using it - should be removed */
+	u8 was_malicious;
+	u8 num_active_rxqs;
+	void *ctx;
+	struct ecore_public_vf_info p_vf_info;
+	bool spoof_chk;		/* Current configured on HW */
+	bool req_spoofchk_val;	/* Requested value */
+
+	/* Stores the configuration requested by VF */
+	struct ecore_vf_shadow_config shadow_config;
+
+	/* A bitfield using bulletin's valid-map bits, used to indicate
+	 * which of the bulletin board features have been configured.
+	 */
+	u64 configured_features;
+#define ECORE_IOV_CONFIGURED_FEATURES_MASK	((1 << MAC_ADDR_FORCED) | \
+						 (1 << VLAN_ADDR_FORCED))
+};
+
+/* This structure is part of ecore_hwfn and used only for PFs that have sriov
+ * capability enabled.
+ */
+struct ecore_pf_iov {
+	struct ecore_vf_info vfs_array[MAX_NUM_VFS];
+	u64 pending_events[ECORE_VF_ARRAY_LENGTH];
+	u64 pending_flr[ECORE_VF_ARRAY_LENGTH];
+	u16 base_vport_id;
+
+	/* Mailbox buffers are allocated contiguously, split among the VFs */
+	void *mbx_msg_virt_addr;
+	dma_addr_t mbx_msg_phys_addr;
+	u32 mbx_msg_size;
+	void *mbx_reply_virt_addr;
+	dma_addr_t mbx_reply_phys_addr;
+	u32 mbx_reply_size;
+	void *p_bulletins;
+	dma_addr_t bulletins_phys;
+	u32 bulletins_size;
+};
+
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief Read sriov related information and allocate resources -
+ *  reads from configuration space and shmem, and allocates the VF
+ *  database in the PF.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt);
+
+/**
+ * @brief ecore_add_tlv - place a given tlv on the tlv buffer at next offset
+ *
+ * @param p_hwfn
+ * @param p_iov
+ * @param type
+ * @param length
+ *
+ * @return pointer to the newly placed tlv
+ */
+void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
+		    u8 **offset, u16 type, u16 length);
+
+/**
+ * @brief list the types and lengths of the tlvs on the buffer
+ *
+ * @param p_hwfn
+ * @param tlvs_list
+ */
+void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list);
+
+/**
+ * @brief ecore_iov_alloc - allocate sriov related resources
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief ecore_iov_setup - setup sriov related resources
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+
+/**
+ * @brief ecore_iov_free - free sriov related resources
+ *
+ * @param p_hwfn
+ */
+void ecore_iov_free(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief ecore_sriov_eqe_event - handle an async sriov event arriving on the eqe.
+ *
+ * @param p_hwfn
+ * @param opcode
+ * @param echo
+ * @param data
+ */
+enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
+					   u8 opcode,
+					   __le16 echo,
+					   union event_ring_data *data);
+
+/**
+ * @brief calculate CRC for bulletin board validation
+ *
+ * @param crc - basic crc seed
+ * @param ptr - pointer to beginning of buffer
+ * @param length - length of buffer in bytes
+ *
+ * @return calculated crc over buffer [with respect to seed].
+ */
+u32 ecore_crc32(u32 crc, u8 *ptr, u32 length);
+
+/**
+ * @brief Mark structs of vfs that have been FLR-ed.
+ *
+ * @param p_hwfn
+ * @param disabled_vfs - bitmask of all VFs on path that were FLRed
+ *
+ * @return 1 iff one of the PF's VFs got FLR-ed; 0 otherwise.
+ */
+int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *disabled_vfs);
+
+/**
+ * @brief Search extended TLVs in request/reply buffer.
+ *
+ * @param p_hwfn
+ * @param p_tlvs_list - Pointer to tlvs list
+ * @param req_type - Type of TLV
+ *
+ * @return pointer to tlv type if found, otherwise returns NULL.
+ */
+void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
+				 void *p_tlvs_list, u16 req_type);
+
+/**
+ * @brief ecore_iov_get_vf_info - return the database of a
+ *        specific VF
+ *
+ * @param p_hwfn
+ * @param relative_vf_id - relative id of the VF for which info
+ *			 is requested
+ * @param b_enabled_only - false if access is wanted even when the VF is disabled
+ *
+ * @return struct ecore_vf_info*
+ */
+struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
+					    u16 relative_vf_id,
+					    bool b_enabled_only);
+#else
+static OSAL_INLINE enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn
+							  *p_hwfn,
+							  struct ecore_ptt
+							  *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
+
+static OSAL_INLINE void *ecore_add_tlv(struct ecore_hwfn *p_hwfn, u8 **offset,
+				       u16 type, u16 length)
+{
+	return OSAL_NULL;
+}
+
+static OSAL_INLINE void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn,
+					  void *tlvs_list)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn
+							*p_hwfn)
+{
+	return ECORE_SUCCESS;
+}
+
+static OSAL_INLINE void ecore_iov_setup(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt *p_ptt)
+{
+}
+
+static OSAL_INLINE void ecore_iov_free(struct ecore_hwfn *p_hwfn)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn
+							      *p_hwfn,
+							      u8 opcode,
+							      __le16 echo,
+							      union
+							      event_ring_data
+							      *data)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
+{
+	return 0;
+}
+
+static OSAL_INLINE int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
+					     u32 *disabled_vfs)
+{
+	return 0;
+}
+
+static OSAL_INLINE void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
+						    void *p_tlvs_list,
+						    u16 req_type)
+{
+	return OSAL_NULL;
+}
+
+static OSAL_INLINE struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn
+							       *p_hwfn,
+							       u16
+							       relative_vf_id,
+							       bool
+							       b_enabled_only)
+{
+	return OSAL_NULL;
+}
+
+#endif
+#endif /* __ECORE_SRIOV_H__ */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
new file mode 100644
index 0000000..a452f3d
--- /dev/null
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -0,0 +1,1322 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_hsi_eth.h"
+#include "ecore_sriov.h"
+#include "ecore_l2_api.h"
+#include "ecore_vf.h"
+#include "ecore_vfpf_if.h"
+#include "ecore_status.h"
+#include "reg_addr.h"
+#include "ecore_int.h"
+#include "ecore_l2.h"
+#include "ecore_mcp_api.h"
+#include "ecore_vf_api.h"
+
+static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	void *p_tlv;
+
+	/* This lock is released when the PF's response is received in
+	 * ecore_send_msg2pf(), so ecore_vf_pf_prep() and
+	 * ecore_send_msg2pf() must always be called in sequence.
+	 */
+	OSAL_MUTEX_ACQUIRE(&(p_iov->mutex));
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "preparing to send %s tlv over vf pf channel\n",
+		   ecore_channel_tlvs_string[type]);
+
+	/* Reset request offset */
+	p_iov->offset = (u8 *)(p_iov->vf2pf_request);
+
+	/* Clear mailbox - both request and reply */
+	OSAL_MEMSET(p_iov->vf2pf_request, 0, sizeof(union vfpf_tlvs));
+	OSAL_MEMSET(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs));
+
+	/* Init type and length */
+	p_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset, type, length);
+
+	/* Init first tlv header */
+	((struct vfpf_first_tlv *)p_tlv)->reply_address =
+	    (u64)p_iov->pf2vf_reply_phys;
+
+	return p_tlv;
+}
+
+static int ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
+			     u8 *done, u32 resp_size)
+{
+	struct ustorm_vf_zone *zone_data = (struct ustorm_vf_zone *)
+	    ((u8 *)PXP_VF_BAR0_START_USDM_ZONE_B);
+	union vfpf_tlvs *p_req = p_hwfn->vf_iov_info->vf2pf_request;
+	struct ustorm_trigger_vf_zone trigger;
+	int rc = ECORE_SUCCESS, time = 100;
+	u8 pf_id;
+
+	/* output tlvs list */
+	ecore_dp_tlv_list(p_hwfn, p_req);
+
+	/* need to add the END TLV to the message size */
+	resp_size += sizeof(struct channel_list_end_tlv);
+
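+	/* Without a HW channel, fall back to the OS-provided SW channel */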
+	if (!p_hwfn->p_dev->sriov_info.b_hw_channel) {
+		rc = OSAL_VF_SEND_MSG2PF(p_hwfn->p_dev,
+					 done,
+					 p_req,
+					 p_hwfn->vf_iov_info->pf2vf_reply,
+					 sizeof(union vfpf_tlvs), resp_size);
+		/* TODO - no prints about message ? */
+		goto exit;
+	}
+
+	/* Send TLVs over HW channel */
+	OSAL_MEMSET(&trigger, 0, sizeof(struct ustorm_trigger_vf_zone));
+	trigger.vf_pf_msg_valid = 1;
+	/* TODO - FW should remove this requirement */
+	pf_id = GET_FIELD(p_hwfn->hw_info.concrete_fid, PXP_CONCRETE_FID_PFID);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF -> PF [%02x] message: [%08x, %08x] --> %p, %08x --> %p\n",
+		   pf_id,
+		   U64_HI(p_hwfn->vf_iov_info->vf2pf_request_phys),
+		   U64_LO(p_hwfn->vf_iov_info->vf2pf_request_phys),
+		   &zone_data->non_trigger.vf_pf_msg_addr,
+		   *((u32 *)&trigger), &zone_data->trigger);
+
+	REG_WR(p_hwfn,
+	       (osal_uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.lo,
+	       U64_LO(p_hwfn->vf_iov_info->vf2pf_request_phys));
+
+	REG_WR(p_hwfn,
+	       (osal_uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.hi,
+	       U64_HI(p_hwfn->vf_iov_info->vf2pf_request_phys));
+
+	/* The message data must be written first, to prevent the trigger
+	 * from firing before the data is written.
+	 */
+	OSAL_WMB(p_hwfn->p_dev);
+
+	REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger,
+	       *((u32 *)&trigger));
+
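+	/* Poll the done flag for up to 100 x 25msec */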
+	while ((!*done) && time) {
+		OSAL_MSLEEP(25);
+		time--;
+	}
+
+	if (!*done) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF <-- PF Timeout [Type %d]\n",
+			   p_req->first_tlv.tl.type);
+		rc = ECORE_TIMEOUT;
+		goto exit;
+	} else {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "PF response: %d [Type %d]\n",
+			   *done, p_req->first_tlv.tl.type);
+	}
+
+exit:
+	OSAL_MUTEX_RELEASE(&(p_hwfn->vf_iov_info->mutex));
+
+	return rc;
+}
+
+#define VF_ACQUIRE_THRESH 3
+#define VF_ACQUIRE_MAC_FILTERS 1
+#define VF_ACQUIRE_MC_FILTERS 10
+
+static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_acquire_resp_tlv *resp = &p_iov->pf2vf_reply->acquire_resp;
+	struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
+	struct ecore_vf_acquire_sw_info vf_sw_info;
+	struct vfpf_acquire_tlv *req;
+	int rc = 0, attempts = 0;
+	bool resources_acquired = false;
+
+	/* @@@ TBD: MichalK take this from somewhere else... */
+	u8 rx_count = 1, tx_count = 1, num_sbs = 1;
+	u8 num_mac = VF_ACQUIRE_MAC_FILTERS, num_mc = VF_ACQUIRE_MC_FILTERS;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_ACQUIRE, sizeof(*req));
+
+	/* @@@ TBD: PF may not be ready bnx2x_get_vf_id... */
+	req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	req->resc_request.num_rxqs = rx_count;
+	req->resc_request.num_txqs = tx_count;
+	req->resc_request.num_sbs = num_sbs;
+	req->resc_request.num_mac_filters = num_mac;
+	req->resc_request.num_mc_filters = num_mc;
+	req->resc_request.num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
+
+	OSAL_MEMSET(&vf_sw_info, 0, sizeof(vf_sw_info));
+	OSAL_VF_FILL_ACQUIRE_RESC_REQ(p_hwfn, &req->resc_request, &vf_sw_info);
+
+	req->vfdev_info.os_type = vf_sw_info.os_type;
+	req->vfdev_info.driver_version = vf_sw_info.driver_version;
+	req->vfdev_info.fw_major = FW_MAJOR_VERSION;
+	req->vfdev_info.fw_minor = FW_MINOR_VERSION;
+	req->vfdev_info.fw_revision = FW_REVISION_VERSION;
+	req->vfdev_info.fw_engineering = FW_ENGINEERING_VERSION;
+
+	if (vf_sw_info.override_fw_version)
+		req->vfdev_info.capabilties |= VFPF_ACQUIRE_CAP_OVERRIDE_FW_VER;
+
+	/* pf 2 vf bulletin board address */
+	req->bulletin_addr = p_iov->bulletin.phys;
+	req->bulletin_size = p_iov->bulletin.size;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
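+	/* Negotiate resources with the PF; on PFVF_STATUS_NO_RESOURCE retry
+	 * with the PF-recommended amounts, up to VF_ACQUIRE_THRESH attempts.
+	 */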
+	while (!resources_acquired) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "attempting to acquire resources\n");
+
+		/* send acquire request */
+		rc = ecore_send_msg2pf(p_hwfn,
+				       &resp->hdr.status, sizeof(*resp));
+
+		/* PF timeout */
+		if (rc)
+			return rc;
+
+		/* copy acquire response from buffer to p_hwfn */
+		OSAL_MEMCPY(&p_iov->acquire_resp,
+			    resp, sizeof(p_iov->acquire_resp));
+
+		attempts++;
+
+		/* PF agrees to allocate our resources */
+		if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "resources acquired\n");
+			resources_acquired = true;
+		} else if (resp->hdr.status == PFVF_STATUS_NO_RESOURCE &&
+			   attempts < VF_ACQUIRE_THRESH) {
+			/* PF refuses to allocate our resources */
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "PF unwilling to fulfill resource request. Trying PF-recommended amount\n");
+
+			/* humble our request */
+			req->resc_request.num_txqs = resp->resc.num_txqs;
+			req->resc_request.num_rxqs = resp->resc.num_rxqs;
+			req->resc_request.num_sbs = resp->resc.num_sbs;
+			req->resc_request.num_mac_filters =
+			    resp->resc.num_mac_filters;
+			req->resc_request.num_vlan_filters =
+			    resp->resc.num_vlan_filters;
+			req->resc_request.num_mc_filters =
+			    resp->resc.num_mc_filters;
+
+			/* Clear response buffer */
+			OSAL_MEMSET(p_iov->pf2vf_reply, 0,
+				    sizeof(union pfvf_tlvs));
+		} else {
+			DP_ERR(p_hwfn,
+			       "PF returned error %d to VF acquisition request\n",
+			       resp->hdr.status);
+			return ECORE_AGAIN;
+		}
+	}
+
+	rc = OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(p_hwfn, &resp->resc);
+	if (rc) {
+		DP_NOTICE(p_hwfn, true,
+			  "VF_UPDATE_ACQUIRE_RESC_RESP Failed: status = 0x%x.\n",
+			  rc);
+		return ECORE_AGAIN;
+	}
+
+	/* Update bulletin board size with response from PF */
+	p_iov->bulletin.size = resp->bulletin_size;
+
+	/* get HW info */
+	p_hwfn->p_dev->type = resp->pfdev_info.dev_type;
+	p_hwfn->p_dev->chip_rev = resp->pfdev_info.chip_rev;
+
+	DP_INFO(p_hwfn, "Chip details - %s%d\n",
+		ECORE_IS_BB(p_hwfn->p_dev) ? "BB" : "AH",
+		CHIP_REV_IS_A0(p_hwfn->p_dev) ? 0 : 1);
+
+	/* @@@TBD MichalK: Fw ver... */
+	/* strlcpy(p_hwfn->fw_ver, p_hwfn->acquire_resp.pfdev_info.fw_ver,
+	 *  sizeof(p_hwfn->fw_ver));
+	 */
+
+	p_hwfn->p_dev->chip_num = pfdev_info->chip_num & 0xffff;
+
+	return 0;
+}
+
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev)
+{
+	enum _ecore_status_t rc = ECORE_NOMEM;
+	struct ecore_vf_iov *p_sriov;
+	struct ecore_hwfn *p_hwfn = &p_dev->hwfns[0];	/* @@@TBD CMT */
+
+	p_dev->num_hwfns = 1;	/* @@@TBD CMT must be fixed... */
+
+	p_hwfn->regview = p_dev->regview;
+	if (p_hwfn->regview == OSAL_NULL) {
+		DP_ERR(p_hwfn,
+		       "regview should be initialized before"
+			" ecore_vf_hw_prepare is called\n");
+		return ECORE_INVAL;
+	}
+
+	/* Set the doorbell bar. Assumption: regview is set */
+	p_hwfn->doorbells = (u8 OSAL_IOMEM *) p_hwfn->regview +
+	    PXP_VF_BAR0_START_DQ;
+
+	p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn,
+					  PXP_VF_BAR0_ME_OPAQUE_ADDRESS);
+
+	p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn,
+				      PXP_VF_BAR0_ME_CONCRETE_ADDRESS);
+
+	/* Allocate vf sriov info */
+	p_sriov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sriov));
+	if (!p_sriov) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to allocate `struct ecore_sriov'\n");
+		return ECORE_NOMEM;
+	}
+
+	OSAL_MEMSET(p_sriov, 0, sizeof(*p_sriov));
+
+	/* Allocate vf2pf msg */
+	p_sriov->vf2pf_request = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+							 &p_sriov->
+							 vf2pf_request_phys,
+							 sizeof(union
+								vfpf_tlvs));
+	if (!p_sriov->vf2pf_request) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to allocate `vf2pf_request' DMA memory\n");
+		goto free_p_sriov;
+	}
+
+	p_sriov->pf2vf_reply = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+						       &p_sriov->
+						       pf2vf_reply_phys,
+						       sizeof(union pfvf_tlvs));
+	if (!p_sriov->pf2vf_reply) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to allocate `pf2vf_reply' DMA memory\n");
+		goto free_vf2pf_request;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF's Request mailbox [%p virt 0x%lx phys], Response"
+		   " mailbox [%p virt 0x%lx phys]\n",
+		   p_sriov->vf2pf_request,
+		   (u64)p_sriov->vf2pf_request_phys,
+		   p_sriov->pf2vf_reply, (u64)p_sriov->pf2vf_reply_phys);
+
+	/* Allocate Bulletin board */
+	p_sriov->bulletin.size = sizeof(struct ecore_bulletin_content);
+	p_sriov->bulletin.p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+							   &p_sriov->bulletin.
+							   phys,
+							   p_sriov->bulletin.
+							   size);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF's bulletin Board [%p virt 0x%lx phys 0x%08x bytes]\n",
+		   p_sriov->bulletin.p_virt, (u64)p_sriov->bulletin.phys,
+		   p_sriov->bulletin.size);
+
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_sriov->mutex);
+	OSAL_MUTEX_INIT(&p_sriov->mutex);
+
+	p_hwfn->vf_iov_info = p_sriov;
+
+	p_hwfn->hw_info.personality = ECORE_PCI_ETH;
+
+	/* First VF needs to query for information from PF */
+	if (!p_hwfn->my_id)
+		rc = ecore_vf_pf_acquire(p_hwfn);
+
+	return rc;
+
+free_vf2pf_request:
+	OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_sriov->vf2pf_request,
+			       p_sriov->vf2pf_request_phys,
+			       sizeof(union vfpf_tlvs));
+free_p_sriov:
+	OSAL_FREE(p_hwfn->p_dev, p_sriov);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn *p_hwfn)
+{
+	p_hwfn->b_int_enabled = 1;
+
+	return 0;
+}
+
+/* TEMP TEMP until in HSI */
+#define TSTORM_QZONE_START   PXP_VF_BAR0_START_SDM_ZONE_A
+#define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
+				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+#define USTORM_QZONE_START(dev)   (MSTORM_QZONE_START(dev) + \
+				   (MSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+
+enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+					   u8 rx_qid,
+					   u16 sb,
+					   u8 sb_index,
+					   u16 bd_max_bytes,
+					   dma_addr_t bd_chain_phys_addr,
+					   dma_addr_t cqe_pbl_addr,
+					   u16 cqe_pbl_size,
+					   void OSAL_IOMEM **pp_prod)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_start_rxq_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+	u8 hw_qid;
+	u64 init_prod_val = 0;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_RXQ, sizeof(*req));
+
+	/* @@@TBD MichalK TPA */
+
+	req->rx_qid = rx_qid;
+	req->cqe_pbl_addr = cqe_pbl_addr;
+	req->cqe_pbl_size = cqe_pbl_size;
+	req->rxq_addr = bd_chain_phys_addr;
+	req->hw_sb = sb;
+	req->sb_index = sb_index;
+	req->hc_rate = 0;	/* @@@TBD MichalK -> host coalescing! */
+	req->bd_max_bytes = bd_max_bytes;
+	req->stat_id = -1;	/* No stats at the moment */
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	if (pp_prod) {
+		hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
+
+		*pp_prod = (u8 OSAL_IOMEM *) p_hwfn->regview +
+		    MSTORM_QZONE_START(p_hwfn->p_dev) +
+		    (hw_qid) * MSTORM_QZONE_SIZE +
+		    OFFSETOF(struct mstorm_eth_queue_zone, rx_producers);
+
+		/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
+		__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u64),
+				  (u32 *)(&init_prod_val));
+	}
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
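
Every channel request in this file follows the same four-step shape: clear
the mailbox and prep the first TLV, fill the request body, append the
list-end TLV, then send and check the PF's status. A condensed sketch of
that shape for a request with no body fields; the helper name is
illustrative:

	static enum _ecore_status_t
	vf_simple_request_sketch(struct ecore_hwfn *p_hwfn, u16 tlv_type)
	{
		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
		struct pfvf_def_resp_tlv *resp =
		    &p_iov->pf2vf_reply->default_resp;
		int rc;

		/* steps 1+2: cleared mailbox, first tlv placed */
		ecore_vf_pf_prep(p_hwfn, tlv_type,
				 sizeof(struct vfpf_first_tlv));

		/* step 3: terminate the tlv list */
		ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
			      sizeof(struct channel_list_end_tlv));

		/* step 4: send; the PF writes its status into the reply */
		rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status,
				       sizeof(*resp));
		if (rc)
			return rc;

		return (resp->hdr.status == PFVF_STATUS_SUCCESS) ?
		    ECORE_SUCCESS : ECORE_INVAL;
	}
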
+
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  u16 rx_qid, bool cqe_completion)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_stop_rxqs_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
+
+	/* @@@TBD MichalK TPA */
+
+	/* @@@TBD MichalK - relevant ???
+	 * flags  VFPF_QUEUE_FLG_OV VFPF_QUEUE_FLG_VLAN
+	 */
+	req->rx_qid = rx_qid;
+	req->num_rxqs = 1;
+	req->cqe_completion = cqe_completion;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+					   u16 tx_queue_id,
+					   u16 sb,
+					   u8 sb_index,
+					   dma_addr_t pbl_addr,
+					   u16 pbl_size,
+					   void OSAL_IOMEM **pp_doorbell)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_start_txq_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
+
+	/* @@@TBD MichalK TPA */
+
+	req->tx_qid = tx_queue_id;
+
+	/* Tx */
+	req->pbl_addr = pbl_addr;
+	req->pbl_size = pbl_size;
+	req->hw_sb = sb;
+	req->sb_index = sb_index;
+	req->hc_rate = 0;	/* @@@TBD MichalK -> host coalescing! */
+	req->flags = 0;		/* @@@TBD MichalK -> flags... */
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	if (pp_doorbell) {
+		u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
+
+		*pp_doorbell = (u8 OSAL_IOMEM *) p_hwfn->doorbells +
+		    DB_ADDR(cid, DQ_DEMS_LEGACY);
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_stop_txqs_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
+
+	/* @@@TBD MichalK TPA */
+
+	/* @@@TBD MichalK - relevant ??? flags
+	 * VFPF_QUEUE_FLG_OV VFPF_QUEUE_FLG_VLAN
+	 */
+	req->tx_qid = tx_qid;
+	req->num_txqs = 1;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     u16 rx_queue_id,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg, u8 comp_event_flg)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	struct vfpf_update_rxq_tlv *req;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
+
+	req->rx_qid = rx_queue_id;
+	req->num_rxqs = num_rxqs;
+
+	if (comp_cqe_flg)
+		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
+	if (comp_event_flg)
+		req->flags |= VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
+			u16 mtu, u8 inner_vlan_removal,
+			enum ecore_tpa_mode tpa_mode, u8 max_buffers_per_cqe,
+			u8 only_untagged)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_vport_start_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc, i;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_START, sizeof(*req));
+
+	req->mtu = mtu;
+	req->vport_id = vport_id;
+	req->inner_vlan_removal = inner_vlan_removal;
+	req->tpa_mode = tpa_mode;
+	req->max_buffers_per_cqe = max_buffers_per_cqe;
+	req->only_untagged = only_untagged;
+
+	/* status blocks */
+	for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++)
+		if (p_hwfn->sbs_info[i])
+			req->sb_addr[i] = p_hwfn->sbs_info[i]->sb_phys;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_TEARDOWN,
+			 sizeof(struct vfpf_first_tlv));
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return rc;
+}
+
+static void
+ecore_vf_handle_vp_update_tlvs_resp(struct ecore_hwfn *p_hwfn,
+				    struct ecore_sp_vport_update_params *p_data)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_def_resp_tlv *p_resp;
+	u16 tlv;
+
+	if (p_data->update_vport_active_rx_flg ||
+	    p_data->update_vport_active_tx_flg) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update activate tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update activate tlv config failed\n");
+	}
+
+	if (p_data->update_tx_switching_flg) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update tx switch tlv configured\n");
+#ifndef ASIC_ONLY
+		else if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
+			DP_NOTICE(p_hwfn, false,
+				  "FPGA: Skip checking whether PF"
+				  " replied to Tx-switching request\n");
+#endif
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update tx switch tlv config failed\n");
+	}
+
+	if (p_data->update_inner_vlan_removal_flg) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update vlan strip tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update vlan strip tlv config failed\n");
+	}
+
+	if (p_data->update_approx_mcast_flg) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update mcast tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update mcast tlv config failed\n");
+	}
+
+	if (p_data->accept_flags.update_rx_mode_config ||
+	    p_data->accept_flags.update_tx_mode_config) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update accept_mode tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update accept_mode tlv config failed\n");
+	}
+
+	if (p_data->rss_params) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update rss tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update rss tlv config failed\n");
+	}
+
+	if (p_data->sge_tpa_params) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
+		p_resp = (struct pfvf_def_resp_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+		if (p_resp && p_resp->hdr.status)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VP update sge tpa tlv configured\n");
+		else
+			DP_NOTICE(p_hwfn, true,
+				  "VP update sge tpa tlv config failed\n");
+	}
+}
+
+enum _ecore_status_t
+ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
+			 struct ecore_sp_vport_update_params *p_params)
+{
+	struct vfpf_vport_update_accept_any_vlan_tlv *p_any_vlan_tlv;
+	struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
+	struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
+	struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
+	struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
+	struct vfpf_vport_update_sge_tpa_tlv *p_sge_tpa_tlv;
+	struct vfpf_vport_update_activate_tlv *p_act_tlv;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+	struct vfpf_vport_update_tlv *req;
+	struct pfvf_def_resp_tlv *resp;
+	u8 update_rx, update_tx;
+	u32 resp_size = 0;
+	u16 size, tlv;
+	int rc;
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	resp_size = sizeof(*resp);
+
+	update_rx = p_params->update_vport_active_rx_flg;
+	update_tx = p_params->update_vport_active_tx_flg;
+
+	/* clear mailbox and prep header tlv */
+	ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_UPDATE, sizeof(*req));
+
+	/* Prepare extended tlvs */
+	if (update_rx || update_tx) {
+		size = sizeof(struct vfpf_vport_update_activate_tlv);
+		p_act_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					  CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
+					  size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		if (update_rx) {
+			p_act_tlv->update_rx = update_rx;
+			p_act_tlv->active_rx = p_params->vport_active_rx_flg;
+		}
+
+		if (update_tx) {
+			p_act_tlv->update_tx = update_tx;
+			p_act_tlv->active_tx = p_params->vport_active_tx_flg;
+		}
+	}
+
+	if (p_params->update_inner_vlan_removal_flg) {
+		size = sizeof(struct vfpf_vport_update_vlan_strip_tlv);
+		p_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					   CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
+					   size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		p_vlan_tlv->remove_vlan = p_params->inner_vlan_removal_flg;
+	}
+
+	if (p_params->update_tx_switching_flg) {
+		size = sizeof(struct vfpf_vport_update_tx_switch_tlv);
+		tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+		p_tx_switch_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+						tlv, size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		p_tx_switch_tlv->tx_switching = p_params->tx_switching_flg;
+	}
+
+	if (p_params->update_approx_mcast_flg) {
+		size = sizeof(struct vfpf_vport_update_mcast_bin_tlv);
+		p_mcast_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					    CHANNEL_TLV_VPORT_UPDATE_MCAST,
+					    size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		OSAL_MEMCPY(p_mcast_tlv->bins, p_params->bins,
+			    sizeof(unsigned long) *
+			    ETH_MULTICAST_MAC_BINS_IN_REGS);
+	}
+
+	update_rx = p_params->accept_flags.update_rx_mode_config;
+	update_tx = p_params->accept_flags.update_tx_mode_config;
+
+	if (update_rx || update_tx) {
+		tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+		size = sizeof(struct vfpf_vport_update_accept_param_tlv);
+		p_accept_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset, tlv, size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		if (update_rx) {
+			p_accept_tlv->update_rx_mode = update_rx;
+			p_accept_tlv->rx_accept_filter =
+			    p_params->accept_flags.rx_accept_filter;
+		}
+
+		if (update_tx) {
+			p_accept_tlv->update_tx_mode = update_tx;
+			p_accept_tlv->tx_accept_filter =
+			    p_params->accept_flags.tx_accept_filter;
+		}
+	}
+
+	if (p_params->rss_params) {
+		struct ecore_rss_params *rss_params = p_params->rss_params;
+
+		size = sizeof(struct vfpf_vport_update_rss_tlv);
+		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					  CHANNEL_TLV_VPORT_UPDATE_RSS, size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		if (rss_params->update_rss_config)
+			p_rss_tlv->update_rss_flags |=
+			    VFPF_UPDATE_RSS_CONFIG_FLAG;
+		if (rss_params->update_rss_capabilities)
+			p_rss_tlv->update_rss_flags |=
+			    VFPF_UPDATE_RSS_CAPS_FLAG;
+		if (rss_params->update_rss_ind_table)
+			p_rss_tlv->update_rss_flags |=
+			    VFPF_UPDATE_RSS_IND_TABLE_FLAG;
+		if (rss_params->update_rss_key)
+			p_rss_tlv->update_rss_flags |= VFPF_UPDATE_RSS_KEY_FLAG;
+
+		p_rss_tlv->rss_enable = rss_params->rss_enable;
+		p_rss_tlv->rss_caps = rss_params->rss_caps;
+		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
+		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
+			    sizeof(rss_params->rss_ind_table));
+		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
+			    sizeof(rss_params->rss_key));
+	}
+
+	if (p_params->update_accept_any_vlan_flg) {
+		size = sizeof(struct vfpf_vport_update_accept_any_vlan_tlv);
+		tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+		p_any_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					       tlv, size);
+
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+		p_any_vlan_tlv->accept_any_vlan = p_params->accept_any_vlan;
+		p_any_vlan_tlv->update_accept_any_vlan_flg =
+		    p_params->update_accept_any_vlan_flg;
+	}
+
+	if (p_params->sge_tpa_params) {
+		struct ecore_sge_tpa_params *sge_tpa_params =
+		    p_params->sge_tpa_params;
+
+		size = sizeof(struct vfpf_vport_update_sge_tpa_tlv);
+		p_sge_tpa_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+					      CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+					      size);
+		resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+		if (sge_tpa_params->update_tpa_en_flg)
+			p_sge_tpa_tlv->update_sge_tpa_flags |=
+			    VFPF_UPDATE_TPA_EN_FLAG;
+		if (sge_tpa_params->update_tpa_param_flg)
+			p_sge_tpa_tlv->update_sge_tpa_flags |=
+			    VFPF_UPDATE_TPA_PARAM_FLAG;
+
+		if (sge_tpa_params->tpa_ipv4_en_flg)
+			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV4_EN_FLAG;
+		if (sge_tpa_params->tpa_ipv6_en_flg)
+			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV6_EN_FLAG;
+		if (sge_tpa_params->tpa_pkt_split_flg)
+			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_PKT_SPLIT_FLAG;
+		if (sge_tpa_params->tpa_hdr_data_split_flg)
+			p_sge_tpa_tlv->sge_tpa_flags |=
+			    VFPF_TPA_HDR_DATA_SPLIT_FLAG;
+		if (sge_tpa_params->tpa_gro_consistent_flg)
+			p_sge_tpa_tlv->sge_tpa_flags |=
+			    VFPF_TPA_GRO_CONSIST_FLAG;
+
+		p_sge_tpa_tlv->tpa_max_aggs_num =
+		    sge_tpa_params->tpa_max_aggs_num;
+		p_sge_tpa_tlv->tpa_max_size = sge_tpa_params->tpa_max_size;
+		p_sge_tpa_tlv->tpa_min_size_to_start =
+		    sge_tpa_params->tpa_min_size_to_start;
+		p_sge_tpa_tlv->tpa_min_size_to_cont =
+		    sge_tpa_params->tpa_min_size_to_cont;
+
+		p_sge_tpa_tlv->max_buffers_per_cqe =
+		    sge_tpa_params->max_buffers_per_cqe;
+	}
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, resp_size);
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	ecore_vf_handle_vp_update_tlvs_resp(p_hwfn, p_params);
+
+	return rc;
+}
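
Because each extended TLV earns its own pfvf_def_resp_tlv in the reply
(hence the resp_size accumulation above), callers enable features by
populating only the relevant fields of the update params. A sketch that
turns on RSS alone; the values are illustrative and the indirection
table/key fill is elided:

	struct ecore_sp_vport_update_params params;
	struct ecore_rss_params rss;
	enum _ecore_status_t rc;

	OSAL_MEMSET(&params, 0, sizeof(params));
	OSAL_MEMSET(&rss, 0, sizeof(rss));

	rss.update_rss_config = 1;
	rss.rss_enable = 1;
	rss.rss_table_size_log = 7;	/* 2^7 == 128-entry table */
	/* ...fill rss.rss_ind_table[] and rss.rss_key[] here... */

	params.rss_params = &rss;
	rc = ecore_vf_pf_vport_update(p_hwfn, &params);
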
+
+enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_first_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_CLOSE, sizeof(*req));
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_AGAIN;
+
+	p_hwfn->b_int_enabled = 0;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_first_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	u32 size;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc == ECORE_SUCCESS && resp->hdr.status != PFVF_STATUS_SUCCESS)
+		rc = ECORE_AGAIN;
+
+	p_hwfn->b_int_enabled = 0;
+
+	/* TODO - might need to revise this for 100g */
+	if (IS_LEAD_HWFN(p_hwfn))
+		OSAL_MUTEX_DEALLOC(&p_iov->mutex);
+
+	if (p_iov->vf2pf_request)
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov->vf2pf_request,
+				       p_iov->vf2pf_request_phys,
+				       sizeof(union vfpf_tlvs));
+	if (p_iov->pf2vf_reply)
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov->pf2vf_reply,
+				       p_iov->pf2vf_reply_phys,
+				       sizeof(union pfvf_tlvs));
+
+	if (p_iov->bulletin.p_virt) {
+		size = sizeof(struct ecore_bulletin_content);
+		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
+				       p_iov->bulletin.p_virt,
+				       p_iov->bulletin.phys, size);
+	}
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->vf_iov_info);
+	p_hwfn->vf_iov_info = OSAL_NULL;
+
+	return rc;
+}
+
+void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
+			      struct ecore_filter_mcast *p_filter_cmd)
+{
+	struct ecore_sp_vport_update_params sp_params;
+	int i;
+
+	OSAL_MEMSET(&sp_params, 0, sizeof(sp_params));
+	sp_params.update_approx_mcast_flg = 1;
+
+	if (p_filter_cmd->opcode == ECORE_FILTER_ADD) {
+		for (i = 0; i < p_filter_cmd->num_mc_addrs; i++) {
+			u32 bit;
+
+			bit = ecore_mcast_bin_from_mac(p_filter_cmd->mac[i]);
+			OSAL_SET_BIT(bit, sp_params.bins);
+		}
+	}
+
+	ecore_vf_pf_vport_update(p_hwfn, &sp_params);
+}
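
A usage sketch for the helper above; mc_addr0 and mc_addr1 stand in for
caller-provided ETH_ALEN byte arrays:

	struct ecore_filter_mcast mcast;

	OSAL_MEMSET(&mcast, 0, sizeof(mcast));
	mcast.opcode = ECORE_FILTER_ADD;
	mcast.num_mc_addrs = 2;
	OSAL_MEMCPY(mcast.mac[0], mc_addr0, ETH_ALEN);
	OSAL_MEMCPY(mcast.mac[1], mc_addr1, ETH_ALEN);

	/* each MAC is hashed into an approximate bin; the aggregated
	 * bins travel in a single vport-update request */
	ecore_vf_pf_filter_mcast(p_hwfn, &mcast);
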
+
+enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
+					      struct ecore_filter_ucast
+					      *p_ucast)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_ucast_filter_tlv *req;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* Sanitize */
+	if (p_ucast->opcode == ECORE_FILTER_MOVE) {
+		DP_NOTICE(p_hwfn, true,
+			  "VFs don't support Moving of filters\n");
+		return ECORE_INVAL;
+	}
+
+	/* clear mailbox and prep first tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UCAST_FILTER, sizeof(*req));
+	req->opcode = (u8)p_ucast->opcode;
+	req->type = (u8)p_ucast->type;
+	OSAL_MEMCPY(req->mac, p_ucast->mac, ETH_ALEN);
+	req->vlan = p_ucast->vlan;
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_AGAIN;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_INT_CLEANUP,
+			 sizeof(struct vfpf_first_tlv));
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		return ECORE_INVAL;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
+					    u8 *p_change)
+{
+	struct ecore_bulletin_content shadow;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	u32 crc, crc_size = sizeof(p_iov->bulletin.p_virt->crc);
+
+	*p_change = 0;
+
+	/* Need to guarantee PF is not in the middle of writing it */
+	OSAL_MEMCPY(&shadow, p_iov->bulletin.p_virt, p_iov->bulletin.size);
+
+	/* If version did not update, no need to do anything */
+	if (shadow.version == p_iov->bulletin_shadow.version)
+		return ECORE_SUCCESS;
+
+	/* Verify the bulletin we see is valid */
+	crc = ecore_crc32(0, (u8 *)&shadow + crc_size,
+			  p_iov->bulletin.size - crc_size);
+	if (crc != shadow.crc)
+		return ECORE_AGAIN;
+
+	/* Set the shadow bulletin and process it */
+	OSAL_MEMCPY(&p_iov->bulletin_shadow, &shadow, p_iov->bulletin.size);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Read a bulletin update %08x\n", shadow.version);
+
+	*p_change = 1;
+
+	return ECORE_SUCCESS;
+}
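
The CRC guard above means a torn read (PF writing mid-copy) returns
ECORE_AGAIN and leaves the previous shadow intact. A sketch of the
periodic poll a VF driver might run on top of it; the function name is
illustrative:

	static void vf_poll_bulletin_sketch(struct ecore_hwfn *p_hwfn)
	{
		struct ecore_mcp_link_state link;
		u8 change = 0;

		if (ecore_vf_read_bulletin(p_hwfn, &change) != ECORE_SUCCESS)
			return;	/* crc mismatch - retry on the next tick */

		if (!change)
			return;

		/* the shadow copy is now consistent; read derived state */
		ecore_vf_get_link_state(p_hwfn, &link);
		/* ...act on link.link_up, link.speed, etc. */
	}
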
+
+u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+
+	if (!p_iov) {
+		DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't initialized\n");
+		return 0;
+	}
+
+	return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
+}
+
+void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+				struct ecore_mcp_link_params *p_params,
+				struct ecore_bulletin_content *p_bulletin)
+{
+	OSAL_MEMSET(p_params, 0, sizeof(*p_params));
+
+	p_params->speed.autoneg = p_bulletin->req_autoneg;
+	p_params->speed.advertised_speeds = p_bulletin->req_adv_speed;
+	p_params->speed.forced_speed = p_bulletin->req_forced_speed;
+	p_params->pause.autoneg = p_bulletin->req_autoneg_pause;
+	p_params->pause.forced_rx = p_bulletin->req_forced_rx;
+	p_params->pause.forced_tx = p_bulletin->req_forced_tx;
+	p_params->loopback_mode = p_bulletin->req_loopback;
+}
+
+void ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+			      struct ecore_mcp_link_params *params)
+{
+	__ecore_vf_get_link_params(p_hwfn, params,
+				   &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+			       struct ecore_mcp_link_state *p_link,
+			       struct ecore_bulletin_content *p_bulletin)
+{
+	OSAL_MEMSET(p_link, 0, sizeof(*p_link));
+
+	p_link->link_up = p_bulletin->link_up;
+	p_link->speed = p_bulletin->speed;
+	p_link->full_duplex = p_bulletin->full_duplex;
+	p_link->an = p_bulletin->autoneg;
+	p_link->an_complete = p_bulletin->autoneg_complete;
+	p_link->parallel_detection = p_bulletin->parallel_detection;
+	p_link->pfc_enabled = p_bulletin->pfc_enabled;
+	p_link->partner_adv_speed = p_bulletin->partner_adv_speed;
+	p_link->partner_tx_flow_ctrl_en = p_bulletin->partner_tx_flow_ctrl_en;
+	p_link->partner_rx_flow_ctrl_en = p_bulletin->partner_rx_flow_ctrl_en;
+	p_link->partner_adv_pause = p_bulletin->partner_adv_pause;
+	p_link->sfp_tx_fault = p_bulletin->sfp_tx_fault;
+}
+
+void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+			     struct ecore_mcp_link_state *link)
+{
+	__ecore_vf_get_link_state(p_hwfn, link,
+				  &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+			      struct ecore_mcp_link_capabilities *p_link_caps,
+			      struct ecore_bulletin_content *p_bulletin)
+{
+	OSAL_MEMSET(p_link_caps, 0, sizeof(*p_link_caps));
+	p_link_caps->speed_capabilities = p_bulletin->capability_speed;
+}
+
+void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+			    struct ecore_mcp_link_capabilities *p_link_caps)
+{
+	__ecore_vf_get_link_caps(p_hwfn, p_link_caps,
+				 &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
+{
+	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
+}
+
+void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
+{
+	OSAL_MEMCPY(port_mac,
+		    p_hwfn->vf_iov_info->acquire_resp.pfdev_info.port_mac,
+		    ETH_ALEN);
+}
+
+void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
+				   u8 *num_vlan_filters)
+{
+	struct ecore_vf_iov *p_vf;
+
+	p_vf = p_hwfn->vf_iov_info;
+	*num_vlan_filters = p_vf->acquire_resp.resc.num_vlan_filters;
+}
+
+bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
+{
+	struct ecore_bulletin_content *bulletin;
+
+	bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+	if (!(bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)))
+		return true;
+
+	/* Forbid VF from changing a MAC enforced by PF */
+	if (OSAL_MEMCMP(bulletin->mac, mac, ETH_ALEN))
+		return false;
+
+	return true;
+}
+
+bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
+				      u8 *p_is_forced)
+{
+	struct ecore_bulletin_content *bulletin;
+
+	bulletin = &hwfn->vf_iov_info->bulletin_shadow;
+
+	if (bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
+		if (p_is_forced)
+			*p_is_forced = 1;
+	} else if (bulletin->valid_bitmap & (1 << VFPF_BULLETIN_MAC_ADDR)) {
+		if (p_is_forced)
+			*p_is_forced = 0;
+	} else {
+		return false;
+	}
+
+	OSAL_MEMCPY(dst_mac, bulletin->mac, ETH_ALEN);
+
+	return true;
+}
+
+bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
+{
+	struct ecore_bulletin_content *bulletin;
+
+	bulletin = &hwfn->vf_iov_info->bulletin_shadow;
+
+	if (!(bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
+		return false;
+
+	if (dst_pvid)
+		*dst_pvid = bulletin->pvid;
+
+	return true;
+}
+
+void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
+			     u16 *fw_major, u16 *fw_minor, u16 *fw_rev,
+			     u16 *fw_eng)
+{
+	struct pf_vf_pfdev_info *info;
+
+	info = &p_hwfn->vf_iov_info->acquire_resp.pfdev_info;
+
+	*fw_major = info->fw_major;
+	*fw_minor = info->fw_minor;
+	*fw_rev = info->fw_rev;
+	*fw_eng = info->fw_eng;
+}
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
new file mode 100644
index 0000000..a006dac
--- /dev/null
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -0,0 +1,415 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_VF_H__
+#define __ECORE_VF_H__
+
+#include "ecore_status.h"
+#include "ecore_vf_api.h"
+#include "ecore_l2_api.h"
+#include "ecore_vfpf_if.h"
+
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ *
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_dev
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev);
+
+/**
+ *
+ * @brief VF init in hw (equivalent to hw_init in PF)
+ *      mark interrupts as enabled
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn *p_hwfn);
+
+/**
+ *
+ * @brief VF - start the RX Queue by sending a message to the PF
+ *
+ * @param p_hwfn
+ * @param rx_queue_id		- zero based within the VF
+ * @param sb			- VF status block for this queue
+ * @param sb_index		- Index within the status block
+ * @param bd_max_bytes		- maximum number of bytes per bd
+ * @param bd_chain_phys_addr	- physical address of bd chain
+ * @param cqe_pbl_addr		- physical address of pbl
+ * @param cqe_pbl_size		- pbl size
+ * @param pp_prod		- pointer to the producer to be
+ *	    used in fastpath
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+					   u8 rx_queue_id,
+					   u16 sb,
+					   u8 sb_index,
+					   u16 bd_max_bytes,
+					   dma_addr_t bd_chain_phys_addr,
+					   dma_addr_t cqe_pbl_addr,
+					   u16 cqe_pbl_size,
+					   void OSAL_IOMEM **pp_prod);
+
+/**
+ *
+ * @brief VF - start the TX queue by sending a message to the
+ *        PF.
+ *
+ * @param p_hwfn
+ * @param tx_queue_id		- zero based within the VF
+ * @param sb			- status block for this queue
+ * @param sb_index		- index within the status block
+ * @param pbl_addr		- physical address of pbl
+ * @param pbl_size		- pbl size
+ * @param pp_doorbell		- pointer to address to which to
+ *		write the doorbell
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+					   u16 tx_queue_id,
+					   u16 sb,
+					   u8 sb_index,
+					   dma_addr_t pbl_addr,
+					   u16 pbl_size,
+					   void OSAL_IOMEM **pp_doorbell);
+
+/**
+ *
+ * @brief VF - stop the RX queue by sending a message to the PF
+ *
+ * @param p_hwfn
+ * @param rx_qid
+ * @param cqe_completion
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  u16 rx_qid, bool cqe_completion);
+
+/**
+ *
+ * @brief VF - stop the TX queue by sending a message to the PF
+ *
+ * @param p_hwfn
+ * @param tx_qid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  u16 tx_qid);
+
+/**
+ * @brief VF - update the RX queue by sending a message to the
+ *        PF
+ *
+ * @param p_hwfn
+ * @param rx_queue_id
+ * @param num_rxqs
+ * @param comp_cqe_flg
+ * @param comp_event_flg
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     u16 rx_queue_id,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
+
+/**
+ *
+ * @brief VF - send a vport update command
+ *
+ * @param p_hwfn
+ * @param params
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
+			 struct ecore_sp_vport_update_params *p_params);
+
+/**
+ *
+ * @brief VF - send a close message to PF
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn);
+
+/**
+ *
+ * @brief VF - free the VF's memory
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn);
+
+/**
+ *
+ * @brief ecore_vf_get_igu_sb_id - Get the IGU SB ID for a given
+ *        sb_id. For VFs igu sbs don't have to be contiguous
+ *
+ * @param p_hwfn
+ * @param sb_id
+ *
+ * @return u16
+ */
+u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
+
+/**
+ * @brief ecore_vf_pf_vport_start - perform vport start for VF.
+ *
+ * @param p_hwfn
+ * @param vport_id
+ * @param mtu
+ * @param inner_vlan_removal
+ * @param tpa_mode
+ * @param max_buffers_per_cqe
+ * @param only_untagged - default behavior regarding vlan acceptance
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn,
+					     u8 vport_id,
+					     u16 mtu,
+					     u8 inner_vlan_removal,
+					     enum ecore_tpa_mode tpa_mode,
+					     u8 max_buffers_per_cqe,
+					     u8 only_untagged);
+
+/**
+ * @brief ecore_vf_pf_vport_stop - stop the VF's vport
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn);
+
+enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
+					      struct ecore_filter_ucast
+					      *p_param);
+
+void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
+			      struct ecore_filter_mcast *p_filter_cmd);
+
+/**
+ * @brief ecore_vf_pf_int_cleanup - clean the SB of the VF
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - return the link params in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_params - pointer to a struct to fill with link params
+ * @param p_bulletin
+ */
+void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+				struct ecore_mcp_link_params *p_params,
+				struct ecore_bulletin_content *p_bulletin);
+
+/**
+ * @brief - return the link state in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_link - pointer to a struct to fill with link state
+ * @param p_bulletin
+ */
+void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+			       struct ecore_mcp_link_state *p_link,
+			       struct ecore_bulletin_content *p_bulletin);
+
+/**
+ * @brief - return the link capabilities in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_link - pointer to a struct to fill with link capabilities
+ * @param p_bulletin
+ */
+void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+			      struct ecore_mcp_link_capabilities *p_link_caps,
+			      struct ecore_bulletin_content *p_bulletin);
+
+#else
+static OSAL_INLINE enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev
+							    *p_dev)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn
+							 *p_hwfn)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn
+							      *p_hwfn,
+							      u8 rx_queue_id,
+							      u16 sb,
+							      u8 sb_index,
+							      u16 bd_max_bytes,
+							      dma_addr_t
+							      bd_chain_phys_adr,
+							      dma_addr_t
+							      cqe_pbl_addr,
+							      u16 cqe_pbl_size,
+							      void OSAL_IOMEM **
+							      pp_prod)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn
+							      *p_hwfn,
+							      u16 tx_queue_id,
+							      u16 sb,
+							      u8 sb_index,
+							      dma_addr_t
+							      pbl_addr,
+							      u16 pbl_size,
+							      void OSAL_IOMEM **
+							      pp_doorbell)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn
+							     *p_hwfn,
+							     u16 rx_qid,
+							     bool
+							     cqe_completion)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn
+							     *p_hwfn,
+							     u16 tx_qid)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxqs_update(struct
+								ecore_hwfn
+								*p_hwfn,
+								u16 rx_queue_id,
+								u8 num_rxqs,
+								u8 comp_cqe_flg,
+								u8
+								comp_event_flg)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_update(
+	struct ecore_hwfn *p_hwfn,
+	struct ecore_sp_vport_update_params *p_params)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn
+							  *p_hwfn)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn
+							    *p_hwfn)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
+					      u16 sb_id)
+{
+	return 0;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_start(
+	struct ecore_hwfn *p_hwfn, u8 vport_id, u16 mtu,
+	u8 inner_vlan_removal, enum ecore_tpa_mode tpa_mode,
+	u8 max_buffers_per_cqe, u8 only_untagged)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_stop(
+	struct ecore_hwfn *p_hwfn)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_filter_ucast(
+	 struct ecore_hwfn *p_hwfn, struct ecore_filter_ucast *p_param)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
+						 struct ecore_filter_mcast
+						 *p_filter_cmd)
+{
+}
+
+static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_int_cleanup(struct
+								ecore_hwfn
+								*p_hwfn)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+						   struct ecore_mcp_link_params
+						   *p_params,
+						   struct ecore_bulletin_content
+						   *p_bulletin)
+{
+}
+
+static OSAL_INLINE void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+						  struct ecore_mcp_link_state
+						  *p_link,
+						  struct ecore_bulletin_content
+						  *p_bulletin)
+{
+}
+
+static OSAL_INLINE void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+						 struct
+						 ecore_mcp_link_capabilities
+						 *p_link_caps,
+						 struct ecore_bulletin_content
+						 *p_bulletin)
+{
+}
+#endif
+
+#endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
new file mode 100644
index 0000000..cce1813
--- /dev/null
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -0,0 +1,186 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_VF_API_H__
+#define __ECORE_VF_API_H__
+
+#include "ecore_sp_api.h"
+#include "ecore_mcp_api.h"
+
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief Read the VF bulletin and act on it if needed
+ *
+ * @param p_hwfn
+ * @param p_change - ecore sets this to 1 if the bulletin board has
+ *                   changed, 0 otherwise.
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
+					    u8 *p_change);
+
+/**
+ * @brief Get link parameters for VF from ecore
+ *
+ * @param p_hwfn
+ * @param params - the link params structure to be filled for the VF
+ */
+void ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+			      struct ecore_mcp_link_params *params);
+
+/**
+ * @brief Get link state for VF from ecore
+ *
+ * @param p_hwfn
+ * @param link - the link state structure to be filled for the VF
+ */
+void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+			     struct ecore_mcp_link_state *link);
+
+/**
+ * @brief Get link capabilities for VF from ecore
+ *
+ * @param p_hwfn
+ * @param p_link_caps - the link capabilities structure to be filled for the VF
+ */
+void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+			    struct ecore_mcp_link_capabilities *p_link_caps);
+
+/**
+ * @brief Get number of Rx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_rxqs - allocated RX queues
+ */
+void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs);
+
+/**
+ * @brief Get port mac address for VF
+ *
+ * @param p_hwfn
+ * @param port_mac - destination location for port mac
+ */
+void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac);
+
+/**
+ * @brief Get number of VLAN filters allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_vlan_filters - allocated VLAN filters
+ */
+void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
+				   u8 *num_vlan_filters);
+
+/**
+ * @brief Check if VF can set a MAC address
+ *
+ * @param p_hwfn
+ * @param mac
+ *
+ * @return bool
+ */
+bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
+
+/**
+ * @brief Copy forced MAC address from bulletin board
+ *
+ * @param hwfn
+ * @param dst_mac
+ * @param p_is_forced - out param indicating, when a MAC exists,
+ *	        whether it is forced or not.
+ *
+ * @return bool       - true if a MAC exists, false otherwise.
+ */
+bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
+				      u8 *p_is_forced);
+
+/**
+ * @brief Check if a forced vlan is set and copy the forced vlan
+ *        from the bulletin board
+ *
+ * @param hwfn
+ * @param dst_pvid
+ * @return bool
+ */
+bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
+
+/**
+ * @brief Set firmware version information in dev_info from the VF's
+ *        acquire response tlv
+ *
+ * @param p_hwfn
+ * @param fw_major
+ * @param fw_minor
+ * @param fw_rev
+ * @param fw_eng
+ */
+void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
+			     u16 *fw_major,
+			     u16 *fw_minor, u16 *fw_rev, u16 *fw_eng);
+#else
+static OSAL_INLINE enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn
+							       *p_hwfn,
+							       u8 *p_change)
+{
+	return ECORE_INVAL;
+}
+
+static OSAL_INLINE void ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
+						 struct ecore_mcp_link_params
+						 *params)
+{
+}
+
+static OSAL_INLINE void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
+						struct ecore_mcp_link_state
+						*link)
+{
+}
+
+static OSAL_INLINE void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
+					       struct
+					       ecore_mcp_link_capabilities
+					       *p_link_caps)
+{
+}
+
+static OSAL_INLINE void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
+					      u8 *num_rxqs)
+{
+}
+
+static OSAL_INLINE void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn,
+					      u8 *port_mac)
+{
+}
+
+static OSAL_INLINE void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
+						      u8 *num_vlan_filters)
+{
+}
+
+static OSAL_INLINE bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
+{
+	return false;
+}
+
+static OSAL_INLINE bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn
+							 *hwfn, u8 *dst_mac,
+							 u8 *p_is_forced)
+{
+	return false;
+}
+
+static OSAL_INLINE void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
+						u16 *fw_major, u16 *fw_minor,
+						u16 *fw_rev, u16 *fw_eng)
+{
+}
+#endif
+#endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
new file mode 100644
index 0000000..e5cf097
--- /dev/null
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -0,0 +1,590 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_VF_PF_IF_H__
+#define __ECORE_VF_PF_IF_H__
+
+#define T_ETH_INDIRECTION_TABLE_SIZE 128
+#define T_ETH_RSS_KEY_SIZE 10
+#ifndef aligned_u64
+#define aligned_u64 u64
+#endif
+
+/***********************************************
+ *
+ * Common definitions for all HVs
+ *
+ **/
+struct vf_pf_resc_request {
+	u8 num_rxqs;
+	u8 num_txqs;
+	u8 num_sbs;
+	u8 num_mac_filters;
+	u8 num_vlan_filters;
+	u8 num_mc_filters;	/* No limit, so superfluous */
+	u16 padding;
+};
+
+struct hw_sb_info {
+	u16 hw_sb_id;		/* aka absolute igu id, used to ack the sb */
+	u8 sb_qid;		/* used to update DHC for sb */
+	u8 padding[5];
+};
+
+/***********************************************
+ *
+ * HW VF-PF channel definitions
+ *
+ * A.K.A VF-PF mailbox
+ *
+ **/
+#define TLV_BUFFER_SIZE		1024
+#define TLV_ALIGN		sizeof(u64)
+#define PF_VF_BULLETIN_SIZE	512
+
+#define VFPF_RX_MASK_ACCEPT_NONE		0x00000000
+#define VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST     0x00000001
+#define VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST   0x00000002
+#define VFPF_RX_MASK_ACCEPT_ALL_UNICAST	0x00000004
+#define VFPF_RX_MASK_ACCEPT_ALL_MULTICAST       0x00000008
+#define VFPF_RX_MASK_ACCEPT_BROADCAST	0x00000010
+/* TODO: #define VFPF_RX_MASK_ACCEPT_ANY_VLAN   0x00000020 */
+
+#define BULLETIN_CONTENT_SIZE	(sizeof(struct pf_vf_bulletin_content))
+#define BULLETIN_ATTEMPTS       5	/* crc failures tolerated before giving up */
+#define BULLETIN_CRC_SEED       0
+
+enum {
+	PFVF_STATUS_WAITING = 0,
+	PFVF_STATUS_SUCCESS,
+	PFVF_STATUS_FAILURE,
+	PFVF_STATUS_NOT_SUPPORTED,
+	PFVF_STATUS_NO_RESOURCE,
+	PFVF_STATUS_FORCED,
+};
+
+/* vf pf channel tlvs */
+/* general tlv header (used for both vf->pf request and pf->vf response) */
+struct channel_tlv {
+	u16 type;
+	u16 length;
+};
+
+/* header of first vf->pf tlv carries the offset used to calculate response
+ * buffer address
+ */
+struct vfpf_first_tlv {
+	struct channel_tlv tl;
+	u32 padding;
+	aligned_u64 reply_address;
+};
+
+/* header of pf->vf tlvs, carries the status of handling the request */
+struct pfvf_tlv {
+	struct channel_tlv tl;
+	u8 status;
+	u8 padding[3];
+};
+
+/* response tlv used for most tlvs */
+struct pfvf_def_resp_tlv {
+	struct pfvf_tlv hdr;
+};
+
+/* used to terminate and pad a tlv list */
+struct channel_list_end_tlv {
+	struct channel_tlv tl;
+	u8 padding[4];
+};
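
A vf->pf request buffer is therefore a first TLV, zero or more extended
TLVs, and a terminating list-end TLV, all within TLV_BUFFER_SIZE. A sketch
of how a receiver can walk such a list (in the spirit of
ecore_iov_search_list_tlvs; the walker name is illustrative):

	static struct channel_tlv *tlv_walk_sketch(void *buf, u16 wanted)
	{
		u8 *pos = (u8 *)buf + sizeof(struct vfpf_first_tlv);

		while (1) {
			struct channel_tlv *tlv = (struct channel_tlv *)pos;

			if (tlv->type == wanted)
				return tlv;
			if (tlv->type == CHANNEL_TLV_LIST_END || !tlv->length)
				return OSAL_NULL; /* end of list/malformed */
			pos += tlv->length;
		}
	}
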
+
+/* Acquire */
+struct vfpf_acquire_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	struct vf_pf_vfdev_info {
+#define VFPF_ACQUIRE_CAP_OVERRIDE_FW_VER		(1 << 0)
+		aligned_u64 capabilties;
+		u8 fw_major;
+		u8 fw_minor;
+		u8 fw_revision;
+		u8 fw_engineering;
+		u32 driver_version;
+		u16 opaque_fid;	/* ME register value */
+		u8 os_type;	/* VFPF_ACQUIRE_OS_* value */
+		u8 padding[5];
+	} vfdev_info;
+
+	struct vf_pf_resc_request resc_request;
+
+	aligned_u64 bulletin_addr;
+	u32 bulletin_size;
+	u32 padding;
+};
+
+/* receive side scaling tlv */
+struct vfpf_vport_update_rss_tlv {
+	struct channel_tlv tl;
+
+	u8 update_rss_flags;
+#define VFPF_UPDATE_RSS_CONFIG_FLAG	  (1 << 0)
+#define VFPF_UPDATE_RSS_CAPS_FLAG	  (1 << 1)
+#define VFPF_UPDATE_RSS_IND_TABLE_FLAG	  (1 << 2)
+#define VFPF_UPDATE_RSS_KEY_FLAG	  (1 << 3)
+
+	u8 rss_enable;
+	u8 rss_caps;
+	u8 rss_table_size_log;	/* The table size is 2 ^ rss_table_size_log */
+	u16 rss_ind_table[T_ETH_INDIRECTION_TABLE_SIZE];
+	u32 rss_key[T_ETH_RSS_KEY_SIZE];
+};
+
+struct pfvf_storm_stats {
+	u32 address;
+	u32 len;
+};
+
+struct pfvf_stats_info {
+	struct pfvf_storm_stats mstats;
+	struct pfvf_storm_stats pstats;
+	struct pfvf_storm_stats tstats;
+	struct pfvf_storm_stats ustats;
+};
+
+/* acquire response tlv - carries the allocated resources */
+struct pfvf_acquire_resp_tlv {
+	struct pfvf_tlv hdr;
+
+	struct pf_vf_pfdev_info {
+		u32 chip_num;
+		u32 mfw_ver;
+
+		u16 fw_major;
+		u16 fw_minor;
+		u16 fw_rev;
+		u16 fw_eng;
+
+		aligned_u64 capabilities;
+#define PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED	(1 << 0)
+
+		u16 db_size;
+		u8 indices_per_sb;
+		u8 os_type;
+
+		/* These should match the PF's ecore_dev values */
+		u16 chip_rev;
+		u8 dev_type;
+
+		u8 padding;
+
+		struct pfvf_stats_info stats_info;
+
+		u8 port_mac[ETH_ALEN];
+		u8 padding2[2];
+	} pfdev_info;
+
+	struct pf_vf_resc {
+		/* in case of status NO_RESOURCE in message hdr, pf will fill
+		 * this struct with suggested amount of resources for next
+		 * acquire request
+		 */
+#define PFVF_MAX_QUEUES_PER_VF         16
+#define PFVF_MAX_SBS_PER_VF            16
+		struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF];
+		u8 hw_qid[PFVF_MAX_QUEUES_PER_VF];
+		u8 cid[PFVF_MAX_QUEUES_PER_VF];
+
+		u8 num_rxqs;
+		u8 num_txqs;
+		u8 num_sbs;
+		u8 num_mac_filters;
+		u8 num_vlan_filters;
+		u8 num_mc_filters;
+		u8 padding[2];
+	} resc;
+
+	u32 bulletin_size;
+	u32 padding;
+};
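
As the comment inside pf_vf_resc notes, a NO_RESOURCE reply doubles as a
counter-offer. A sketch of the retry a VF can perform; p_req and p_resp
stand in for the real acquire request/response plumbing:

	if (p_resp->hdr.status == PFVF_STATUS_NO_RESOURCE) {
		struct vf_pf_resc_request *p_rr = &p_req->resc_request;

		/* adopt the PF's suggested amounts, then re-send
		 * CHANNEL_TLV_ACQUIRE with the shrunken request */
		p_rr->num_rxqs = p_resp->resc.num_rxqs;
		p_rr->num_txqs = p_resp->resc.num_txqs;
		p_rr->num_sbs = p_resp->resc.num_sbs;
		p_rr->num_mac_filters = p_resp->resc.num_mac_filters;
		p_rr->num_vlan_filters = p_resp->resc.num_vlan_filters;
	}
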
+
+/* Init VF */
+struct vfpf_init_tlv {
+	struct vfpf_first_tlv first_tlv;
+	aligned_u64 stats_addr;
+
+	u16 rx_mask;
+	u16 tx_mask;
+	u8 drop_ttl0_flg;
+	u8 padding[3];
+
+};
+
+/* Setup Queue */
+struct vfpf_start_rxq_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	/* physical addresses */
+	aligned_u64 rxq_addr;
+	aligned_u64 deprecated_sge_addr;
+	aligned_u64 cqe_pbl_addr;
+
+	u16 cqe_pbl_size;
+	u16 hw_sb;
+	u16 rx_qid;
+	u16 hc_rate;		/* desired interrupts per sec. */
+
+	u16 bd_max_bytes;
+	u16 stat_id;
+	u8 sb_index;
+	u8 padding[3];
+
+};
+
+struct vfpf_start_txq_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	/* physical addresses */
+	aligned_u64 pbl_addr;
+	u16 pbl_size;
+	u16 stat_id;
+	u16 tx_qid;
+	u16 hw_sb;
+
+	u32 flags;		/* VFPF_QUEUE_FLG_X flags */
+	u16 hc_rate;		/* desired interrupts per sec. */
+	u8 sb_index;
+	u8 padding[3];
+};
+
+/* Stop RX Queue */
+struct vfpf_stop_rxqs_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	u16 rx_qid;
+	u8 num_rxqs;
+	u8 cqe_completion;
+	u8 padding[4];
+};
+
+/* Stop TX Queues */
+struct vfpf_stop_txqs_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	u16 tx_qid;
+	u8 num_txqs;
+	u8 padding[5];
+};
+
+struct vfpf_update_rxq_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	aligned_u64 deprecated_sge_addr[PFVF_MAX_QUEUES_PER_VF];
+
+	u16 rx_qid;
+	u8 num_rxqs;
+	u8 flags;
+#define VFPF_RXQ_UPD_INIT_SGE_DEPRECATE_FLAG	(1 << 0)
+#define VFPF_RXQ_UPD_COMPLETE_CQE_FLAG		(1 << 1)
+#define VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG	(1 << 2)
+
+	u8 padding[4];
+};
+
+/* Set Queue Filters */
+struct vfpf_q_mac_vlan_filter {
+	u32 flags;
+#define VFPF_Q_FILTER_DEST_MAC_VALID    0x01
+#define VFPF_Q_FILTER_VLAN_TAG_VALID    0x02
+#define VFPF_Q_FILTER_SET_MAC	0x100	/* set/clear */
+
+	u8 mac[ETH_ALEN];
+	u16 vlan_tag;
+
+	u8 padding[4];
+};
+
+/* Start a vport */
+struct vfpf_vport_start_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	aligned_u64 sb_addr[PFVF_MAX_SBS_PER_VF];
+
+	u32 tpa_mode;
+	u16 dep1;
+	u16 mtu;
+
+	u8 vport_id;
+	u8 inner_vlan_removal;
+
+	u8 only_untagged;
+	u8 max_buffers_per_cqe;
+
+	u8 padding[4];
+};
+
+/* Extended tlvs - need to add rss, mcast, accept mode tlvs */
+struct vfpf_vport_update_activate_tlv {
+	struct channel_tlv tl;
+	u8 update_rx;
+	u8 update_tx;
+	u8 active_rx;
+	u8 active_tx;
+};
+
+struct vfpf_vport_update_tx_switch_tlv {
+	struct channel_tlv tl;
+	u8 tx_switching;
+	u8 padding[3];
+};
+
+struct vfpf_vport_update_vlan_strip_tlv {
+	struct channel_tlv tl;
+	u8 remove_vlan;
+	u8 padding[3];
+};
+
+struct vfpf_vport_update_mcast_bin_tlv {
+	struct channel_tlv tl;
+	u8 padding[4];
+
+	aligned_u64 bins[8];
+};
+
+struct vfpf_vport_update_accept_param_tlv {
+	struct channel_tlv tl;
+	u8 update_rx_mode;
+	u8 update_tx_mode;
+	u8 rx_accept_filter;
+	u8 tx_accept_filter;
+};
+
+struct vfpf_vport_update_accept_any_vlan_tlv {
+	struct channel_tlv tl;
+	u8 update_accept_any_vlan_flg;
+	u8 accept_any_vlan;
+
+	u8 padding[2];
+};
+
+struct vfpf_vport_update_sge_tpa_tlv {
+	struct channel_tlv tl;
+
+	u16 sge_tpa_flags;
+#define VFPF_TPA_IPV4_EN_FLAG	     (1 << 0)
+#define VFPF_TPA_IPV6_EN_FLAG        (1 << 1)
+#define VFPF_TPA_PKT_SPLIT_FLAG      (1 << 2)
+#define VFPF_TPA_HDR_DATA_SPLIT_FLAG (1 << 3)
+#define VFPF_TPA_GRO_CONSIST_FLAG    (1 << 4)
+
+	u8 update_sge_tpa_flags;
+#define VFPF_UPDATE_SGE_DEPRECATED_FLAG	   (1 << 0)
+#define VFPF_UPDATE_TPA_EN_FLAG    (1 << 1)
+#define VFPF_UPDATE_TPA_PARAM_FLAG (1 << 2)
+
+	u8 max_buffers_per_cqe;
+
+	u16 deprecated_sge_buff_size;
+	u16 tpa_max_size;
+	u16 tpa_min_size_to_start;
+	u16 tpa_min_size_to_cont;
+
+	u8 tpa_max_aggs_num;
+	u8 padding[7];
+
+};
+
+/* Primary tlv as a header for various extended tlvs for
+ * various functionalities in vport update ramrod.
+ */
+struct vfpf_vport_update_tlv {
+	struct vfpf_first_tlv first_tlv;
+};
+
+struct vfpf_ucast_filter_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	u8 opcode;
+	u8 type;
+
+	u8 mac[ETH_ALEN];
+
+	u16 vlan;
+	u16 padding[3];
+};
+
+struct tlv_buffer_size {
+	u8 tlv_buffer[TLV_BUFFER_SIZE];
+};
+
+union vfpf_tlvs {
+	struct vfpf_first_tlv first_tlv;
+	struct vfpf_acquire_tlv acquire;
+	struct vfpf_init_tlv init;
+	struct vfpf_start_rxq_tlv start_rxq;
+	struct vfpf_start_txq_tlv start_txq;
+	struct vfpf_stop_rxqs_tlv stop_rxqs;
+	struct vfpf_stop_txqs_tlv stop_txqs;
+	struct vfpf_update_rxq_tlv update_rxq;
+	struct vfpf_vport_start_tlv start_vport;
+	struct vfpf_vport_update_tlv vport_update;
+	struct vfpf_ucast_filter_tlv ucast_filter;
+	struct channel_list_end_tlv list_end;
+	struct tlv_buffer_size tlv_buf_size;
+};
+
+union pfvf_tlvs {
+	struct pfvf_def_resp_tlv default_resp;
+	struct pfvf_acquire_resp_tlv acquire_resp;
+	struct channel_list_end_tlv list_end;
+	struct tlv_buffer_size tlv_buf_size;
+};
+
+/* This structure is allocated in the VF; the PF may update it whenever it
+ * deems necessary. The VF samples the bulletin board periodically. The PF
+ * maintains a copy per VF to prevent loss of data upon multiple updates
+ * (and to avoid read-modify-write cycles).
+ */
+enum ecore_bulletin_bit {
+	/* Alert the VF that a forced MAC was set by the PF */
+	MAC_ADDR_FORCED = 0,
+
+	/* The VF should not access the vfpf channel */
+	VFPF_CHANNEL_INVALID = 1,
+
+	/* Alert the VF that a forced VLAN was set by the PF */
+	VLAN_ADDR_FORCED = 2,
+
+	/* Indicate that `default_only_untagged' contains actual data */
+	VFPF_BULLETIN_UNTAGGED_DEFAULT = 3,
+	VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED = 4,
+
+	/* Alert the VF that a suggested MAC was sent by the PF.
+	 * MAC_ADDR will be disabled in case MAC_ADDR_FORCED is set
+	 */
+	VFPF_BULLETIN_MAC_ADDR = 5
+};
+
+struct ecore_bulletin_content {
+	u32 crc;		/* crc of the structure, used to detect a
+				 * read taken mid-update
+				 */
+	u32 version;
+
+	aligned_u64 valid_bitmap;	/* bitmap indicating which fields
+					 * hold valid values
+					 */
+
+	u8 mac[ETH_ALEN];	/* used for MAC_ADDR or MAC_ADDR_FORCED */
+
+	u8 default_only_untagged;	/* If valid, 1 => only untagged Rx
+					 * if no vlan filter is configured.
+					 */
+	u8 padding;
+
+	/* The following is a 'copy' of ecore_mcp_link_state,
+	 * ecore_mcp_link_params and ecore_mcp_link_capabilities. Since it's
+	 * possible the structs will increase further along the road we cannot
+	 * have it here; Instead we need to have all of its fields.
+	 */
+	u8 req_autoneg;
+	u8 req_autoneg_pause;
+	u8 req_forced_rx;
+	u8 req_forced_tx;
+	u8 padding2[4];
+
+	u32 req_adv_speed;
+	u32 req_forced_speed;
+	u32 req_loopback;
+	u32 padding3;
+
+	u8 link_up;
+	u8 full_duplex;
+	u8 autoneg;
+	u8 autoneg_complete;
+	u8 parallel_detection;
+	u8 pfc_enabled;
+	u8 partner_tx_flow_ctrl_en;
+	u8 partner_rx_flow_ctrl_en;
+	u8 partner_adv_pause;
+	u8 sfp_tx_fault;
+	u8 padding4[6];
+
+	u32 speed;
+	u32 partner_adv_speed;
+
+	u32 capability_speed;
+
+	/* Forced vlan */
+	u16 pvid;
+	u16 padding5;
+};
+
+struct ecore_bulletin {
+	dma_addr_t phys;
+	struct ecore_bulletin_content *p_virt;
+	u32 size;
+};
+
+#ifndef print_enum
+enum {
+/*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
+
+	CHANNEL_TLV_NONE,	/* ends tlv sequence */
+	CHANNEL_TLV_ACQUIRE,
+	CHANNEL_TLV_VPORT_START,
+	CHANNEL_TLV_VPORT_UPDATE,
+	CHANNEL_TLV_VPORT_TEARDOWN,
+	CHANNEL_TLV_START_RXQ,
+	CHANNEL_TLV_START_TXQ,
+	CHANNEL_TLV_STOP_RXQS,
+	CHANNEL_TLV_STOP_TXQS,
+	CHANNEL_TLV_UPDATE_RXQ,
+	CHANNEL_TLV_INT_CLEANUP,
+	CHANNEL_TLV_CLOSE,
+	CHANNEL_TLV_RELEASE,
+	CHANNEL_TLV_LIST_END,
+	CHANNEL_TLV_UCAST_FILTER,
+	CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
+	CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH,
+	CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
+	CHANNEL_TLV_VPORT_UPDATE_MCAST,
+	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM,
+	CHANNEL_TLV_VPORT_UPDATE_RSS,
+	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
+	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_MAX
+/*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
+};
+extern const char *ecore_channel_tlvs_string[];
+
+#else
+print_enum(channel_tlvs, CHANNEL_TLV_NONE,	/* ends tlv sequence */
+	   CHANNEL_TLV_ACQUIRE,
+	   CHANNEL_TLV_VPORT_START,
+	   CHANNEL_TLV_VPORT_UPDATE,
+	   CHANNEL_TLV_VPORT_TEARDOWN,
+	   CHANNEL_TLV_SETUP_RXQ,
+	   CHANNEL_TLV_SETUP_TXQ,
+	   CHANNEL_TLV_STOP_RXQS,
+	   CHANNEL_TLV_STOP_TXQS,
+	   CHANNEL_TLV_UPDATE_RXQ,
+	   CHANNEL_TLV_INT_CLEANUP,
+	   CHANNEL_TLV_CLOSE,
+	   CHANNEL_TLV_RELEASE,
+	   CHANNEL_TLV_LIST_END,
+	   CHANNEL_TLV_UCAST_FILTER,
+	   CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
+	   CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH,
+	   CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
+	   CHANNEL_TLV_VPORT_UPDATE_MCAST,
+	   CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM,
+	   CHANNEL_TLV_VPORT_UPDATE_RSS,
+	   CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
+	   CHANNEL_TLV_VPORT_UPDATE_SGE_TPA, CHANNEL_TLV_MAX);
+#endif
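+
+/* Illustrative sketch only, not part of this patch: PF replies arrive as
+ * a TLV sequence terminated by CHANNEL_TLV_LIST_END, so a consumer can
+ * walk it by length. This assumes the channel_tlv {type, length} layout
+ * defined earlier in this file; the walk_tlvs() name is made up, and
+ * ecore_channel_tlvs_string[] maps each type to a printable name.
+ */
+#if 0	/* example only, not compiled */
+static void walk_tlvs(struct channel_tlv *tlv)
+{
+	while (tlv->type != CHANNEL_TLV_LIST_END) {
+		/* e.g. trace ecore_channel_tlvs_string[tlv->type] here */
+		tlv = (struct channel_tlv *)((u8 *)tlv + tlv->length);
+	}
+}
+#endif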
+
+#endif /* __ECORE_VF_PF_IF_H__ */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 530b2c1..33f3f78 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -821,9 +821,27 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 		return -ENOMEM;
 	}
 
-	ether_addr_copy((struct ether_addr *)edev->hwfns[0].
+	if (!is_vf)
+		ether_addr_copy((struct ether_addr *)edev->hwfns[0].
 				hw_info.hw_mac_addr,
 				&eth_dev->data->mac_addrs[0]);
+	else {
+		ecore_vf_read_bulletin(&edev->hwfns[0], &bulletin_change);
+		if (bulletin_change) {
+			is_mac_exist =
+			    ecore_vf_bulletin_get_forced_mac(&edev->hwfns[0],
+							     vf_mac,
+							     &is_mac_forced);
+			if (is_mac_exist && is_mac_forced) {
+				DP_INFO(edev, "VF macaddr received from PF\n");
+				ether_addr_copy((struct ether_addr *)&vf_mac,
+						&eth_dev->data->mac_addrs[0]);
+			} else {
+				DP_NOTICE(edev, false,
+					  "No VF macaddr assigned\n");
+			}
+		}
+	}
 
 	eth_dev->dev_ops = &qede_eth_dev_ops;
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 5550349..eb44e05 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -19,14 +19,14 @@
 #include "base/ecore.h"
 #include "base/ecore_dev_api.h"
 #include "base/ecore_l2_api.h"
-#include "base/ecore_sp_api.h"
-#include "base/ecore_mcp_api.h"
+#include "base/ecore_vf_api.h"
 #include "base/ecore_hsi_common.h"
 #include "base/ecore_int_api.h"
 #include "base/ecore_chain.h"
 #include "base/ecore_status.h"
 #include "base/ecore_hsi_eth.h"
 #include "base/ecore_dev_api.h"
+#include "base/ecore_iov_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1f25908..46d4b6c 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -171,12 +171,14 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 #endif
 
 #ifdef CONFIG_QED_BINARY_FW
-	rc = qed_load_firmware_data(edev);
-	if (rc) {
-		DP_NOTICE(edev, true,
-			  "Failed to find fw file %s\n",
-			  QEDE_FW_FILE_NAME);
-		goto err;
+	if (IS_PF(edev)) {
+		rc = qed_load_firmware_data(edev);
+		if (rc) {
+			DP_NOTICE(edev, true,
+				  "Failed to find fw file %s\n",
+				  QEDE_FW_FILE_NAME);
+			goto err;
+		}
 	}
 #endif
 
@@ -188,17 +190,20 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	edev->int_coalescing_mode = ECORE_COAL_MODE_ENABLE;
 
 	/* Should go with CONFIG_QED_BINARY_FW */
-	/* Allocate stream for unzipping */
-	rc = qed_alloc_stream_mem(edev);
-	if (rc) {
-		DP_NOTICE(edev, true,
-		"Failed to allocate stream memory\n");
-		goto err2;
+	if (IS_PF(edev)) {
+		/* Allocate stream for unzipping */
+		rc = qed_alloc_stream_mem(edev);
+		if (rc) {
+			DP_NOTICE(edev, true,
+			"Failed to allocate stream memory\n");
+			goto err2;
+		}
 	}
 
 	/* Start the slowpath */
 #ifdef CONFIG_QED_BINARY_FW
-	data = edev->firmware;
+	if (IS_PF(edev))
+		data = edev->firmware;
 #endif
 	allow_npar_tx_switching = npar_tx_switching ? true : false;
 
@@ -224,19 +229,21 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 
 	DP_INFO(edev, "HW inited and function started\n");
 
-	hwfn = ECORE_LEADING_HWFN(edev);
-	drv_version.version = (params->drv_major << 24) |
+	if (IS_PF(edev)) {
+		hwfn = ECORE_LEADING_HWFN(edev);
+		drv_version.version = (params->drv_major << 24) |
 		    (params->drv_minor << 16) |
 		    (params->drv_rev << 8) | (params->drv_eng);
-	/* TBD: strlcpy() */
-	strncpy((char *)drv_version.name, (const char *)params->name,
+		/* TBD: strlcpy() */
+		strncpy((char *)drv_version.name, (const char *)params->name,
 			MCP_DRV_VER_STR_SIZE - 4);
-	rc = ecore_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
+		rc = ecore_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
 						&drv_version);
-	if (rc) {
-		DP_NOTICE(edev, true,
-			  "Failed sending drv version command\n");
-		return rc;
+		if (rc) {
+			DP_NOTICE(edev, true,
+				  "Failed sending drv version command\n");
+			return rc;
+		}
 	}
 
 	ecore_reset_vport_stats(edev);
@@ -248,9 +255,11 @@ err2:
 	ecore_resc_free(edev);
 err:
 #ifdef CONFIG_QED_BINARY_FW
-	if (edev->firmware)
-		rte_free(edev->firmware);
-	edev->firmware = NULL;
+	if (IS_PF(edev)) {
+		if (edev->firmware)
+			rte_free(edev->firmware);
+		edev->firmware = NULL;
+	}
 #endif
 	return rc;
 }
@@ -266,28 +275,38 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
-	dev_info->fw_major = FW_MAJOR_VERSION;
-	dev_info->fw_minor = FW_MINOR_VERSION;
-	dev_info->fw_rev = FW_REVISION_VERSION;
-	dev_info->fw_eng = FW_ENGINEERING_VERSION;
-	dev_info->mf_mode = edev->mf_mode;
-	dev_info->tx_switching = tx_switching ? true : false;
+	if (IS_PF(edev)) {
+		dev_info->fw_major = FW_MAJOR_VERSION;
+		dev_info->fw_minor = FW_MINOR_VERSION;
+		dev_info->fw_rev = FW_REVISION_VERSION;
+		dev_info->fw_eng = FW_ENGINEERING_VERSION;
+		dev_info->mf_mode = edev->mf_mode;
+		dev_info->tx_switching = tx_switching ? true : false;
+	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+	}
 
-	ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
-	if (ptt) {
-		ecore_mcp_get_mfw_ver(edev, ptt,
+	if (IS_PF(edev)) {
+		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
+		if (ptt) {
+			ecore_mcp_get_mfw_ver(edev, ptt,
 					      &dev_info->mfw_rev, NULL);
 
-		ecore_mcp_get_flash_size(ECORE_LEADING_HWFN(edev), ptt,
+			ecore_mcp_get_flash_size(ECORE_LEADING_HWFN(edev), ptt,
 						 &dev_info->flash_size);
 
-		/* Workaround to allow PHY-read commands for
-		 * B0 bringup.
-		 */
-		if (ECORE_IS_BB_B0(edev))
-			dev_info->flash_size = 0xffffffff;
+			/* Workaround to allow PHY-read commands for
+			 * B0 bringup.
+			 */
+			if (ECORE_IS_BB_B0(edev))
+				dev_info->flash_size = 0xffffffff;
 
-		ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
+			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
+		}
+	} else {
+		ecore_mcp_get_mfw_ver(edev, ptt, &dev_info->mfw_rev, NULL);
 	}
 
 	return 0;
@@ -303,18 +322,31 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 
 	info->num_tc = 1 /* @@@TBD aelior MULTI_COS */;
 
-	info->num_queues = 0;
-	for_each_hwfn(edev, i)
+	if (IS_PF(edev)) {
+		info->num_queues = 0;
+		for_each_hwfn(edev, i)
 		    info->num_queues +=
 		    FEAT_NUM(&edev->hwfns[i], ECORE_PF_L2_QUE);
 
-	info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN);
+		info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN);
 
-	rte_memcpy(&info->port_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
+		rte_memcpy(&info->port_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 			   ETHER_ADDR_LEN);
+	} else {
+		ecore_vf_get_num_rxqs(&edev->hwfns[0], &info->num_queues);
+
+		ecore_vf_get_num_vlan_filters(&edev->hwfns[0],
+					      &info->num_vlan_filters);
+
+		ecore_vf_get_port_mac(&edev->hwfns[0],
+				      (uint8_t *) &info->port_mac);
+	}
 
 	qed_fill_dev_info(edev, &info->common);
 
+	if (IS_VF(edev))
+		memset(&info->common.hw_mac, 0, ETHER_ADDR_LEN);
+
 	return 0;
 }
 
@@ -376,11 +408,18 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
 	memset(if_link, 0, sizeof(*if_link));
 
 	/* Prepare source inputs */
-	rte_memcpy(&params, ecore_mcp_get_link_params(hwfn),
+	if (IS_PF(hwfn->p_dev)) {
+		rte_memcpy(&params, ecore_mcp_get_link_params(hwfn),
 		       sizeof(params));
-	rte_memcpy(&link, ecore_mcp_get_link_state(hwfn), sizeof(link));
-	rte_memcpy(&link_caps, ecore_mcp_get_link_capabilities(hwfn),
+		rte_memcpy(&link, ecore_mcp_get_link_state(hwfn), sizeof(link));
+		rte_memcpy(&link_caps, ecore_mcp_get_link_capabilities(hwfn),
 		       sizeof(link_caps));
+	} else {
+		ecore_vf_read_bulletin(hwfn, &change);
+		ecore_vf_get_link_params(hwfn, &params);
+		ecore_vf_get_link_state(hwfn, &link);
+		ecore_vf_get_link_caps(hwfn, &link_caps);
+	}
 
 	/* Set the link parameters to pass to protocol driver */
 	if (link.link_up)
@@ -426,6 +465,9 @@ static int qed_set_link(struct ecore_dev *edev, struct qed_link_params *params)
 	struct ecore_mcp_link_params *link_params;
 	int rc;
 
+	if (IS_VF(edev))
+		return 0;
+
 	/* The link should be set only once per PF */
 	hwfn = &edev->hwfns[0];
 
@@ -465,6 +507,9 @@ static int qed_drain(struct ecore_dev *edev)
 	struct ecore_ptt *ptt;
 	int i, rc;
 
+	if (IS_VF(edev))
+		return 0;
+
 	for_each_hwfn(edev, i) {
 		hwfn = &edev->hwfns[i];
 		ptt = ecore_ptt_acquire(hwfn);
@@ -517,9 +562,15 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 	if (!edev)
 		return -ENODEV;
 
-	qed_free_stream_mem(edev);
+	if (IS_PF(edev)) {
+		qed_free_stream_mem(edev);
 
-	qed_nic_stop(edev);
+#ifdef CONFIG_QED_SRIOV
+		if (IS_QED_ETH_IF(edev))
+			qed_sriov_disable(edev, true);
+#endif
+		qed_nic_stop(edev);
+	}
 
 	qed_nic_reset(edev);
 
-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 08/10] qede: Add attention support
  2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
                   ` (5 preceding siblings ...)
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 07/10] qede: Add SRIOV support Rasesh Mody
@ 2016-03-10 13:45 ` Rasesh Mody
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 09/10] qede: Add DCBX support Rasesh Mody
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 10/10] qede: enable PMD build Rasesh Mody
  8 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 13:45 UTC (permalink / raw)
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/base/ecore_attn_values.h |13287 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_dev.c         |   51 +
 drivers/net/qede/base/ecore_int.c         | 1131 +++
 3 files changed, 14469 insertions(+)
 create mode 100644 drivers/net/qede/base/ecore_attn_values.h

diff --git a/drivers/net/qede/base/ecore_attn_values.h b/drivers/net/qede/base/ecore_attn_values.h
new file mode 100644
index 0000000..8bd2ba7
--- /dev/null
+++ b/drivers/net/qede/base/ecore_attn_values.h
@@ -0,0 +1,13287 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ATTN_VALUES_H__
+#define __ATTN_VALUES_H__
+
+#ifndef __PREVENT_INT_ATTN__
+
+/* HW Attention register */
+struct attn_hw_reg {
+	u16 reg_idx;		/* Index of this register in its block */
+	u16 num_of_bits;	/* number of valid attention bits */
+	const u16 *bit_attn_idx;	/* attention index per valid bit */
+	u32 sts_addr;		/* Address of the STS register */
+	u32 sts_clr_addr;	/* Address of the STS_CLR register */
+	u32 sts_wr_addr;	/* Address of the STS_WR register */
+	u32 mask_addr;		/* Address of the MASK register */
+};
+
+/* HW block attention registers */
+struct attn_hw_regs {
+	u16 num_of_int_regs;	/* Number of interrupt regs */
+	u16 num_of_prty_regs;	/* Number of parity regs */
+	struct attn_hw_reg **int_regs;	/* interrupt regs */
+	struct attn_hw_reg **prty_regs;	/* parity regs */
+};
+
+/* HW block attention registers */
+struct attn_hw_block {
+	const char *name;	/* Block name */
+	const char **int_desc;	/* Array of interrupt attention descriptions */
+	const char **prty_desc;	/* Array of parity attention descriptions */
+	struct attn_hw_regs chip_regs[3];	/* attention regs per chip */
+};
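+
+/* Illustrative sketch only, not part of this patch: how the tables below
+ * are meant to be consumed. For each attn_hw_reg of a block, read the STS
+ * register and translate every set bit through bit_attn_idx into its
+ * description string. The print_block_attn() name is made up and REG_RD()
+ * stands in for the driver's register-read helper; desc arrays are
+ * OSAL_NULL unless ATTN_DESC is defined, hence the guard.
+ */
+#if 0	/* example only, not compiled */
+static void print_block_attn(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     const struct attn_hw_block *blk,
+			     int chip, bool parity)
+{
+	const struct attn_hw_regs *regs = &blk->chip_regs[chip];
+	struct attn_hw_reg **arr = parity ? regs->prty_regs : regs->int_regs;
+	const char **desc = parity ? blk->prty_desc : blk->int_desc;
+	u16 n = parity ? regs->num_of_prty_regs : regs->num_of_int_regs;
+	u16 i, bit;
+
+	for (i = 0; i < n; i++) {
+		u32 sts = REG_RD(p_hwfn, p_ptt, arr[i]->sts_addr);
+
+		for (bit = 0; bit < arr[i]->num_of_bits; bit++)
+			if ((sts & (1 << bit)) && desc != OSAL_NULL)
+				DP_NOTICE(p_hwfn, false, "%s: %s\n",
+					  blk->name,
+					  desc[arr[i]->bit_attn_idx[bit]]);
+	}
+}
+#endif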
+
+#ifdef ATTN_DESC
+static const char *grc_int_attn_desc[5] = {
+	"grc_address_error",
+	"grc_timeout_event",
+	"grc_global_reserved_address",
+	"grc_path_isolation_error",
+	"grc_trace_fifo_valid_data",
+};
+#else
+#define grc_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 grc_int0_bb_a0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg grc_int0_bb_a0 = {
+	0, 4, grc_int0_bb_a0_attn_idx, 0x50180, 0x5018c, 0x50188, 0x50184
+};
+
+static struct attn_hw_reg *grc_int_bb_a0_regs[1] = {
+	&grc_int0_bb_a0,
+};
+
+static const u16 grc_int0_bb_b0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg grc_int0_bb_b0 = {
+	0, 4, grc_int0_bb_b0_attn_idx, 0x50180, 0x5018c, 0x50188, 0x50184
+};
+
+static struct attn_hw_reg *grc_int_bb_b0_regs[1] = {
+	&grc_int0_bb_b0,
+};
+
+static const u16 grc_int0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg grc_int0_k2 = {
+	0, 5, grc_int0_k2_attn_idx, 0x50180, 0x5018c, 0x50188, 0x50184
+};
+
+static struct attn_hw_reg *grc_int_k2_regs[1] = {
+	&grc_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *grc_prty_attn_desc[3] = {
+	"grc_mem003_i_mem_prty",
+	"grc_mem002_i_mem_prty",
+	"grc_mem001_i_mem_prty",
+};
+#else
+#define grc_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 grc_prty1_bb_a0_attn_idx[2] = {
+	1, 2,
+};
+
+static struct attn_hw_reg grc_prty1_bb_a0 = {
+	0, 2, grc_prty1_bb_a0_attn_idx, 0x50200, 0x5020c, 0x50208, 0x50204
+};
+
+static struct attn_hw_reg *grc_prty_bb_a0_regs[1] = {
+	&grc_prty1_bb_a0,
+};
+
+static const u16 grc_prty1_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg grc_prty1_bb_b0 = {
+	0, 2, grc_prty1_bb_b0_attn_idx, 0x50200, 0x5020c, 0x50208, 0x50204
+};
+
+static struct attn_hw_reg *grc_prty_bb_b0_regs[1] = {
+	&grc_prty1_bb_b0,
+};
+
+static const u16 grc_prty1_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg grc_prty1_k2 = {
+	0, 2, grc_prty1_k2_attn_idx, 0x50200, 0x5020c, 0x50208, 0x50204
+};
+
+static struct attn_hw_reg *grc_prty_k2_regs[1] = {
+	&grc_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *miscs_int_attn_desc[14] = {
+	"miscs_address_error",
+	"miscs_generic_sw",
+	"miscs_cnig_interrupt",
+	"miscs_opte_dorq_fifo_err_eng1",
+	"miscs_opte_dorq_fifo_err_eng0",
+	"miscs_opte_dbg_fifo_err_eng1",
+	"miscs_opte_dbg_fifo_err_eng0",
+	"miscs_opte_btb_if1_fifo_err_eng1",
+	"miscs_opte_btb_if1_fifo_err_eng0",
+	"miscs_opte_btb_if0_fifo_err_eng1",
+	"miscs_opte_btb_if0_fifo_err_eng0",
+	"miscs_opte_btb_sop_fifo_err_eng1",
+	"miscs_opte_btb_sop_fifo_err_eng0",
+	"miscs_opte_storm_fifo_err_eng0",
+};
+#else
+#define miscs_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 miscs_int0_bb_a0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg miscs_int0_bb_a0 = {
+	0, 2, miscs_int0_bb_a0_attn_idx, 0x9180, 0x918c, 0x9188, 0x9184
+};
+
+static const u16 miscs_int1_bb_a0_attn_idx[11] = {
+	3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg miscs_int1_bb_a0 = {
+	1, 11, miscs_int1_bb_a0_attn_idx, 0x9190, 0x919c, 0x9198, 0x9194
+};
+
+static struct attn_hw_reg *miscs_int_bb_a0_regs[2] = {
+	&miscs_int0_bb_a0, &miscs_int1_bb_a0,
+};
+
+static const u16 miscs_int0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg miscs_int0_bb_b0 = {
+	0, 3, miscs_int0_bb_b0_attn_idx, 0x9180, 0x918c, 0x9188, 0x9184
+};
+
+static const u16 miscs_int1_bb_b0_attn_idx[11] = {
+	3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg miscs_int1_bb_b0 = {
+	1, 11, miscs_int1_bb_b0_attn_idx, 0x9190, 0x919c, 0x9198, 0x9194
+};
+
+static struct attn_hw_reg *miscs_int_bb_b0_regs[2] = {
+	&miscs_int0_bb_b0, &miscs_int1_bb_b0,
+};
+
+static const u16 miscs_int0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg miscs_int0_k2 = {
+	0, 3, miscs_int0_k2_attn_idx, 0x9180, 0x918c, 0x9188, 0x9184
+};
+
+static struct attn_hw_reg *miscs_int_k2_regs[1] = {
+	&miscs_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *miscs_prty_attn_desc[1] = {
+	"miscs_cnig_parity",
+};
+#else
+#define miscs_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 miscs_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg miscs_prty0_bb_b0 = {
+	0, 1, miscs_prty0_bb_b0_attn_idx, 0x91a0, 0x91ac, 0x91a8, 0x91a4
+};
+
+static struct attn_hw_reg *miscs_prty_bb_b0_regs[1] = {
+	&miscs_prty0_bb_b0,
+};
+
+static const u16 miscs_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg miscs_prty0_k2 = {
+	0, 1, miscs_prty0_k2_attn_idx, 0x91a0, 0x91ac, 0x91a8, 0x91a4
+};
+
+static struct attn_hw_reg *miscs_prty_k2_regs[1] = {
+	&miscs_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *misc_int_attn_desc[1] = {
+	"misc_address_error",
+};
+#else
+#define misc_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 misc_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg misc_int0_bb_a0 = {
+	0, 1, misc_int0_bb_a0_attn_idx, 0x8180, 0x818c, 0x8188, 0x8184
+};
+
+static struct attn_hw_reg *misc_int_bb_a0_regs[1] = {
+	&misc_int0_bb_a0,
+};
+
+static const u16 misc_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg misc_int0_bb_b0 = {
+	0, 1, misc_int0_bb_b0_attn_idx, 0x8180, 0x818c, 0x8188, 0x8184
+};
+
+static struct attn_hw_reg *misc_int_bb_b0_regs[1] = {
+	&misc_int0_bb_b0,
+};
+
+static const u16 misc_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg misc_int0_k2 = {
+	0, 1, misc_int0_k2_attn_idx, 0x8180, 0x818c, 0x8188, 0x8184
+};
+
+static struct attn_hw_reg *misc_int_k2_regs[1] = {
+	&misc_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pglue_b_int_attn_desc[24] = {
+	"pglue_b_address_error",
+	"pglue_b_incorrect_rcv_behavior",
+	"pglue_b_was_error_attn",
+	"pglue_b_vf_length_violation_attn",
+	"pglue_b_vf_grc_space_violation_attn",
+	"pglue_b_tcpl_error_attn",
+	"pglue_b_tcpl_in_two_rcbs_attn",
+	"pglue_b_cssnoop_fifo_overflow",
+	"pglue_b_tcpl_translation_size_different",
+	"pglue_b_pcie_rx_l0s_timeout",
+	"pglue_b_master_zlr_attn",
+	"pglue_b_admin_window_violation_attn",
+	"pglue_b_out_of_range_function_in_pretend",
+	"pglue_b_illegal_address",
+	"pglue_b_pgl_cpl_err",
+	"pglue_b_pgl_txw_of",
+	"pglue_b_pgl_cpl_aft",
+	"pglue_b_pgl_cpl_of",
+	"pglue_b_pgl_cpl_ecrc",
+	"pglue_b_pgl_pcie_attn",
+	"pglue_b_pgl_read_blocked",
+	"pglue_b_pgl_write_blocked",
+	"pglue_b_vf_ilt_err",
+	"pglue_b_rxobffexception_attn",
+};
+#else
+#define pglue_b_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pglue_b_int0_bb_a0_attn_idx[23] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22,
+};
+
+static struct attn_hw_reg pglue_b_int0_bb_a0 = {
+	0, 23, pglue_b_int0_bb_a0_attn_idx, 0x2a8180, 0x2a818c, 0x2a8188,
+	0x2a8184
+};
+
+static struct attn_hw_reg *pglue_b_int_bb_a0_regs[1] = {
+	&pglue_b_int0_bb_a0,
+};
+
+static const u16 pglue_b_int0_bb_b0_attn_idx[23] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22,
+};
+
+static struct attn_hw_reg pglue_b_int0_bb_b0 = {
+	0, 23, pglue_b_int0_bb_b0_attn_idx, 0x2a8180, 0x2a818c, 0x2a8188,
+	0x2a8184
+};
+
+static struct attn_hw_reg *pglue_b_int_bb_b0_regs[1] = {
+	&pglue_b_int0_bb_b0,
+};
+
+static const u16 pglue_b_int0_k2_attn_idx[24] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23,
+};
+
+static struct attn_hw_reg pglue_b_int0_k2 = {
+	0, 24, pglue_b_int0_k2_attn_idx, 0x2a8180, 0x2a818c, 0x2a8188, 0x2a8184
+};
+
+static struct attn_hw_reg *pglue_b_int_k2_regs[1] = {
+	&pglue_b_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pglue_b_prty_attn_desc[35] = {
+	"pglue_b_datapath_registers",
+	"pglue_b_mem027_i_mem_prty",
+	"pglue_b_mem007_i_mem_prty",
+	"pglue_b_mem009_i_mem_prty",
+	"pglue_b_mem010_i_mem_prty",
+	"pglue_b_mem008_i_mem_prty",
+	"pglue_b_mem022_i_mem_prty",
+	"pglue_b_mem023_i_mem_prty",
+	"pglue_b_mem024_i_mem_prty",
+	"pglue_b_mem025_i_mem_prty",
+	"pglue_b_mem004_i_mem_prty",
+	"pglue_b_mem005_i_mem_prty",
+	"pglue_b_mem011_i_mem_prty",
+	"pglue_b_mem016_i_mem_prty",
+	"pglue_b_mem017_i_mem_prty",
+	"pglue_b_mem012_i_mem_prty",
+	"pglue_b_mem013_i_mem_prty",
+	"pglue_b_mem014_i_mem_prty",
+	"pglue_b_mem015_i_mem_prty",
+	"pglue_b_mem018_i_mem_prty",
+	"pglue_b_mem020_i_mem_prty",
+	"pglue_b_mem021_i_mem_prty",
+	"pglue_b_mem019_i_mem_prty",
+	"pglue_b_mem026_i_mem_prty",
+	"pglue_b_mem006_i_mem_prty",
+	"pglue_b_mem003_i_mem_prty",
+	"pglue_b_mem002_i_mem_prty_0",
+	"pglue_b_mem002_i_mem_prty_1",
+	"pglue_b_mem002_i_mem_prty_2",
+	"pglue_b_mem002_i_mem_prty_3",
+	"pglue_b_mem002_i_mem_prty_4",
+	"pglue_b_mem002_i_mem_prty_5",
+	"pglue_b_mem002_i_mem_prty_6",
+	"pglue_b_mem002_i_mem_prty_7",
+	"pglue_b_mem001_i_mem_prty",
+};
+#else
+#define pglue_b_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pglue_b_prty1_bb_a0_attn_idx[22] = {
+	2, 3, 4, 5, 10, 11, 12, 15, 16, 17, 18, 24, 25, 26, 27, 28, 29, 30, 31,
+	32, 33, 34,
+};
+
+static struct attn_hw_reg pglue_b_prty1_bb_a0 = {
+	0, 22, pglue_b_prty1_bb_a0_attn_idx, 0x2a8200, 0x2a820c, 0x2a8208,
+	0x2a8204
+};
+
+static struct attn_hw_reg *pglue_b_prty_bb_a0_regs[1] = {
+	&pglue_b_prty1_bb_a0,
+};
+
+static const u16 pglue_b_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pglue_b_prty0_bb_b0 = {
+	0, 1, pglue_b_prty0_bb_b0_attn_idx, 0x2a8190, 0x2a819c, 0x2a8198,
+	0x2a8194
+};
+
+static const u16 pglue_b_prty1_bb_b0_attn_idx[22] = {
+	2, 3, 4, 5, 10, 11, 12, 15, 16, 17, 18, 24, 25, 26, 27, 28, 29, 30, 31,
+	32, 33, 34,
+};
+
+static struct attn_hw_reg pglue_b_prty1_bb_b0 = {
+	1, 22, pglue_b_prty1_bb_b0_attn_idx, 0x2a8200, 0x2a820c, 0x2a8208,
+	0x2a8204
+};
+
+static struct attn_hw_reg *pglue_b_prty_bb_b0_regs[2] = {
+	&pglue_b_prty0_bb_b0, &pglue_b_prty1_bb_b0,
+};
+
+static const u16 pglue_b_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pglue_b_prty0_k2 = {
+	0, 1, pglue_b_prty0_k2_attn_idx, 0x2a8190, 0x2a819c, 0x2a8198, 0x2a8194
+};
+
+static const u16 pglue_b_prty1_k2_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pglue_b_prty1_k2 = {
+	1, 31, pglue_b_prty1_k2_attn_idx, 0x2a8200, 0x2a820c, 0x2a8208,
+	0x2a8204
+};
+
+static const u16 pglue_b_prty2_k2_attn_idx[3] = {
+	32, 33, 34,
+};
+
+static struct attn_hw_reg pglue_b_prty2_k2 = {
+	2, 3, pglue_b_prty2_k2_attn_idx, 0x2a8210, 0x2a821c, 0x2a8218, 0x2a8214
+};
+
+static struct attn_hw_reg *pglue_b_prty_k2_regs[3] = {
+	&pglue_b_prty0_k2, &pglue_b_prty1_k2, &pglue_b_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cnig_int_attn_desc[10] = {
+	"cnig_address_error",
+	"cnig_tx_illegal_sop_port0",
+	"cnig_tx_illegal_sop_port1",
+	"cnig_tx_illegal_sop_port2",
+	"cnig_tx_illegal_sop_port3",
+	"cnig_tdm_lane_0_bandwith_exceed",
+	"cnig_tdm_lane_1_bandwith_exceed",
+	"cnig_pmeg_intr",
+	"cnig_pmfc_intr",
+	"cnig_fifo_error",
+};
+#else
+#define cnig_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 cnig_int0_bb_a0_attn_idx[4] = {
+	0, 7, 8, 9,
+};
+
+static struct attn_hw_reg cnig_int0_bb_a0 = {
+	0, 4, cnig_int0_bb_a0_attn_idx, 0x2182e8, 0x2182f4, 0x2182f0, 0x2182ec
+};
+
+static struct attn_hw_reg *cnig_int_bb_a0_regs[1] = {
+	&cnig_int0_bb_a0,
+};
+
+static const u16 cnig_int0_bb_b0_attn_idx[6] = {
+	0, 1, 3, 7, 8, 9,
+};
+
+static struct attn_hw_reg cnig_int0_bb_b0 = {
+	0, 6, cnig_int0_bb_b0_attn_idx, 0x2182e8, 0x2182f4, 0x2182f0, 0x2182ec
+};
+
+static struct attn_hw_reg *cnig_int_bb_b0_regs[1] = {
+	&cnig_int0_bb_b0,
+};
+
+static const u16 cnig_int0_k2_attn_idx[7] = {
+	0, 1, 2, 3, 4, 5, 6,
+};
+
+static struct attn_hw_reg cnig_int0_k2 = {
+	0, 7, cnig_int0_k2_attn_idx, 0x218218, 0x218224, 0x218220, 0x21821c
+};
+
+static struct attn_hw_reg *cnig_int_k2_regs[1] = {
+	&cnig_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cnig_prty_attn_desc[3] = {
+	"cnig_unused_0",
+	"cnig_datapath_tx",
+	"cnig_datapath_rx",
+};
+#else
+#define cnig_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 cnig_prty0_bb_b0_attn_idx[2] = {
+	1, 2,
+};
+
+static struct attn_hw_reg cnig_prty0_bb_b0 = {
+	0, 2, cnig_prty0_bb_b0_attn_idx, 0x218348, 0x218354, 0x218350, 0x21834c
+};
+
+static struct attn_hw_reg *cnig_prty_bb_b0_regs[1] = {
+	&cnig_prty0_bb_b0,
+};
+
+static const u16 cnig_prty0_k2_attn_idx[1] = {
+	1,
+};
+
+static struct attn_hw_reg cnig_prty0_k2 = {
+	0, 1, cnig_prty0_k2_attn_idx, 0x21822c, 0x218238, 0x218234, 0x218230
+};
+
+static struct attn_hw_reg *cnig_prty_k2_regs[1] = {
+	&cnig_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cpmu_int_attn_desc[1] = {
+	"cpmu_address_error",
+};
+#else
+#define cpmu_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 cpmu_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg cpmu_int0_bb_a0 = {
+	0, 1, cpmu_int0_bb_a0_attn_idx, 0x303e0, 0x303ec, 0x303e8, 0x303e4
+};
+
+static struct attn_hw_reg *cpmu_int_bb_a0_regs[1] = {
+	&cpmu_int0_bb_a0,
+};
+
+static const u16 cpmu_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg cpmu_int0_bb_b0 = {
+	0, 1, cpmu_int0_bb_b0_attn_idx, 0x303e0, 0x303ec, 0x303e8, 0x303e4
+};
+
+static struct attn_hw_reg *cpmu_int_bb_b0_regs[1] = {
+	&cpmu_int0_bb_b0,
+};
+
+static const u16 cpmu_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg cpmu_int0_k2 = {
+	0, 1, cpmu_int0_k2_attn_idx, 0x303e0, 0x303ec, 0x303e8, 0x303e4
+};
+
+static struct attn_hw_reg *cpmu_int_k2_regs[1] = {
+	&cpmu_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ncsi_int_attn_desc[1] = {
+	"ncsi_address_error",
+};
+#else
+#define ncsi_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ncsi_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_int0_bb_a0 = {
+	0, 1, ncsi_int0_bb_a0_attn_idx, 0x404cc, 0x404d8, 0x404d4, 0x404d0
+};
+
+static struct attn_hw_reg *ncsi_int_bb_a0_regs[1] = {
+	&ncsi_int0_bb_a0,
+};
+
+static const u16 ncsi_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_int0_bb_b0 = {
+	0, 1, ncsi_int0_bb_b0_attn_idx, 0x404cc, 0x404d8, 0x404d4, 0x404d0
+};
+
+static struct attn_hw_reg *ncsi_int_bb_b0_regs[1] = {
+	&ncsi_int0_bb_b0,
+};
+
+static const u16 ncsi_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_int0_k2 = {
+	0, 1, ncsi_int0_k2_attn_idx, 0x404cc, 0x404d8, 0x404d4, 0x404d0
+};
+
+static struct attn_hw_reg *ncsi_int_k2_regs[1] = {
+	&ncsi_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ncsi_prty_attn_desc[1] = {
+	"ncsi_mem002_i_mem_prty",
+};
+#else
+#define ncsi_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ncsi_prty1_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_prty1_bb_a0 = {
+	0, 1, ncsi_prty1_bb_a0_attn_idx, 0x40000, 0x4000c, 0x40008, 0x40004
+};
+
+static struct attn_hw_reg *ncsi_prty_bb_a0_regs[1] = {
+	&ncsi_prty1_bb_a0,
+};
+
+static const u16 ncsi_prty1_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_prty1_bb_b0 = {
+	0, 1, ncsi_prty1_bb_b0_attn_idx, 0x40000, 0x4000c, 0x40008, 0x40004
+};
+
+static struct attn_hw_reg *ncsi_prty_bb_b0_regs[1] = {
+	&ncsi_prty1_bb_b0,
+};
+
+static const u16 ncsi_prty1_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ncsi_prty1_k2 = {
+	0, 1, ncsi_prty1_k2_attn_idx, 0x40000, 0x4000c, 0x40008, 0x40004
+};
+
+static struct attn_hw_reg *ncsi_prty_k2_regs[1] = {
+	&ncsi_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *opte_prty_attn_desc[12] = {
+	"opte_mem009_i_mem_prty",
+	"opte_mem010_i_mem_prty",
+	"opte_mem005_i_mem_prty",
+	"opte_mem006_i_mem_prty",
+	"opte_mem007_i_mem_prty",
+	"opte_mem008_i_mem_prty",
+	"opte_mem001_i_mem_prty",
+	"opte_mem002_i_mem_prty",
+	"opte_mem003_i_mem_prty",
+	"opte_mem004_i_mem_prty",
+	"opte_mem011_i_mem_prty",
+	"opte_datapath_parity_error",
+};
+#else
+#define opte_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 opte_prty1_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg opte_prty1_bb_a0 = {
+	0, 11, opte_prty1_bb_a0_attn_idx, 0x53000, 0x5300c, 0x53008, 0x53004
+};
+
+static struct attn_hw_reg *opte_prty_bb_a0_regs[1] = {
+	&opte_prty1_bb_a0,
+};
+
+static const u16 opte_prty1_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg opte_prty1_bb_b0 = {
+	0, 11, opte_prty1_bb_b0_attn_idx, 0x53000, 0x5300c, 0x53008, 0x53004
+};
+
+static const u16 opte_prty0_bb_b0_attn_idx[1] = {
+	11,
+};
+
+static struct attn_hw_reg opte_prty0_bb_b0 = {
+	1, 1, opte_prty0_bb_b0_attn_idx, 0x53208, 0x53214, 0x53210, 0x5320c
+};
+
+static struct attn_hw_reg *opte_prty_bb_b0_regs[2] = {
+	&opte_prty1_bb_b0, &opte_prty0_bb_b0,
+};
+
+static const u16 opte_prty1_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg opte_prty1_k2 = {
+	0, 11, opte_prty1_k2_attn_idx, 0x53000, 0x5300c, 0x53008, 0x53004
+};
+
+static const u16 opte_prty0_k2_attn_idx[1] = {
+	11,
+};
+
+static struct attn_hw_reg opte_prty0_k2 = {
+	1, 1, opte_prty0_k2_attn_idx, 0x53208, 0x53214, 0x53210, 0x5320c
+};
+
+static struct attn_hw_reg *opte_prty_k2_regs[2] = {
+	&opte_prty1_k2, &opte_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *bmb_int_attn_desc[297] = {
+	"bmb_address_error",
+	"bmb_rc_pkt0_rls_error",
+	"bmb_unused_0",
+	"bmb_rc_pkt0_protocol_error",
+	"bmb_rc_pkt1_rls_error",
+	"bmb_unused_1",
+	"bmb_rc_pkt1_protocol_error",
+	"bmb_rc_pkt2_rls_error",
+	"bmb_unused_2",
+	"bmb_rc_pkt2_protocol_error",
+	"bmb_rc_pkt3_rls_error",
+	"bmb_unused_3",
+	"bmb_rc_pkt3_protocol_error",
+	"bmb_rc_sop_req_tc_port_error",
+	"bmb_unused_4",
+	"bmb_wc0_protocol_error",
+	"bmb_wc1_protocol_error",
+	"bmb_wc2_protocol_error",
+	"bmb_wc3_protocol_error",
+	"bmb_unused_5",
+	"bmb_ll_blk_error",
+	"bmb_unused_6",
+	"bmb_mac0_fc_cnt_error",
+	"bmb_ll_arb_calc_error",
+	"bmb_wc0_inp_fifo_error",
+	"bmb_wc0_sop_fifo_error",
+	"bmb_wc0_len_fifo_error",
+	"bmb_wc0_queue_fifo_error",
+	"bmb_wc0_free_point_fifo_error",
+	"bmb_wc0_next_point_fifo_error",
+	"bmb_wc0_strt_fifo_error",
+	"bmb_wc0_second_dscr_fifo_error",
+	"bmb_wc0_pkt_avail_fifo_error",
+	"bmb_wc0_cos_cnt_fifo_error",
+	"bmb_wc0_notify_fifo_error",
+	"bmb_wc0_ll_req_fifo_error",
+	"bmb_wc0_ll_pa_cnt_error",
+	"bmb_wc0_bb_pa_cnt_error",
+	"bmb_wc1_inp_fifo_error",
+	"bmb_wc1_sop_fifo_error",
+	"bmb_wc1_queue_fifo_error",
+	"bmb_wc1_free_point_fifo_error",
+	"bmb_wc1_next_point_fifo_error",
+	"bmb_wc1_strt_fifo_error",
+	"bmb_wc1_second_dscr_fifo_error",
+	"bmb_wc1_pkt_avail_fifo_error",
+	"bmb_wc1_cos_cnt_fifo_error",
+	"bmb_wc1_notify_fifo_error",
+	"bmb_wc1_ll_req_fifo_error",
+	"bmb_wc1_ll_pa_cnt_error",
+	"bmb_wc1_bb_pa_cnt_error",
+	"bmb_wc2_inp_fifo_error",
+	"bmb_wc2_sop_fifo_error",
+	"bmb_wc2_queue_fifo_error",
+	"bmb_wc2_free_point_fifo_error",
+	"bmb_wc2_next_point_fifo_error",
+	"bmb_wc2_strt_fifo_error",
+	"bmb_wc2_second_dscr_fifo_error",
+	"bmb_wc2_pkt_avail_fifo_error",
+	"bmb_wc2_cos_cnt_fifo_error",
+	"bmb_wc2_notify_fifo_error",
+	"bmb_wc2_ll_req_fifo_error",
+	"bmb_wc2_ll_pa_cnt_error",
+	"bmb_wc2_bb_pa_cnt_error",
+	"bmb_wc3_inp_fifo_error",
+	"bmb_wc3_sop_fifo_error",
+	"bmb_wc3_queue_fifo_error",
+	"bmb_wc3_free_point_fifo_error",
+	"bmb_wc3_next_point_fifo_error",
+	"bmb_wc3_strt_fifo_error",
+	"bmb_wc3_second_dscr_fifo_error",
+	"bmb_wc3_pkt_avail_fifo_error",
+	"bmb_wc3_cos_cnt_fifo_error",
+	"bmb_wc3_notify_fifo_error",
+	"bmb_wc3_ll_req_fifo_error",
+	"bmb_wc3_ll_pa_cnt_error",
+	"bmb_wc3_bb_pa_cnt_error",
+	"bmb_rc_pkt0_side_fifo_error",
+	"bmb_rc_pkt0_req_fifo_error",
+	"bmb_rc_pkt0_blk_fifo_error",
+	"bmb_rc_pkt0_rls_left_fifo_error",
+	"bmb_rc_pkt0_strt_ptr_fifo_error",
+	"bmb_rc_pkt0_second_ptr_fifo_error",
+	"bmb_rc_pkt0_rsp_fifo_error",
+	"bmb_rc_pkt0_dscr_fifo_error",
+	"bmb_rc_pkt1_side_fifo_error",
+	"bmb_rc_pkt1_req_fifo_error",
+	"bmb_rc_pkt1_blk_fifo_error",
+	"bmb_rc_pkt1_rls_left_fifo_error",
+	"bmb_rc_pkt1_strt_ptr_fifo_error",
+	"bmb_rc_pkt1_second_ptr_fifo_error",
+	"bmb_rc_pkt1_rsp_fifo_error",
+	"bmb_rc_pkt1_dscr_fifo_error",
+	"bmb_rc_pkt2_side_fifo_error",
+	"bmb_rc_pkt2_req_fifo_error",
+	"bmb_rc_pkt2_blk_fifo_error",
+	"bmb_rc_pkt2_rls_left_fifo_error",
+	"bmb_rc_pkt2_strt_ptr_fifo_error",
+	"bmb_rc_pkt2_second_ptr_fifo_error",
+	"bmb_rc_pkt2_rsp_fifo_error",
+	"bmb_rc_pkt2_dscr_fifo_error",
+	"bmb_rc_pkt3_side_fifo_error",
+	"bmb_rc_pkt3_req_fifo_error",
+	"bmb_rc_pkt3_blk_fifo_error",
+	"bmb_rc_pkt3_rls_left_fifo_error",
+	"bmb_rc_pkt3_strt_ptr_fifo_error",
+	"bmb_rc_pkt3_second_ptr_fifo_error",
+	"bmb_rc_pkt3_rsp_fifo_error",
+	"bmb_rc_pkt3_dscr_fifo_error",
+	"bmb_rc_sop_strt_fifo_error",
+	"bmb_rc_sop_req_fifo_error",
+	"bmb_rc_sop_dscr_fifo_error",
+	"bmb_rc_sop_queue_fifo_error",
+	"bmb_ll_arb_rls_fifo_error",
+	"bmb_ll_arb_prefetch_fifo_error",
+	"bmb_rc_pkt0_rls_fifo_error",
+	"bmb_rc_pkt1_rls_fifo_error",
+	"bmb_rc_pkt2_rls_fifo_error",
+	"bmb_rc_pkt3_rls_fifo_error",
+	"bmb_rc_pkt4_rls_fifo_error",
+	"bmb_rc_pkt5_rls_fifo_error",
+	"bmb_rc_pkt6_rls_fifo_error",
+	"bmb_rc_pkt7_rls_fifo_error",
+	"bmb_rc_pkt8_rls_fifo_error",
+	"bmb_rc_pkt9_rls_fifo_error",
+	"bmb_rc_pkt4_rls_error",
+	"bmb_rc_pkt4_protocol_error",
+	"bmb_rc_pkt4_side_fifo_error",
+	"bmb_rc_pkt4_req_fifo_error",
+	"bmb_rc_pkt4_blk_fifo_error",
+	"bmb_rc_pkt4_rls_left_fifo_error",
+	"bmb_rc_pkt4_strt_ptr_fifo_error",
+	"bmb_rc_pkt4_second_ptr_fifo_error",
+	"bmb_rc_pkt4_rsp_fifo_error",
+	"bmb_rc_pkt4_dscr_fifo_error",
+	"bmb_rc_pkt5_rls_error",
+	"bmb_rc_pkt5_protocol_error",
+	"bmb_rc_pkt5_side_fifo_error",
+	"bmb_rc_pkt5_req_fifo_error",
+	"bmb_rc_pkt5_blk_fifo_error",
+	"bmb_rc_pkt5_rls_left_fifo_error",
+	"bmb_rc_pkt5_strt_ptr_fifo_error",
+	"bmb_rc_pkt5_second_ptr_fifo_error",
+	"bmb_rc_pkt5_rsp_fifo_error",
+	"bmb_rc_pkt5_dscr_fifo_error",
+	"bmb_rc_pkt6_rls_error",
+	"bmb_rc_pkt6_protocol_error",
+	"bmb_rc_pkt6_side_fifo_error",
+	"bmb_rc_pkt6_req_fifo_error",
+	"bmb_rc_pkt6_blk_fifo_error",
+	"bmb_rc_pkt6_rls_left_fifo_error",
+	"bmb_rc_pkt6_strt_ptr_fifo_error",
+	"bmb_rc_pkt6_second_ptr_fifo_error",
+	"bmb_rc_pkt6_rsp_fifo_error",
+	"bmb_rc_pkt6_dscr_fifo_error",
+	"bmb_rc_pkt7_rls_error",
+	"bmb_rc_pkt7_protocol_error",
+	"bmb_rc_pkt7_side_fifo_error",
+	"bmb_rc_pkt7_req_fifo_error",
+	"bmb_rc_pkt7_blk_fifo_error",
+	"bmb_rc_pkt7_rls_left_fifo_error",
+	"bmb_rc_pkt7_strt_ptr_fifo_error",
+	"bmb_rc_pkt7_second_ptr_fifo_error",
+	"bmb_rc_pkt7_rsp_fifo_error",
+	"bmb_packet_available_sync_fifo_push_error",
+	"bmb_rc_pkt8_rls_error",
+	"bmb_rc_pkt8_protocol_error",
+	"bmb_rc_pkt8_side_fifo_error",
+	"bmb_rc_pkt8_req_fifo_error",
+	"bmb_rc_pkt8_blk_fifo_error",
+	"bmb_rc_pkt8_rls_left_fifo_error",
+	"bmb_rc_pkt8_strt_ptr_fifo_error",
+	"bmb_rc_pkt8_second_ptr_fifo_error",
+	"bmb_rc_pkt8_rsp_fifo_error",
+	"bmb_rc_pkt8_dscr_fifo_error",
+	"bmb_rc_pkt9_rls_error",
+	"bmb_rc_pkt9_protocol_error",
+	"bmb_rc_pkt9_side_fifo_error",
+	"bmb_rc_pkt9_req_fifo_error",
+	"bmb_rc_pkt9_blk_fifo_error",
+	"bmb_rc_pkt9_rls_left_fifo_error",
+	"bmb_rc_pkt9_strt_ptr_fifo_error",
+	"bmb_rc_pkt9_second_ptr_fifo_error",
+	"bmb_rc_pkt9_rsp_fifo_error",
+	"bmb_rc_pkt9_dscr_fifo_error",
+	"bmb_wc4_protocol_error",
+	"bmb_wc5_protocol_error",
+	"bmb_wc6_protocol_error",
+	"bmb_wc7_protocol_error",
+	"bmb_wc8_protocol_error",
+	"bmb_wc9_protocol_error",
+	"bmb_wc4_inp_fifo_error",
+	"bmb_wc4_sop_fifo_error",
+	"bmb_wc4_queue_fifo_error",
+	"bmb_wc4_free_point_fifo_error",
+	"bmb_wc4_next_point_fifo_error",
+	"bmb_wc4_strt_fifo_error",
+	"bmb_wc4_second_dscr_fifo_error",
+	"bmb_wc4_pkt_avail_fifo_error",
+	"bmb_wc4_cos_cnt_fifo_error",
+	"bmb_wc4_notify_fifo_error",
+	"bmb_wc4_ll_req_fifo_error",
+	"bmb_wc4_ll_pa_cnt_error",
+	"bmb_wc4_bb_pa_cnt_error",
+	"bmb_wc5_inp_fifo_error",
+	"bmb_wc5_sop_fifo_error",
+	"bmb_wc5_queue_fifo_error",
+	"bmb_wc5_free_point_fifo_error",
+	"bmb_wc5_next_point_fifo_error",
+	"bmb_wc5_strt_fifo_error",
+	"bmb_wc5_second_dscr_fifo_error",
+	"bmb_wc5_pkt_avail_fifo_error",
+	"bmb_wc5_cos_cnt_fifo_error",
+	"bmb_wc5_notify_fifo_error",
+	"bmb_wc5_ll_req_fifo_error",
+	"bmb_wc5_ll_pa_cnt_error",
+	"bmb_wc5_bb_pa_cnt_error",
+	"bmb_wc6_inp_fifo_error",
+	"bmb_wc6_sop_fifo_error",
+	"bmb_wc6_queue_fifo_error",
+	"bmb_wc6_free_point_fifo_error",
+	"bmb_wc6_next_point_fifo_error",
+	"bmb_wc6_strt_fifo_error",
+	"bmb_wc6_second_dscr_fifo_error",
+	"bmb_wc6_pkt_avail_fifo_error",
+	"bmb_wc6_cos_cnt_fifo_error",
+	"bmb_wc6_notify_fifo_error",
+	"bmb_wc6_ll_req_fifo_error",
+	"bmb_wc6_ll_pa_cnt_error",
+	"bmb_wc6_bb_pa_cnt_error",
+	"bmb_wc7_inp_fifo_error",
+	"bmb_wc7_sop_fifo_error",
+	"bmb_wc7_queue_fifo_error",
+	"bmb_wc7_free_point_fifo_error",
+	"bmb_wc7_next_point_fifo_error",
+	"bmb_wc7_strt_fifo_error",
+	"bmb_wc7_second_dscr_fifo_error",
+	"bmb_wc7_pkt_avail_fifo_error",
+	"bmb_wc7_cos_cnt_fifo_error",
+	"bmb_wc7_notify_fifo_error",
+	"bmb_wc7_ll_req_fifo_error",
+	"bmb_wc7_ll_pa_cnt_error",
+	"bmb_wc7_bb_pa_cnt_error",
+	"bmb_wc8_inp_fifo_error",
+	"bmb_wc8_sop_fifo_error",
+	"bmb_wc8_queue_fifo_error",
+	"bmb_wc8_free_point_fifo_error",
+	"bmb_wc8_next_point_fifo_error",
+	"bmb_wc8_strt_fifo_error",
+	"bmb_wc8_second_dscr_fifo_error",
+	"bmb_wc8_pkt_avail_fifo_error",
+	"bmb_wc8_cos_cnt_fifo_error",
+	"bmb_wc8_notify_fifo_error",
+	"bmb_wc8_ll_req_fifo_error",
+	"bmb_wc8_ll_pa_cnt_error",
+	"bmb_wc8_bb_pa_cnt_error",
+	"bmb_wc9_inp_fifo_error",
+	"bmb_wc9_sop_fifo_error",
+	"bmb_wc9_queue_fifo_error",
+	"bmb_wc9_free_point_fifo_error",
+	"bmb_wc9_next_point_fifo_error",
+	"bmb_wc9_strt_fifo_error",
+	"bmb_wc9_second_dscr_fifo_error",
+	"bmb_wc9_pkt_avail_fifo_error",
+	"bmb_wc9_cos_cnt_fifo_error",
+	"bmb_wc9_notify_fifo_error",
+	"bmb_wc9_ll_req_fifo_error",
+	"bmb_wc9_ll_pa_cnt_error",
+	"bmb_wc9_bb_pa_cnt_error",
+	"bmb_rc9_sop_rc_out_sync_fifo_error",
+	"bmb_rc9_sop_out_sync_fifo_push_error",
+	"bmb_rc0_sop_pend_fifo_error",
+	"bmb_rc1_sop_pend_fifo_error",
+	"bmb_rc2_sop_pend_fifo_error",
+	"bmb_rc3_sop_pend_fifo_error",
+	"bmb_rc4_sop_pend_fifo_error",
+	"bmb_rc5_sop_pend_fifo_error",
+	"bmb_rc6_sop_pend_fifo_error",
+	"bmb_rc7_sop_pend_fifo_error",
+	"bmb_rc0_dscr_pend_fifo_error",
+	"bmb_rc1_dscr_pend_fifo_error",
+	"bmb_rc2_dscr_pend_fifo_error",
+	"bmb_rc3_dscr_pend_fifo_error",
+	"bmb_rc4_dscr_pend_fifo_error",
+	"bmb_rc5_dscr_pend_fifo_error",
+	"bmb_rc6_dscr_pend_fifo_error",
+	"bmb_rc7_dscr_pend_fifo_error",
+	"bmb_rc8_sop_inp_sync_fifo_push_error",
+	"bmb_rc9_sop_inp_sync_fifo_push_error",
+	"bmb_rc8_sop_out_sync_fifo_push_error",
+	"bmb_rc_gnt_pend_fifo_error",
+	"bmb_rc8_out_sync_fifo_push_error",
+	"bmb_rc9_out_sync_fifo_push_error",
+	"bmb_wc8_sync_fifo_push_error",
+	"bmb_wc9_sync_fifo_push_error",
+	"bmb_rc8_sop_rc_out_sync_fifo_error",
+	"bmb_rc_pkt7_dscr_fifo_error",
+};
+#else
+#define bmb_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 bmb_int0_bb_a0_attn_idx[16] = {
+	0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 17, 18, 20, 22,
+};
+
+static struct attn_hw_reg bmb_int0_bb_a0 = {
+	0, 16, bmb_int0_bb_a0_attn_idx, 0x5400c0, 0x5400cc, 0x5400c8, 0x5400c4
+};
+
+static const u16 bmb_int1_bb_a0_attn_idx[28] = {
+	23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
+};
+
+static struct attn_hw_reg bmb_int1_bb_a0 = {
+	1, 28, bmb_int1_bb_a0_attn_idx, 0x5400d8, 0x5400e4, 0x5400e0, 0x5400dc
+};
+
+static const u16 bmb_int2_bb_a0_attn_idx[26] = {
+	51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
+	69, 70, 71, 72, 73, 74, 75, 76,
+};
+
+static struct attn_hw_reg bmb_int2_bb_a0 = {
+	2, 26, bmb_int2_bb_a0_attn_idx, 0x5400f0, 0x5400fc, 0x5400f8, 0x5400f4
+};
+
+static const u16 bmb_int3_bb_a0_attn_idx[31] = {
+	77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94,
+	95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+};
+
+static struct attn_hw_reg bmb_int3_bb_a0 = {
+	3, 31, bmb_int3_bb_a0_attn_idx, 0x540108, 0x540114, 0x540110, 0x54010c
+};
+
+static const u16 bmb_int4_bb_a0_attn_idx[27] = {
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
+};
+
+static struct attn_hw_reg bmb_int4_bb_a0 = {
+	4, 27, bmb_int4_bb_a0_attn_idx, 0x540120, 0x54012c, 0x540128, 0x540124
+};
+
+static const u16 bmb_int5_bb_a0_attn_idx[29] = {
+	135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
+	149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
+	163,
+};
+
+static struct attn_hw_reg bmb_int5_bb_a0 = {
+	5, 29, bmb_int5_bb_a0_attn_idx, 0x540138, 0x540144, 0x540140, 0x54013c
+};
+
+static const u16 bmb_int6_bb_a0_attn_idx[30] = {
+	164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
+	178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
+	192, 193,
+};
+
+static struct attn_hw_reg bmb_int6_bb_a0 = {
+	6, 30, bmb_int6_bb_a0_attn_idx, 0x540150, 0x54015c, 0x540158, 0x540154
+};
+
+static const u16 bmb_int7_bb_a0_attn_idx[32] = {
+	194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
+	208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221,
+	222, 223, 224, 225,
+};
+
+static struct attn_hw_reg bmb_int7_bb_a0 = {
+	7, 32, bmb_int7_bb_a0_attn_idx, 0x540168, 0x540174, 0x540170, 0x54016c
+};
+
+static const u16 bmb_int8_bb_a0_attn_idx[32] = {
+	226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239,
+	240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253,
+	254, 255, 256, 257,
+};
+
+static struct attn_hw_reg bmb_int8_bb_a0 = {
+	8, 32, bmb_int8_bb_a0_attn_idx, 0x540184, 0x540190, 0x54018c, 0x540188
+};
+
+static const u16 bmb_int9_bb_a0_attn_idx[32] = {
+	258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271,
+	272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285,
+	286, 287, 288, 289,
+};
+
+static struct attn_hw_reg bmb_int9_bb_a0 = {
+	9, 32, bmb_int9_bb_a0_attn_idx, 0x54019c, 0x5401a8, 0x5401a4, 0x5401a0
+};
+
+static const u16 bmb_int10_bb_a0_attn_idx[3] = {
+	290, 291, 292,
+};
+
+static struct attn_hw_reg bmb_int10_bb_a0 = {
+	10, 3, bmb_int10_bb_a0_attn_idx, 0x5401b4, 0x5401c0, 0x5401bc, 0x5401b8
+};
+
+static const u16 bmb_int11_bb_a0_attn_idx[4] = {
+	293, 294, 295, 296,
+};
+
+static struct attn_hw_reg bmb_int11_bb_a0 = {
+	11, 4, bmb_int11_bb_a0_attn_idx, 0x5401cc, 0x5401d8, 0x5401d4, 0x5401d0
+};
+
+static struct attn_hw_reg *bmb_int_bb_a0_regs[12] = {
+	&bmb_int0_bb_a0, &bmb_int1_bb_a0, &bmb_int2_bb_a0, &bmb_int3_bb_a0,
+	&bmb_int4_bb_a0, &bmb_int5_bb_a0, &bmb_int6_bb_a0, &bmb_int7_bb_a0,
+	&bmb_int8_bb_a0, &bmb_int9_bb_a0,
+	&bmb_int10_bb_a0, &bmb_int11_bb_a0,
+};
+
+static const u16 bmb_int0_bb_b0_attn_idx[16] = {
+	0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 17, 18, 20, 22,
+};
+
+static struct attn_hw_reg bmb_int0_bb_b0 = {
+	0, 16, bmb_int0_bb_b0_attn_idx, 0x5400c0, 0x5400cc, 0x5400c8, 0x5400c4
+};
+
+static const u16 bmb_int1_bb_b0_attn_idx[28] = {
+	23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
+};
+
+static struct attn_hw_reg bmb_int1_bb_b0 = {
+	1, 28, bmb_int1_bb_b0_attn_idx, 0x5400d8, 0x5400e4, 0x5400e0, 0x5400dc
+};
+
+static const u16 bmb_int2_bb_b0_attn_idx[26] = {
+	51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
+	69, 70, 71, 72, 73, 74, 75, 76,
+};
+
+static struct attn_hw_reg bmb_int2_bb_b0 = {
+	2, 26, bmb_int2_bb_b0_attn_idx, 0x5400f0, 0x5400fc, 0x5400f8, 0x5400f4
+};
+
+static const u16 bmb_int3_bb_b0_attn_idx[31] = {
+	77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94,
+	95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+};
+
+static struct attn_hw_reg bmb_int3_bb_b0 = {
+	3, 31, bmb_int3_bb_b0_attn_idx, 0x540108, 0x540114, 0x540110, 0x54010c
+};
+
+static const u16 bmb_int4_bb_b0_attn_idx[27] = {
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
+};
+
+static struct attn_hw_reg bmb_int4_bb_b0 = {
+	4, 27, bmb_int4_bb_b0_attn_idx, 0x540120, 0x54012c, 0x540128, 0x540124
+};
+
+static const u16 bmb_int5_bb_b0_attn_idx[29] = {
+	135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
+	149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
+	163,
+};
+
+static struct attn_hw_reg bmb_int5_bb_b0 = {
+	5, 29, bmb_int5_bb_b0_attn_idx, 0x540138, 0x540144, 0x540140, 0x54013c
+};
+
+static const u16 bmb_int6_bb_b0_attn_idx[30] = {
+	164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
+	178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
+	192, 193,
+};
+
+static struct attn_hw_reg bmb_int6_bb_b0 = {
+	6, 30, bmb_int6_bb_b0_attn_idx, 0x540150, 0x54015c, 0x540158, 0x540154
+};
+
+static const u16 bmb_int7_bb_b0_attn_idx[32] = {
+	194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
+	208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221,
+	222, 223, 224, 225,
+};
+
+static struct attn_hw_reg bmb_int7_bb_b0 = {
+	7, 32, bmb_int7_bb_b0_attn_idx, 0x540168, 0x540174, 0x540170, 0x54016c
+};
+
+static const u16 bmb_int8_bb_b0_attn_idx[32] = {
+	226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239,
+	240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253,
+	254, 255, 256, 257,
+};
+
+static struct attn_hw_reg bmb_int8_bb_b0 = {
+	8, 32, bmb_int8_bb_b0_attn_idx, 0x540184, 0x540190, 0x54018c, 0x540188
+};
+
+static const u16 bmb_int9_bb_b0_attn_idx[32] = {
+	258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271,
+	272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285,
+	286, 287, 288, 289,
+};
+
+static struct attn_hw_reg bmb_int9_bb_b0 = {
+	9, 32, bmb_int9_bb_b0_attn_idx, 0x54019c, 0x5401a8, 0x5401a4, 0x5401a0
+};
+
+static const u16 bmb_int10_bb_b0_attn_idx[3] = {
+	290, 291, 292,
+};
+
+static struct attn_hw_reg bmb_int10_bb_b0 = {
+	10, 3, bmb_int10_bb_b0_attn_idx, 0x5401b4, 0x5401c0, 0x5401bc, 0x5401b8
+};
+
+static const u16 bmb_int11_bb_b0_attn_idx[4] = {
+	293, 294, 295, 296,
+};
+
+static struct attn_hw_reg bmb_int11_bb_b0 = {
+	11, 4, bmb_int11_bb_b0_attn_idx, 0x5401cc, 0x5401d8, 0x5401d4, 0x5401d0
+};
+
+static struct attn_hw_reg *bmb_int_bb_b0_regs[12] = {
+	&bmb_int0_bb_b0, &bmb_int1_bb_b0, &bmb_int2_bb_b0, &bmb_int3_bb_b0,
+	&bmb_int4_bb_b0, &bmb_int5_bb_b0, &bmb_int6_bb_b0, &bmb_int7_bb_b0,
+	&bmb_int8_bb_b0, &bmb_int9_bb_b0,
+	&bmb_int10_bb_b0, &bmb_int11_bb_b0,
+};
+
+static const u16 bmb_int0_k2_attn_idx[16] = {
+	0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 17, 18, 20, 22,
+};
+
+static struct attn_hw_reg bmb_int0_k2 = {
+	0, 16, bmb_int0_k2_attn_idx, 0x5400c0, 0x5400cc, 0x5400c8, 0x5400c4
+};
+
+static const u16 bmb_int1_k2_attn_idx[28] = {
+	23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
+};
+
+static struct attn_hw_reg bmb_int1_k2 = {
+	1, 28, bmb_int1_k2_attn_idx, 0x5400d8, 0x5400e4, 0x5400e0, 0x5400dc
+};
+
+static const u16 bmb_int2_k2_attn_idx[26] = {
+	51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
+	69, 70, 71, 72, 73, 74, 75, 76,
+};
+
+static struct attn_hw_reg bmb_int2_k2 = {
+	2, 26, bmb_int2_k2_attn_idx, 0x5400f0, 0x5400fc, 0x5400f8, 0x5400f4
+};
+
+static const u16 bmb_int3_k2_attn_idx[31] = {
+	77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94,
+	95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+};
+
+static struct attn_hw_reg bmb_int3_k2 = {
+	3, 31, bmb_int3_k2_attn_idx, 0x540108, 0x540114, 0x540110, 0x54010c
+};
+
+static const u16 bmb_int4_k2_attn_idx[27] = {
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
+};
+
+static struct attn_hw_reg bmb_int4_k2 = {
+	4, 27, bmb_int4_k2_attn_idx, 0x540120, 0x54012c, 0x540128, 0x540124
+};
+
+static const u16 bmb_int5_k2_attn_idx[29] = {
+	135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
+	149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
+	163,
+};
+
+static struct attn_hw_reg bmb_int5_k2 = {
+	5, 29, bmb_int5_k2_attn_idx, 0x540138, 0x540144, 0x540140, 0x54013c
+};
+
+static const u16 bmb_int6_k2_attn_idx[30] = {
+	164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
+	178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
+	192, 193,
+};
+
+static struct attn_hw_reg bmb_int6_k2 = {
+	6, 30, bmb_int6_k2_attn_idx, 0x540150, 0x54015c, 0x540158, 0x540154
+};
+
+static const u16 bmb_int7_k2_attn_idx[32] = {
+	194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
+	208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221,
+	222, 223, 224, 225,
+};
+
+static struct attn_hw_reg bmb_int7_k2 = {
+	7, 32, bmb_int7_k2_attn_idx, 0x540168, 0x540174, 0x540170, 0x54016c
+};
+
+static const u16 bmb_int8_k2_attn_idx[32] = {
+	226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239,
+	240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253,
+	254, 255, 256, 257,
+};
+
+static struct attn_hw_reg bmb_int8_k2 = {
+	8, 32, bmb_int8_k2_attn_idx, 0x540184, 0x540190, 0x54018c, 0x540188
+};
+
+static const u16 bmb_int9_k2_attn_idx[32] = {
+	258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271,
+	272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285,
+	286, 287, 288, 289,
+};
+
+static struct attn_hw_reg bmb_int9_k2 = {
+	9, 32, bmb_int9_k2_attn_idx, 0x54019c, 0x5401a8, 0x5401a4, 0x5401a0
+};
+
+static const u16 bmb_int10_k2_attn_idx[3] = {
+	290, 291, 292,
+};
+
+static struct attn_hw_reg bmb_int10_k2 = {
+	10, 3, bmb_int10_k2_attn_idx, 0x5401b4, 0x5401c0, 0x5401bc, 0x5401b8
+};
+
+static const u16 bmb_int11_k2_attn_idx[4] = {
+	293, 294, 295, 296,
+};
+
+static struct attn_hw_reg bmb_int11_k2 = {
+	11, 4, bmb_int11_k2_attn_idx, 0x5401cc, 0x5401d8, 0x5401d4, 0x5401d0
+};
+
+static struct attn_hw_reg *bmb_int_k2_regs[12] = {
+	&bmb_int0_k2, &bmb_int1_k2, &bmb_int2_k2, &bmb_int3_k2, &bmb_int4_k2,
+	&bmb_int5_k2, &bmb_int6_k2, &bmb_int7_k2, &bmb_int8_k2, &bmb_int9_k2,
+	&bmb_int10_k2, &bmb_int11_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *bmb_prty_attn_desc[61] = {
+	"bmb_ll_bank0_mem_prty",
+	"bmb_ll_bank1_mem_prty",
+	"bmb_ll_bank2_mem_prty",
+	"bmb_ll_bank3_mem_prty",
+	"bmb_datapath_registers",
+	"bmb_mem001_i_ecc_rf_int",
+	"bmb_mem008_i_ecc_rf_int",
+	"bmb_mem009_i_ecc_rf_int",
+	"bmb_mem010_i_ecc_rf_int",
+	"bmb_mem011_i_ecc_rf_int",
+	"bmb_mem012_i_ecc_rf_int",
+	"bmb_mem013_i_ecc_rf_int",
+	"bmb_mem014_i_ecc_rf_int",
+	"bmb_mem015_i_ecc_rf_int",
+	"bmb_mem016_i_ecc_rf_int",
+	"bmb_mem002_i_ecc_rf_int",
+	"bmb_mem003_i_ecc_rf_int",
+	"bmb_mem004_i_ecc_rf_int",
+	"bmb_mem005_i_ecc_rf_int",
+	"bmb_mem006_i_ecc_rf_int",
+	"bmb_mem007_i_ecc_rf_int",
+	"bmb_mem059_i_mem_prty",
+	"bmb_mem060_i_mem_prty",
+	"bmb_mem037_i_mem_prty",
+	"bmb_mem038_i_mem_prty",
+	"bmb_mem039_i_mem_prty",
+	"bmb_mem040_i_mem_prty",
+	"bmb_mem041_i_mem_prty",
+	"bmb_mem042_i_mem_prty",
+	"bmb_mem043_i_mem_prty",
+	"bmb_mem044_i_mem_prty",
+	"bmb_mem045_i_mem_prty",
+	"bmb_mem046_i_mem_prty",
+	"bmb_mem047_i_mem_prty",
+	"bmb_mem048_i_mem_prty",
+	"bmb_mem049_i_mem_prty",
+	"bmb_mem050_i_mem_prty",
+	"bmb_mem051_i_mem_prty",
+	"bmb_mem052_i_mem_prty",
+	"bmb_mem053_i_mem_prty",
+	"bmb_mem054_i_mem_prty",
+	"bmb_mem055_i_mem_prty",
+	"bmb_mem056_i_mem_prty",
+	"bmb_mem057_i_mem_prty",
+	"bmb_mem058_i_mem_prty",
+	"bmb_mem033_i_mem_prty",
+	"bmb_mem034_i_mem_prty",
+	"bmb_mem035_i_mem_prty",
+	"bmb_mem036_i_mem_prty",
+	"bmb_mem021_i_mem_prty",
+	"bmb_mem022_i_mem_prty",
+	"bmb_mem023_i_mem_prty",
+	"bmb_mem024_i_mem_prty",
+	"bmb_mem025_i_mem_prty",
+	"bmb_mem026_i_mem_prty",
+	"bmb_mem027_i_mem_prty",
+	"bmb_mem028_i_mem_prty",
+	"bmb_mem029_i_mem_prty",
+	"bmb_mem030_i_mem_prty",
+	"bmb_mem031_i_mem_prty",
+	"bmb_mem032_i_mem_prty",
+};
+#else
+#define bmb_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 bmb_prty1_bb_a0_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
+	24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg bmb_prty1_bb_a0 = {
+	0, 31, bmb_prty1_bb_a0_attn_idx, 0x540400, 0x54040c, 0x540408, 0x540404
+};
+
+static const u16 bmb_prty2_bb_a0_attn_idx[25] = {
+	36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
+	54, 55, 56, 57, 58, 59, 60,
+};
+
+static struct attn_hw_reg bmb_prty2_bb_a0 = {
+	1, 25, bmb_prty2_bb_a0_attn_idx, 0x540410, 0x54041c, 0x540418, 0x540414
+};
+
+static struct attn_hw_reg *bmb_prty_bb_a0_regs[2] = {
+	&bmb_prty1_bb_a0, &bmb_prty2_bb_a0,
+};
+
+static const u16 bmb_prty0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg bmb_prty0_bb_b0 = {
+	0, 5, bmb_prty0_bb_b0_attn_idx, 0x5401dc, 0x5401e8, 0x5401e4, 0x5401e0
+};
+
+static const u16 bmb_prty1_bb_b0_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
+	24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg bmb_prty1_bb_b0 = {
+	1, 31, bmb_prty1_bb_b0_attn_idx, 0x540400, 0x54040c, 0x540408, 0x540404
+};
+
+static const u16 bmb_prty2_bb_b0_attn_idx[15] = {
+	36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
+};
+
+static struct attn_hw_reg bmb_prty2_bb_b0 = {
+	2, 15, bmb_prty2_bb_b0_attn_idx, 0x540410, 0x54041c, 0x540418, 0x540414
+};
+
+static struct attn_hw_reg *bmb_prty_bb_b0_regs[3] = {
+	&bmb_prty0_bb_b0, &bmb_prty1_bb_b0, &bmb_prty2_bb_b0,
+};
+
+static const u16 bmb_prty0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg bmb_prty0_k2 = {
+	0, 5, bmb_prty0_k2_attn_idx, 0x5401dc, 0x5401e8, 0x5401e4, 0x5401e0
+};
+
+static const u16 bmb_prty1_k2_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
+	24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg bmb_prty1_k2 = {
+	1, 31, bmb_prty1_k2_attn_idx, 0x540400, 0x54040c, 0x540408, 0x540404
+};
+
+static const u16 bmb_prty2_k2_attn_idx[15] = {
+	36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
+};
+
+static struct attn_hw_reg bmb_prty2_k2 = {
+	2, 15, bmb_prty2_k2_attn_idx, 0x540410, 0x54041c, 0x540418, 0x540414
+};
+
+static struct attn_hw_reg *bmb_prty_k2_regs[3] = {
+	&bmb_prty0_k2, &bmb_prty1_k2, &bmb_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pcie_int_attn_desc[17] = {
+	"pcie_address_error",
+	"pcie_link_down_detect",
+	"pcie_link_up_detect",
+	"pcie_cfg_link_eq_req_int",
+	"pcie_pcie_bandwidth_change_detect",
+	"pcie_early_hot_reset_detect",
+	"pcie_hot_reset_detect",
+	"pcie_l1_entry_detect",
+	"pcie_l1_exit_detect",
+	"pcie_ltssm_state_match_detect",
+	"pcie_fc_timeout_detect",
+	"pcie_pme_turnoff_message_detect",
+	"pcie_cfg_send_cor_err",
+	"pcie_cfg_send_nf_err",
+	"pcie_cfg_send_f_err",
+	"pcie_qoverflow_detect",
+	"pcie_vdm_detect",
+};
+#else
+#define pcie_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pcie_int0_k2_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg pcie_int0_k2 = {
+	0, 17, pcie_int0_k2_attn_idx, 0x547a0, 0x547ac, 0x547a8, 0x547a4
+};
+
+static struct attn_hw_reg *pcie_int_k2_regs[1] = {
+	&pcie_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pcie_prty_attn_desc[24] = {
+	"pcie_mem003_i_ecc_rf_int",
+	"pcie_mem004_i_ecc_rf_int",
+	"pcie_mem008_i_mem_prty",
+	"pcie_mem007_i_mem_prty",
+	"pcie_mem005_i_mem_prty",
+	"pcie_mem006_i_mem_prty",
+	"pcie_mem001_i_mem_prty",
+	"pcie_mem002_i_mem_prty",
+	"pcie_mem001_i_ecc_rf_int",
+	"pcie_mem005_i_ecc_rf_int",
+	"pcie_mem010_i_ecc_rf_int",
+	"pcie_mem009_i_ecc_rf_int",
+	"pcie_mem007_i_ecc_rf_int",
+	"pcie_mem004_i_mem_prty_0",
+	"pcie_mem004_i_mem_prty_1",
+	"pcie_mem004_i_mem_prty_2",
+	"pcie_mem004_i_mem_prty_3",
+	"pcie_mem011_i_mem_prty_1",
+	"pcie_mem011_i_mem_prty_2",
+	"pcie_mem012_i_mem_prty_1",
+	"pcie_mem012_i_mem_prty_2",
+	"pcie_app_parity_errs_0",
+	"pcie_app_parity_errs_1",
+	"pcie_app_parity_errs_2",
+};
+#else
+#define pcie_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pcie_prty1_bb_a0_attn_idx[17] = {
+	0, 2, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+};
+
+static struct attn_hw_reg pcie_prty1_bb_a0 = {
+	0, 17, pcie_prty1_bb_a0_attn_idx, 0x54000, 0x5400c, 0x54008, 0x54004
+};
+
+static struct attn_hw_reg *pcie_prty_bb_a0_regs[1] = {
+	&pcie_prty1_bb_a0,
+};
+
+static const u16 pcie_prty1_bb_b0_attn_idx[17] = {
+	0, 2, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+};
+
+static struct attn_hw_reg pcie_prty1_bb_b0 = {
+	0, 17, pcie_prty1_bb_b0_attn_idx, 0x54000, 0x5400c, 0x54008, 0x54004
+};
+
+static struct attn_hw_reg *pcie_prty_bb_b0_regs[1] = {
+	&pcie_prty1_bb_b0,
+};
+
+static const u16 pcie_prty1_k2_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg pcie_prty1_k2 = {
+	0, 8, pcie_prty1_k2_attn_idx, 0x54000, 0x5400c, 0x54008, 0x54004
+};
+
+static const u16 pcie_prty0_k2_attn_idx[3] = {
+	21, 22, 23,
+};
+
+static struct attn_hw_reg pcie_prty0_k2 = {
+	1, 3, pcie_prty0_k2_attn_idx, 0x547b0, 0x547bc, 0x547b8, 0x547b4
+};
+
+static struct attn_hw_reg *pcie_prty_k2_regs[2] = {
+	&pcie_prty1_k2, &pcie_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *mcp2_prty_attn_desc[13] = {
+	"mcp2_rom_parity",
+	"mcp2_mem001_i_ecc_rf_int",
+	"mcp2_mem006_i_ecc_0_rf_int",
+	"mcp2_mem006_i_ecc_1_rf_int",
+	"mcp2_mem006_i_ecc_2_rf_int",
+	"mcp2_mem006_i_ecc_3_rf_int",
+	"mcp2_mem007_i_ecc_rf_int",
+	"mcp2_mem004_i_mem_prty",
+	"mcp2_mem003_i_mem_prty",
+	"mcp2_mem002_i_mem_prty",
+	"mcp2_mem009_i_mem_prty",
+	"mcp2_mem008_i_mem_prty",
+	"mcp2_mem005_i_mem_prty",
+};
+#else
+#define mcp2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 mcp2_prty0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg mcp2_prty0_bb_a0 = {
+	0, 1, mcp2_prty0_bb_a0_attn_idx, 0x52040, 0x5204c, 0x52048, 0x52044
+};
+
+static const u16 mcp2_prty1_bb_a0_attn_idx[12] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg mcp2_prty1_bb_a0 = {
+	1, 12, mcp2_prty1_bb_a0_attn_idx, 0x52204, 0x52210, 0x5220c, 0x52208
+};
+
+static struct attn_hw_reg *mcp2_prty_bb_a0_regs[2] = {
+	&mcp2_prty0_bb_a0, &mcp2_prty1_bb_a0,
+};
+
+static const u16 mcp2_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg mcp2_prty0_bb_b0 = {
+	0, 1, mcp2_prty0_bb_b0_attn_idx, 0x52040, 0x5204c, 0x52048, 0x52044
+};
+
+static const u16 mcp2_prty1_bb_b0_attn_idx[12] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg mcp2_prty1_bb_b0 = {
+	1, 12, mcp2_prty1_bb_b0_attn_idx, 0x52204, 0x52210, 0x5220c, 0x52208
+};
+
+static struct attn_hw_reg *mcp2_prty_bb_b0_regs[2] = {
+	&mcp2_prty0_bb_b0, &mcp2_prty1_bb_b0,
+};
+
+static const u16 mcp2_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg mcp2_prty0_k2 = {
+	0, 1, mcp2_prty0_k2_attn_idx, 0x52040, 0x5204c, 0x52048, 0x52044
+};
+
+static const u16 mcp2_prty1_k2_attn_idx[12] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg mcp2_prty1_k2 = {
+	1, 12, mcp2_prty1_k2_attn_idx, 0x52204, 0x52210, 0x5220c, 0x52208
+};
+
+static struct attn_hw_reg *mcp2_prty_k2_regs[2] = {
+	&mcp2_prty0_k2, &mcp2_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswhst_int_attn_desc[18] = {
+	"pswhst_address_error",
+	"pswhst_hst_src_fifo1_err",
+	"pswhst_hst_src_fifo2_err",
+	"pswhst_hst_src_fifo3_err",
+	"pswhst_hst_src_fifo4_err",
+	"pswhst_hst_src_fifo5_err",
+	"pswhst_hst_hdr_sync_fifo_err",
+	"pswhst_hst_data_sync_fifo_err",
+	"pswhst_hst_cpl_sync_fifo_err",
+	"pswhst_hst_vf_disabled_access",
+	"pswhst_hst_permission_violation",
+	"pswhst_hst_incorrect_access",
+	"pswhst_hst_src_fifo6_err",
+	"pswhst_hst_src_fifo7_err",
+	"pswhst_hst_src_fifo8_err",
+	"pswhst_hst_src_fifo9_err",
+	"pswhst_hst_source_credit_violation",
+	"pswhst_hst_timeout",
+};
+#else
+#define pswhst_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswhst_int0_bb_a0_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_int0_bb_a0 = {
+	0, 18, pswhst_int0_bb_a0_attn_idx, 0x2a0180, 0x2a018c, 0x2a0188,
+	0x2a0184
+};
+
+static struct attn_hw_reg *pswhst_int_bb_a0_regs[1] = {
+	&pswhst_int0_bb_a0,
+};
+
+static const u16 pswhst_int0_bb_b0_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_int0_bb_b0 = {
+	0, 18, pswhst_int0_bb_b0_attn_idx, 0x2a0180, 0x2a018c, 0x2a0188,
+	0x2a0184
+};
+
+static struct attn_hw_reg *pswhst_int_bb_b0_regs[1] = {
+	&pswhst_int0_bb_b0,
+};
+
+static const u16 pswhst_int0_k2_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_int0_k2 = {
+	0, 18, pswhst_int0_k2_attn_idx, 0x2a0180, 0x2a018c, 0x2a0188, 0x2a0184
+};
+
+static struct attn_hw_reg *pswhst_int_k2_regs[1] = {
+	&pswhst_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswhst_prty_attn_desc[18] = {
+	"pswhst_datapath_registers",
+	"pswhst_mem006_i_mem_prty",
+	"pswhst_mem007_i_mem_prty",
+	"pswhst_mem005_i_mem_prty",
+	"pswhst_mem002_i_mem_prty",
+	"pswhst_mem003_i_mem_prty",
+	"pswhst_mem001_i_mem_prty",
+	"pswhst_mem008_i_mem_prty",
+	"pswhst_mem004_i_mem_prty",
+	"pswhst_mem009_i_mem_prty",
+	"pswhst_mem010_i_mem_prty",
+	"pswhst_mem016_i_mem_prty",
+	"pswhst_mem012_i_mem_prty",
+	"pswhst_mem013_i_mem_prty",
+	"pswhst_mem014_i_mem_prty",
+	"pswhst_mem015_i_mem_prty",
+	"pswhst_mem011_i_mem_prty",
+	"pswhst_mem017_i_mem_prty",
+};
+#else
+#define pswhst_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswhst_prty1_bb_a0_attn_idx[17] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_prty1_bb_a0 = {
+	0, 17, pswhst_prty1_bb_a0_attn_idx, 0x2a0200, 0x2a020c, 0x2a0208,
+	0x2a0204
+};
+
+static struct attn_hw_reg *pswhst_prty_bb_a0_regs[1] = {
+	&pswhst_prty1_bb_a0,
+};
+
+static const u16 pswhst_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswhst_prty0_bb_b0 = {
+	0, 1, pswhst_prty0_bb_b0_attn_idx, 0x2a0190, 0x2a019c, 0x2a0198,
+	0x2a0194
+};
+
+static const u16 pswhst_prty1_bb_b0_attn_idx[17] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_prty1_bb_b0 = {
+	1, 17, pswhst_prty1_bb_b0_attn_idx, 0x2a0200, 0x2a020c, 0x2a0208,
+	0x2a0204
+};
+
+static struct attn_hw_reg *pswhst_prty_bb_b0_regs[2] = {
+	&pswhst_prty0_bb_b0, &pswhst_prty1_bb_b0,
+};
+
+static const u16 pswhst_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswhst_prty0_k2 = {
+	0, 1, pswhst_prty0_k2_attn_idx, 0x2a0190, 0x2a019c, 0x2a0198, 0x2a0194
+};
+
+static const u16 pswhst_prty1_k2_attn_idx[17] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pswhst_prty1_k2 = {
+	1, 17, pswhst_prty1_k2_attn_idx, 0x2a0200, 0x2a020c, 0x2a0208, 0x2a0204
+};
+
+static struct attn_hw_reg *pswhst_prty_k2_regs[2] = {
+	&pswhst_prty0_k2, &pswhst_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswhst2_int_attn_desc[5] = {
+	"pswhst2_address_error",
+	"pswhst2_hst_header_fifo_err",
+	"pswhst2_hst_data_fifo_err",
+	"pswhst2_hst_cpl_fifo_err",
+	"pswhst2_hst_ireq_fifo_err",
+};
+#else
+#define pswhst2_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswhst2_int0_bb_a0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswhst2_int0_bb_a0 = {
+	0, 5, pswhst2_int0_bb_a0_attn_idx, 0x29e180, 0x29e18c, 0x29e188,
+	0x29e184
+};
+
+static struct attn_hw_reg *pswhst2_int_bb_a0_regs[1] = {
+	&pswhst2_int0_bb_a0,
+};
+
+static const u16 pswhst2_int0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswhst2_int0_bb_b0 = {
+	0, 5, pswhst2_int0_bb_b0_attn_idx, 0x29e180, 0x29e18c, 0x29e188,
+	0x29e184
+};
+
+static struct attn_hw_reg *pswhst2_int_bb_b0_regs[1] = {
+	&pswhst2_int0_bb_b0,
+};
+
+static const u16 pswhst2_int0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswhst2_int0_k2 = {
+	0, 5, pswhst2_int0_k2_attn_idx, 0x29e180, 0x29e18c, 0x29e188, 0x29e184
+};
+
+static struct attn_hw_reg *pswhst2_int_k2_regs[1] = {
+	&pswhst2_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswhst2_prty_attn_desc[1] = {
+	"pswhst2_datapath_registers",
+};
+#else
+#define pswhst2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswhst2_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswhst2_prty0_bb_b0 = {
+	0, 1, pswhst2_prty0_bb_b0_attn_idx, 0x29e190, 0x29e19c, 0x29e198,
+	0x29e194
+};
+
+static struct attn_hw_reg *pswhst2_prty_bb_b0_regs[1] = {
+	&pswhst2_prty0_bb_b0,
+};
+
+static const u16 pswhst2_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswhst2_prty0_k2 = {
+	0, 1, pswhst2_prty0_k2_attn_idx, 0x29e190, 0x29e19c, 0x29e198, 0x29e194
+};
+
+static struct attn_hw_reg *pswhst2_prty_k2_regs[1] = {
+	&pswhst2_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrd_int_attn_desc[3] = {
+	"pswrd_address_error",
+	"pswrd_pop_error",
+	"pswrd_pop_pbf_error",
+};
+#else
+#define pswrd_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrd_int0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg pswrd_int0_bb_a0 = {
+	0, 3, pswrd_int0_bb_a0_attn_idx, 0x29c180, 0x29c18c, 0x29c188, 0x29c184
+};
+
+static struct attn_hw_reg *pswrd_int_bb_a0_regs[1] = {
+	&pswrd_int0_bb_a0,
+};
+
+static const u16 pswrd_int0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg pswrd_int0_bb_b0 = {
+	0, 3, pswrd_int0_bb_b0_attn_idx, 0x29c180, 0x29c18c, 0x29c188, 0x29c184
+};
+
+static struct attn_hw_reg *pswrd_int_bb_b0_regs[1] = {
+	&pswrd_int0_bb_b0,
+};
+
+static const u16 pswrd_int0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg pswrd_int0_k2 = {
+	0, 3, pswrd_int0_k2_attn_idx, 0x29c180, 0x29c18c, 0x29c188, 0x29c184
+};
+
+static struct attn_hw_reg *pswrd_int_k2_regs[1] = {
+	&pswrd_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrd_prty_attn_desc[1] = {
+	"pswrd_datapath_registers",
+};
+#else
+#define pswrd_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrd_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrd_prty0_bb_b0 = {
+	0, 1, pswrd_prty0_bb_b0_attn_idx, 0x29c190, 0x29c19c, 0x29c198,
+	0x29c194
+};
+
+static struct attn_hw_reg *pswrd_prty_bb_b0_regs[1] = {
+	&pswrd_prty0_bb_b0,
+};
+
+static const u16 pswrd_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrd_prty0_k2 = {
+	0, 1, pswrd_prty0_k2_attn_idx, 0x29c190, 0x29c19c, 0x29c198, 0x29c194
+};
+
+static struct attn_hw_reg *pswrd_prty_k2_regs[1] = {
+	&pswrd_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrd2_int_attn_desc[5] = {
+	"pswrd2_address_error",
+	"pswrd2_sr_fifo_error",
+	"pswrd2_blk_fifo_error",
+	"pswrd2_push_error",
+	"pswrd2_push_pbf_error",
+};
+#else
+#define pswrd2_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrd2_int0_bb_a0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswrd2_int0_bb_a0 = {
+	0, 5, pswrd2_int0_bb_a0_attn_idx, 0x29d180, 0x29d18c, 0x29d188,
+	0x29d184
+};
+
+static struct attn_hw_reg *pswrd2_int_bb_a0_regs[1] = {
+	&pswrd2_int0_bb_a0,
+};
+
+static const u16 pswrd2_int0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswrd2_int0_bb_b0 = {
+	0, 5, pswrd2_int0_bb_b0_attn_idx, 0x29d180, 0x29d18c, 0x29d188,
+	0x29d184
+};
+
+static struct attn_hw_reg *pswrd2_int_bb_b0_regs[1] = {
+	&pswrd2_int0_bb_b0,
+};
+
+static const u16 pswrd2_int0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pswrd2_int0_k2 = {
+	0, 5, pswrd2_int0_k2_attn_idx, 0x29d180, 0x29d18c, 0x29d188, 0x29d184
+};
+
+static struct attn_hw_reg *pswrd2_int_k2_regs[1] = {
+	&pswrd2_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrd2_prty_attn_desc[36] = {
+	"pswrd2_datapath_registers",
+	"pswrd2_mem017_i_ecc_rf_int",
+	"pswrd2_mem018_i_ecc_rf_int",
+	"pswrd2_mem019_i_ecc_rf_int",
+	"pswrd2_mem020_i_ecc_rf_int",
+	"pswrd2_mem021_i_ecc_rf_int",
+	"pswrd2_mem022_i_ecc_rf_int",
+	"pswrd2_mem023_i_ecc_rf_int",
+	"pswrd2_mem024_i_ecc_rf_int",
+	"pswrd2_mem025_i_ecc_rf_int",
+	"pswrd2_mem015_i_ecc_rf_int",
+	"pswrd2_mem034_i_mem_prty",
+	"pswrd2_mem032_i_mem_prty",
+	"pswrd2_mem028_i_mem_prty",
+	"pswrd2_mem033_i_mem_prty",
+	"pswrd2_mem030_i_mem_prty",
+	"pswrd2_mem029_i_mem_prty",
+	"pswrd2_mem031_i_mem_prty",
+	"pswrd2_mem027_i_mem_prty",
+	"pswrd2_mem026_i_mem_prty",
+	"pswrd2_mem001_i_mem_prty",
+	"pswrd2_mem007_i_mem_prty",
+	"pswrd2_mem008_i_mem_prty",
+	"pswrd2_mem009_i_mem_prty",
+	"pswrd2_mem010_i_mem_prty",
+	"pswrd2_mem011_i_mem_prty",
+	"pswrd2_mem012_i_mem_prty",
+	"pswrd2_mem013_i_mem_prty",
+	"pswrd2_mem014_i_mem_prty",
+	"pswrd2_mem002_i_mem_prty",
+	"pswrd2_mem003_i_mem_prty",
+	"pswrd2_mem004_i_mem_prty",
+	"pswrd2_mem005_i_mem_prty",
+	"pswrd2_mem006_i_mem_prty",
+	"pswrd2_mem016_i_mem_prty",
+	"pswrd2_mem015_i_mem_prty",
+};
+#else
+#define pswrd2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrd2_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
+	22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
+};
+
+static struct attn_hw_reg pswrd2_prty1_bb_a0 = {
+	0, 31, pswrd2_prty1_bb_a0_attn_idx, 0x29d200, 0x29d20c, 0x29d208,
+	0x29d204
+};
+
+static const u16 pswrd2_prty2_bb_a0_attn_idx[3] = {
+	33, 34, 35,
+};
+
+static struct attn_hw_reg pswrd2_prty2_bb_a0 = {
+	1, 3, pswrd2_prty2_bb_a0_attn_idx, 0x29d210, 0x29d21c, 0x29d218,
+	0x29d214
+};
+
+static struct attn_hw_reg *pswrd2_prty_bb_a0_regs[2] = {
+	&pswrd2_prty1_bb_a0, &pswrd2_prty2_bb_a0,
+};
+
+static const u16 pswrd2_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrd2_prty0_bb_b0 = {
+	0, 1, pswrd2_prty0_bb_b0_attn_idx, 0x29d190, 0x29d19c, 0x29d198,
+	0x29d194
+};
+
+static const u16 pswrd2_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pswrd2_prty1_bb_b0 = {
+	1, 31, pswrd2_prty1_bb_b0_attn_idx, 0x29d200, 0x29d20c, 0x29d208,
+	0x29d204
+};
+
+static const u16 pswrd2_prty2_bb_b0_attn_idx[3] = {
+	32, 33, 34,
+};
+
+static struct attn_hw_reg pswrd2_prty2_bb_b0 = {
+	2, 3, pswrd2_prty2_bb_b0_attn_idx, 0x29d210, 0x29d21c, 0x29d218,
+	0x29d214
+};
+
+static struct attn_hw_reg *pswrd2_prty_bb_b0_regs[3] = {
+	&pswrd2_prty0_bb_b0, &pswrd2_prty1_bb_b0, &pswrd2_prty2_bb_b0,
+};
+
+static const u16 pswrd2_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrd2_prty0_k2 = {
+	0, 1, pswrd2_prty0_k2_attn_idx, 0x29d190, 0x29d19c, 0x29d198, 0x29d194
+};
+
+static const u16 pswrd2_prty1_k2_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pswrd2_prty1_k2 = {
+	1, 31, pswrd2_prty1_k2_attn_idx, 0x29d200, 0x29d20c, 0x29d208, 0x29d204
+};
+
+static const u16 pswrd2_prty2_k2_attn_idx[3] = {
+	32, 33, 34,
+};
+
+static struct attn_hw_reg pswrd2_prty2_k2 = {
+	2, 3, pswrd2_prty2_k2_attn_idx, 0x29d210, 0x29d21c, 0x29d218, 0x29d214
+};
+
+static struct attn_hw_reg *pswrd2_prty_k2_regs[3] = {
+	&pswrd2_prty0_k2, &pswrd2_prty1_k2, &pswrd2_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswwr_int_attn_desc[16] = {
+	"pswwr_address_error",
+	"pswwr_src_fifo_overflow",
+	"pswwr_qm_fifo_overflow",
+	"pswwr_tm_fifo_overflow",
+	"pswwr_usdm_fifo_overflow",
+	"pswwr_usdmdp_fifo_overflow",
+	"pswwr_xsdm_fifo_overflow",
+	"pswwr_tsdm_fifo_overflow",
+	"pswwr_cduwr_fifo_overflow",
+	"pswwr_dbg_fifo_overflow",
+	"pswwr_dmae_fifo_overflow",
+	"pswwr_hc_fifo_overflow",
+	"pswwr_msdm_fifo_overflow",
+	"pswwr_ysdm_fifo_overflow",
+	"pswwr_psdm_fifo_overflow",
+	"pswwr_m2p_fifo_overflow",
+};
+#else
+#define pswwr_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswwr_int0_bb_a0_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg pswwr_int0_bb_a0 = {
+	0, 16, pswwr_int0_bb_a0_attn_idx, 0x29a180, 0x29a18c, 0x29a188,
+	0x29a184
+};
+
+static struct attn_hw_reg *pswwr_int_bb_a0_regs[1] = {
+	&pswwr_int0_bb_a0,
+};
+
+static const u16 pswwr_int0_bb_b0_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg pswwr_int0_bb_b0 = {
+	0, 16, pswwr_int0_bb_b0_attn_idx, 0x29a180, 0x29a18c, 0x29a188,
+	0x29a184
+};
+
+static struct attn_hw_reg *pswwr_int_bb_b0_regs[1] = {
+	&pswwr_int0_bb_b0,
+};
+
+static const u16 pswwr_int0_k2_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg pswwr_int0_k2 = {
+	0, 16, pswwr_int0_k2_attn_idx, 0x29a180, 0x29a18c, 0x29a188, 0x29a184
+};
+
+static struct attn_hw_reg *pswwr_int_k2_regs[1] = {
+	&pswwr_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswwr_prty_attn_desc[1] = {
+	"pswwr_datapath_registers",
+};
+#else
+#define pswwr_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswwr_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswwr_prty0_bb_b0 = {
+	0, 1, pswwr_prty0_bb_b0_attn_idx, 0x29a190, 0x29a19c, 0x29a198,
+	0x29a194
+};
+
+static struct attn_hw_reg *pswwr_prty_bb_b0_regs[1] = {
+	&pswwr_prty0_bb_b0,
+};
+
+static const u16 pswwr_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswwr_prty0_k2 = {
+	0, 1, pswwr_prty0_k2_attn_idx, 0x29a190, 0x29a19c, 0x29a198, 0x29a194
+};
+
+static struct attn_hw_reg *pswwr_prty_k2_regs[1] = {
+	&pswwr_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswwr2_int_attn_desc[19] = {
+	"pswwr2_address_error",
+	"pswwr2_pglue_eop_error",
+	"pswwr2_pglue_lsr_error",
+	"pswwr2_tm_underflow",
+	"pswwr2_qm_underflow",
+	"pswwr2_src_underflow",
+	"pswwr2_usdm_underflow",
+	"pswwr2_tsdm_underflow",
+	"pswwr2_xsdm_underflow",
+	"pswwr2_usdmdp_underflow",
+	"pswwr2_cdu_underflow",
+	"pswwr2_dbg_underflow",
+	"pswwr2_dmae_underflow",
+	"pswwr2_hc_underflow",
+	"pswwr2_msdm_underflow",
+	"pswwr2_ysdm_underflow",
+	"pswwr2_psdm_underflow",
+	"pswwr2_m2p_underflow",
+	"pswwr2_pglue_eop_error_in_line",
+};
+#else
+#define pswwr2_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswwr2_int0_bb_a0_attn_idx[19] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pswwr2_int0_bb_a0 = {
+	0, 19, pswwr2_int0_bb_a0_attn_idx, 0x29b180, 0x29b18c, 0x29b188,
+	0x29b184
+};
+
+static struct attn_hw_reg *pswwr2_int_bb_a0_regs[1] = {
+	&pswwr2_int0_bb_a0,
+};
+
+static const u16 pswwr2_int0_bb_b0_attn_idx[19] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pswwr2_int0_bb_b0 = {
+	0, 19, pswwr2_int0_bb_b0_attn_idx, 0x29b180, 0x29b18c, 0x29b188,
+	0x29b184
+};
+
+static struct attn_hw_reg *pswwr2_int_bb_b0_regs[1] = {
+	&pswwr2_int0_bb_b0,
+};
+
+static const u16 pswwr2_int0_k2_attn_idx[19] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pswwr2_int0_k2 = {
+	0, 19, pswwr2_int0_k2_attn_idx, 0x29b180, 0x29b18c, 0x29b188, 0x29b184
+};
+
+static struct attn_hw_reg *pswwr2_int_k2_regs[1] = {
+	&pswwr2_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswwr2_prty_attn_desc[114] = {
+	"pswwr2_datapath_registers",
+	"pswwr2_mem008_i_ecc_rf_int",
+	"pswwr2_mem001_i_mem_prty",
+	"pswwr2_mem014_i_mem_prty_0",
+	"pswwr2_mem014_i_mem_prty_1",
+	"pswwr2_mem014_i_mem_prty_2",
+	"pswwr2_mem014_i_mem_prty_3",
+	"pswwr2_mem014_i_mem_prty_4",
+	"pswwr2_mem014_i_mem_prty_5",
+	"pswwr2_mem014_i_mem_prty_6",
+	"pswwr2_mem014_i_mem_prty_7",
+	"pswwr2_mem014_i_mem_prty_8",
+	"pswwr2_mem016_i_mem_prty_0",
+	"pswwr2_mem016_i_mem_prty_1",
+	"pswwr2_mem016_i_mem_prty_2",
+	"pswwr2_mem016_i_mem_prty_3",
+	"pswwr2_mem016_i_mem_prty_4",
+	"pswwr2_mem016_i_mem_prty_5",
+	"pswwr2_mem016_i_mem_prty_6",
+	"pswwr2_mem016_i_mem_prty_7",
+	"pswwr2_mem016_i_mem_prty_8",
+	"pswwr2_mem007_i_mem_prty_0",
+	"pswwr2_mem007_i_mem_prty_1",
+	"pswwr2_mem007_i_mem_prty_2",
+	"pswwr2_mem007_i_mem_prty_3",
+	"pswwr2_mem007_i_mem_prty_4",
+	"pswwr2_mem007_i_mem_prty_5",
+	"pswwr2_mem007_i_mem_prty_6",
+	"pswwr2_mem007_i_mem_prty_7",
+	"pswwr2_mem007_i_mem_prty_8",
+	"pswwr2_mem017_i_mem_prty_0",
+	"pswwr2_mem017_i_mem_prty_1",
+	"pswwr2_mem017_i_mem_prty_2",
+	"pswwr2_mem017_i_mem_prty_3",
+	"pswwr2_mem017_i_mem_prty_4",
+	"pswwr2_mem017_i_mem_prty_5",
+	"pswwr2_mem017_i_mem_prty_6",
+	"pswwr2_mem017_i_mem_prty_7",
+	"pswwr2_mem017_i_mem_prty_8",
+	"pswwr2_mem009_i_mem_prty_0",
+	"pswwr2_mem009_i_mem_prty_1",
+	"pswwr2_mem009_i_mem_prty_2",
+	"pswwr2_mem009_i_mem_prty_3",
+	"pswwr2_mem009_i_mem_prty_4",
+	"pswwr2_mem009_i_mem_prty_5",
+	"pswwr2_mem009_i_mem_prty_6",
+	"pswwr2_mem009_i_mem_prty_7",
+	"pswwr2_mem009_i_mem_prty_8",
+	"pswwr2_mem013_i_mem_prty_0",
+	"pswwr2_mem013_i_mem_prty_1",
+	"pswwr2_mem013_i_mem_prty_2",
+	"pswwr2_mem013_i_mem_prty_3",
+	"pswwr2_mem013_i_mem_prty_4",
+	"pswwr2_mem013_i_mem_prty_5",
+	"pswwr2_mem013_i_mem_prty_6",
+	"pswwr2_mem013_i_mem_prty_7",
+	"pswwr2_mem013_i_mem_prty_8",
+	"pswwr2_mem006_i_mem_prty_0",
+	"pswwr2_mem006_i_mem_prty_1",
+	"pswwr2_mem006_i_mem_prty_2",
+	"pswwr2_mem006_i_mem_prty_3",
+	"pswwr2_mem006_i_mem_prty_4",
+	"pswwr2_mem006_i_mem_prty_5",
+	"pswwr2_mem006_i_mem_prty_6",
+	"pswwr2_mem006_i_mem_prty_7",
+	"pswwr2_mem006_i_mem_prty_8",
+	"pswwr2_mem010_i_mem_prty_0",
+	"pswwr2_mem010_i_mem_prty_1",
+	"pswwr2_mem010_i_mem_prty_2",
+	"pswwr2_mem010_i_mem_prty_3",
+	"pswwr2_mem010_i_mem_prty_4",
+	"pswwr2_mem010_i_mem_prty_5",
+	"pswwr2_mem010_i_mem_prty_6",
+	"pswwr2_mem010_i_mem_prty_7",
+	"pswwr2_mem010_i_mem_prty_8",
+	"pswwr2_mem012_i_mem_prty",
+	"pswwr2_mem011_i_mem_prty_0",
+	"pswwr2_mem011_i_mem_prty_1",
+	"pswwr2_mem011_i_mem_prty_2",
+	"pswwr2_mem011_i_mem_prty_3",
+	"pswwr2_mem011_i_mem_prty_4",
+	"pswwr2_mem011_i_mem_prty_5",
+	"pswwr2_mem011_i_mem_prty_6",
+	"pswwr2_mem011_i_mem_prty_7",
+	"pswwr2_mem011_i_mem_prty_8",
+	"pswwr2_mem004_i_mem_prty_0",
+	"pswwr2_mem004_i_mem_prty_1",
+	"pswwr2_mem004_i_mem_prty_2",
+	"pswwr2_mem004_i_mem_prty_3",
+	"pswwr2_mem004_i_mem_prty_4",
+	"pswwr2_mem004_i_mem_prty_5",
+	"pswwr2_mem004_i_mem_prty_6",
+	"pswwr2_mem004_i_mem_prty_7",
+	"pswwr2_mem004_i_mem_prty_8",
+	"pswwr2_mem015_i_mem_prty_0",
+	"pswwr2_mem015_i_mem_prty_1",
+	"pswwr2_mem015_i_mem_prty_2",
+	"pswwr2_mem005_i_mem_prty_0",
+	"pswwr2_mem005_i_mem_prty_1",
+	"pswwr2_mem005_i_mem_prty_2",
+	"pswwr2_mem005_i_mem_prty_3",
+	"pswwr2_mem005_i_mem_prty_4",
+	"pswwr2_mem005_i_mem_prty_5",
+	"pswwr2_mem005_i_mem_prty_6",
+	"pswwr2_mem005_i_mem_prty_7",
+	"pswwr2_mem005_i_mem_prty_8",
+	"pswwr2_mem002_i_mem_prty_0",
+	"pswwr2_mem002_i_mem_prty_1",
+	"pswwr2_mem002_i_mem_prty_2",
+	"pswwr2_mem002_i_mem_prty_3",
+	"pswwr2_mem002_i_mem_prty_4",
+	"pswwr2_mem003_i_mem_prty_0",
+	"pswwr2_mem003_i_mem_prty_1",
+	"pswwr2_mem003_i_mem_prty_2",
+};
+#else
+#define pswwr2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswwr2_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pswwr2_prty1_bb_a0 = {
+	0, 31, pswwr2_prty1_bb_a0_attn_idx, 0x29b200, 0x29b20c, 0x29b208,
+	0x29b204
+};
+
+static const u16 pswwr2_prty2_bb_a0_attn_idx[31] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62,
+};
+
+static struct attn_hw_reg pswwr2_prty2_bb_a0 = {
+	1, 31, pswwr2_prty2_bb_a0_attn_idx, 0x29b210, 0x29b21c, 0x29b218,
+	0x29b214
+};
+
+static const u16 pswwr2_prty3_bb_a0_attn_idx[31] = {
+	63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93,
+};
+
+static struct attn_hw_reg pswwr2_prty3_bb_a0 = {
+	2, 31, pswwr2_prty3_bb_a0_attn_idx, 0x29b220, 0x29b22c, 0x29b228,
+	0x29b224
+};
+
+static const u16 pswwr2_prty4_bb_a0_attn_idx[20] = {
+	94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108,
+	109, 110, 111, 112, 113,
+};
+
+static struct attn_hw_reg pswwr2_prty4_bb_a0 = {
+	3, 20, pswwr2_prty4_bb_a0_attn_idx, 0x29b230, 0x29b23c, 0x29b238,
+	0x29b234
+};
+
+static struct attn_hw_reg *pswwr2_prty_bb_a0_regs[4] = {
+	&pswwr2_prty1_bb_a0, &pswwr2_prty2_bb_a0, &pswwr2_prty3_bb_a0,
+	&pswwr2_prty4_bb_a0,
+};
+
+static const u16 pswwr2_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswwr2_prty0_bb_b0 = {
+	0, 1, pswwr2_prty0_bb_b0_attn_idx, 0x29b190, 0x29b19c, 0x29b198,
+	0x29b194
+};
+
+static const u16 pswwr2_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pswwr2_prty1_bb_b0 = {
+	1, 31, pswwr2_prty1_bb_b0_attn_idx, 0x29b200, 0x29b20c, 0x29b208,
+	0x29b204
+};
+
+static const u16 pswwr2_prty2_bb_b0_attn_idx[31] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62,
+};
+
+static struct attn_hw_reg pswwr2_prty2_bb_b0 = {
+	2, 31, pswwr2_prty2_bb_b0_attn_idx, 0x29b210, 0x29b21c, 0x29b218,
+	0x29b214
+};
+
+static const u16 pswwr2_prty3_bb_b0_attn_idx[31] = {
+	63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93,
+};
+
+static struct attn_hw_reg pswwr2_prty3_bb_b0 = {
+	3, 31, pswwr2_prty3_bb_b0_attn_idx, 0x29b220, 0x29b22c, 0x29b228,
+	0x29b224
+};
+
+static const u16 pswwr2_prty4_bb_b0_attn_idx[20] = {
+	94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108,
+	109, 110, 111, 112, 113,
+};
+
+static struct attn_hw_reg pswwr2_prty4_bb_b0 = {
+	4, 20, pswwr2_prty4_bb_b0_attn_idx, 0x29b230, 0x29b23c, 0x29b238,
+	0x29b234
+};
+
+static struct attn_hw_reg *pswwr2_prty_bb_b0_regs[5] = {
+	&pswwr2_prty0_bb_b0, &pswwr2_prty1_bb_b0, &pswwr2_prty2_bb_b0,
+	&pswwr2_prty3_bb_b0, &pswwr2_prty4_bb_b0,
+};
+
+static const u16 pswwr2_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswwr2_prty0_k2 = {
+	0, 1, pswwr2_prty0_k2_attn_idx, 0x29b190, 0x29b19c, 0x29b198, 0x29b194
+};
+
+static const u16 pswwr2_prty1_k2_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pswwr2_prty1_k2 = {
+	1, 31, pswwr2_prty1_k2_attn_idx, 0x29b200, 0x29b20c, 0x29b208, 0x29b204
+};
+
+static const u16 pswwr2_prty2_k2_attn_idx[31] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62,
+};
+
+static struct attn_hw_reg pswwr2_prty2_k2 = {
+	2, 31, pswwr2_prty2_k2_attn_idx, 0x29b210, 0x29b21c, 0x29b218, 0x29b214
+};
+
+static const u16 pswwr2_prty3_k2_attn_idx[31] = {
+	63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93,
+};
+
+static struct attn_hw_reg pswwr2_prty3_k2 = {
+	3, 31, pswwr2_prty3_k2_attn_idx, 0x29b220, 0x29b22c, 0x29b228, 0x29b224
+};
+
+static const u16 pswwr2_prty4_k2_attn_idx[20] = {
+	94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108,
+	109, 110, 111, 112, 113,
+};
+
+static struct attn_hw_reg pswwr2_prty4_k2 = {
+	4, 20, pswwr2_prty4_k2_attn_idx, 0x29b230, 0x29b23c, 0x29b238, 0x29b234
+};
+
+static struct attn_hw_reg *pswwr2_prty_k2_regs[5] = {
+	&pswwr2_prty0_k2, &pswwr2_prty1_k2, &pswwr2_prty2_k2, &pswwr2_prty3_k2,
+	&pswwr2_prty4_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrq_int_attn_desc[21] = {
+	"pswrq_address_error",
+	"pswrq_pbf_fifo_overflow",
+	"pswrq_src_fifo_overflow",
+	"pswrq_qm_fifo_overflow",
+	"pswrq_tm_fifo_overflow",
+	"pswrq_usdm_fifo_overflow",
+	"pswrq_m2p_fifo_overflow",
+	"pswrq_xsdm_fifo_overflow",
+	"pswrq_tsdm_fifo_overflow",
+	"pswrq_ptu_fifo_overflow",
+	"pswrq_cduwr_fifo_overflow",
+	"pswrq_cdurd_fifo_overflow",
+	"pswrq_dmae_fifo_overflow",
+	"pswrq_hc_fifo_overflow",
+	"pswrq_dbg_fifo_overflow",
+	"pswrq_msdm_fifo_overflow",
+	"pswrq_ysdm_fifo_overflow",
+	"pswrq_psdm_fifo_overflow",
+	"pswrq_prm_fifo_overflow",
+	"pswrq_muld_fifo_overflow",
+	"pswrq_xyld_fifo_overflow",
+};
+#else
+#define pswrq_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrq_int0_bb_a0_attn_idx[21] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+};
+
+static struct attn_hw_reg pswrq_int0_bb_a0 = {
+	0, 21, pswrq_int0_bb_a0_attn_idx, 0x280180, 0x28018c, 0x280188,
+	0x280184
+};
+
+static struct attn_hw_reg *pswrq_int_bb_a0_regs[1] = {
+	&pswrq_int0_bb_a0,
+};
+
+static const u16 pswrq_int0_bb_b0_attn_idx[21] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+};
+
+static struct attn_hw_reg pswrq_int0_bb_b0 = {
+	0, 21, pswrq_int0_bb_b0_attn_idx, 0x280180, 0x28018c, 0x280188,
+	0x280184
+};
+
+static struct attn_hw_reg *pswrq_int_bb_b0_regs[1] = {
+	&pswrq_int0_bb_b0,
+};
+
+static const u16 pswrq_int0_k2_attn_idx[21] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+};
+
+static struct attn_hw_reg pswrq_int0_k2 = {
+	0, 21, pswrq_int0_k2_attn_idx, 0x280180, 0x28018c, 0x280188, 0x280184
+};
+
+static struct attn_hw_reg *pswrq_int_k2_regs[1] = {
+	&pswrq_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrq_prty_attn_desc[1] = {
+	"pswrq_pxp_busip_parity",
+};
+#else
+#define pswrq_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrq_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrq_prty0_bb_b0 = {
+	0, 1, pswrq_prty0_bb_b0_attn_idx, 0x280190, 0x28019c, 0x280198,
+	0x280194
+};
+
+static struct attn_hw_reg *pswrq_prty_bb_b0_regs[1] = {
+	&pswrq_prty0_bb_b0,
+};
+
+static const u16 pswrq_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pswrq_prty0_k2 = {
+	0, 1, pswrq_prty0_k2_attn_idx, 0x280190, 0x28019c, 0x280198, 0x280194
+};
+
+static struct attn_hw_reg *pswrq_prty_k2_regs[1] = {
+	&pswrq_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrq2_int_attn_desc[15] = {
+	"pswrq2_address_error",
+	"pswrq2_l2p_fifo_overflow",
+	"pswrq2_wdfifo_overflow",
+	"pswrq2_phyaddr_fifo_of",
+	"pswrq2_l2p_violation_1",
+	"pswrq2_l2p_violation_2",
+	"pswrq2_free_list_empty",
+	"pswrq2_elt_addr",
+	"pswrq2_l2p_vf_err",
+	"pswrq2_core_wdone_overflow",
+	"pswrq2_treq_fifo_underflow",
+	"pswrq2_treq_fifo_overflow",
+	"pswrq2_icpl_fifo_underflow",
+	"pswrq2_icpl_fifo_overflow",
+	"pswrq2_back2back_atc_response",
+};
+#else
+#define pswrq2_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrq2_int0_bb_a0_attn_idx[15] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+};
+
+static struct attn_hw_reg pswrq2_int0_bb_a0 = {
+	0, 15, pswrq2_int0_bb_a0_attn_idx, 0x240180, 0x24018c, 0x240188,
+	0x240184
+};
+
+static struct attn_hw_reg *pswrq2_int_bb_a0_regs[1] = {
+	&pswrq2_int0_bb_a0,
+};
+
+static const u16 pswrq2_int0_bb_b0_attn_idx[15] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+};
+
+static struct attn_hw_reg pswrq2_int0_bb_b0 = {
+	0, 15, pswrq2_int0_bb_b0_attn_idx, 0x240180, 0x24018c, 0x240188,
+	0x240184
+};
+
+static struct attn_hw_reg *pswrq2_int_bb_b0_regs[1] = {
+	&pswrq2_int0_bb_b0,
+};
+
+static const u16 pswrq2_int0_k2_attn_idx[15] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+};
+
+static struct attn_hw_reg pswrq2_int0_k2 = {
+	0, 15, pswrq2_int0_k2_attn_idx, 0x240180, 0x24018c, 0x240188, 0x240184
+};
+
+static struct attn_hw_reg *pswrq2_int_k2_regs[1] = {
+	&pswrq2_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pswrq2_prty_attn_desc[11] = {
+	"pswrq2_mem004_i_ecc_rf_int",
+	"pswrq2_mem005_i_ecc_rf_int",
+	"pswrq2_mem001_i_ecc_rf_int",
+	"pswrq2_mem006_i_mem_prty",
+	"pswrq2_mem008_i_mem_prty",
+	"pswrq2_mem009_i_mem_prty",
+	"pswrq2_mem003_i_mem_prty",
+	"pswrq2_mem002_i_mem_prty",
+	"pswrq2_mem010_i_mem_prty",
+	"pswrq2_mem007_i_mem_prty",
+	"pswrq2_mem005_i_mem_prty",
+};
+#else
+#define pswrq2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pswrq2_prty1_bb_a0_attn_idx[9] = {
+	0, 2, 3, 4, 5, 6, 7, 9, 10,
+};
+
+static struct attn_hw_reg pswrq2_prty1_bb_a0 = {
+	0, 9, pswrq2_prty1_bb_a0_attn_idx, 0x240200, 0x24020c, 0x240208,
+	0x240204
+};
+
+static struct attn_hw_reg *pswrq2_prty_bb_a0_regs[1] = {
+	&pswrq2_prty1_bb_a0,
+};
+
+static const u16 pswrq2_prty1_bb_b0_attn_idx[9] = {
+	0, 2, 3, 4, 5, 6, 7, 9, 10,
+};
+
+static struct attn_hw_reg pswrq2_prty1_bb_b0 = {
+	0, 9, pswrq2_prty1_bb_b0_attn_idx, 0x240200, 0x24020c, 0x240208,
+	0x240204
+};
+
+static struct attn_hw_reg *pswrq2_prty_bb_b0_regs[1] = {
+	&pswrq2_prty1_bb_b0,
+};
+
+static const u16 pswrq2_prty1_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg pswrq2_prty1_k2 = {
+	0, 10, pswrq2_prty1_k2_attn_idx, 0x240200, 0x24020c, 0x240208, 0x240204
+};
+
+static struct attn_hw_reg *pswrq2_prty_k2_regs[1] = {
+	&pswrq2_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pglcs_int_attn_desc[2] = {
+	"pglcs_address_error",
+	"pglcs_rasdp_error",
+};
+#else
+#define pglcs_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pglcs_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pglcs_int0_bb_a0 = {
+	0, 1, pglcs_int0_bb_a0_attn_idx, 0x1d00, 0x1d0c, 0x1d08, 0x1d04
+};
+
+static struct attn_hw_reg *pglcs_int_bb_a0_regs[1] = {
+	&pglcs_int0_bb_a0,
+};
+
+static const u16 pglcs_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pglcs_int0_bb_b0 = {
+	0, 1, pglcs_int0_bb_b0_attn_idx, 0x1d00, 0x1d0c, 0x1d08, 0x1d04
+};
+
+static struct attn_hw_reg *pglcs_int_bb_b0_regs[1] = {
+	&pglcs_int0_bb_b0,
+};
+
+static const u16 pglcs_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg pglcs_int0_k2 = {
+	0, 2, pglcs_int0_k2_attn_idx, 0x1d00, 0x1d0c, 0x1d08, 0x1d04
+};
+
+static struct attn_hw_reg *pglcs_int_k2_regs[1] = {
+	&pglcs_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *dmae_int_attn_desc[2] = {
+	"dmae_address_error",
+	"dmae_pci_rd_buf_err",
+};
+#else
+#define dmae_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 dmae_int0_bb_a0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg dmae_int0_bb_a0 = {
+	0, 2, dmae_int0_bb_a0_attn_idx, 0xc180, 0xc18c, 0xc188, 0xc184
+};
+
+static struct attn_hw_reg *dmae_int_bb_a0_regs[1] = {
+	&dmae_int0_bb_a0,
+};
+
+static const u16 dmae_int0_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg dmae_int0_bb_b0 = {
+	0, 2, dmae_int0_bb_b0_attn_idx, 0xc180, 0xc18c, 0xc188, 0xc184
+};
+
+static struct attn_hw_reg *dmae_int_bb_b0_regs[1] = {
+	&dmae_int0_bb_b0,
+};
+
+static const u16 dmae_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg dmae_int0_k2 = {
+	0, 2, dmae_int0_k2_attn_idx, 0xc180, 0xc18c, 0xc188, 0xc184
+};
+
+static struct attn_hw_reg *dmae_int_k2_regs[1] = {
+	&dmae_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *dmae_prty_attn_desc[3] = {
+	"dmae_mem002_i_mem_prty",
+	"dmae_mem001_i_mem_prty",
+	"dmae_mem003_i_mem_prty",
+};
+#else
+#define dmae_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 dmae_prty1_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg dmae_prty1_bb_a0 = {
+	0, 3, dmae_prty1_bb_a0_attn_idx, 0xc200, 0xc20c, 0xc208, 0xc204
+};
+
+static struct attn_hw_reg *dmae_prty_bb_a0_regs[1] = {
+	&dmae_prty1_bb_a0,
+};
+
+static const u16 dmae_prty1_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg dmae_prty1_bb_b0 = {
+	0, 3, dmae_prty1_bb_b0_attn_idx, 0xc200, 0xc20c, 0xc208, 0xc204
+};
+
+static struct attn_hw_reg *dmae_prty_bb_b0_regs[1] = {
+	&dmae_prty1_bb_b0,
+};
+
+static const u16 dmae_prty1_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg dmae_prty1_k2 = {
+	0, 3, dmae_prty1_k2_attn_idx, 0xc200, 0xc20c, 0xc208, 0xc204
+};
+
+static struct attn_hw_reg *dmae_prty_k2_regs[1] = {
+	&dmae_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ptu_int_attn_desc[8] = {
+	"ptu_address_error",
+	"ptu_atc_tcpl_to_not_pend",
+	"ptu_atc_gpa_multiple_hits",
+	"ptu_atc_rcpl_to_empty_cnt",
+	"ptu_atc_tcpl_error",
+	"ptu_atc_inv_halt",
+	"ptu_atc_reuse_transpend",
+	"ptu_atc_ireq_less_than_stu",
+};
+#else
+#define ptu_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ptu_int0_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg ptu_int0_bb_a0 = {
+	0, 8, ptu_int0_bb_a0_attn_idx, 0x560180, 0x56018c, 0x560188, 0x560184
+};
+
+static struct attn_hw_reg *ptu_int_bb_a0_regs[1] = {
+	&ptu_int0_bb_a0,
+};
+
+static const u16 ptu_int0_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg ptu_int0_bb_b0 = {
+	0, 8, ptu_int0_bb_b0_attn_idx, 0x560180, 0x56018c, 0x560188, 0x560184
+};
+
+static struct attn_hw_reg *ptu_int_bb_b0_regs[1] = {
+	&ptu_int0_bb_b0,
+};
+
+static const u16 ptu_int0_k2_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg ptu_int0_k2 = {
+	0, 8, ptu_int0_k2_attn_idx, 0x560180, 0x56018c, 0x560188, 0x560184
+};
+
+static struct attn_hw_reg *ptu_int_k2_regs[1] = {
+	&ptu_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ptu_prty_attn_desc[18] = {
+	"ptu_mem017_i_ecc_rf_int",
+	"ptu_mem018_i_mem_prty",
+	"ptu_mem006_i_mem_prty",
+	"ptu_mem001_i_mem_prty",
+	"ptu_mem002_i_mem_prty",
+	"ptu_mem003_i_mem_prty",
+	"ptu_mem004_i_mem_prty",
+	"ptu_mem005_i_mem_prty",
+	"ptu_mem009_i_mem_prty",
+	"ptu_mem010_i_mem_prty",
+	"ptu_mem016_i_mem_prty",
+	"ptu_mem007_i_mem_prty",
+	"ptu_mem015_i_mem_prty",
+	"ptu_mem013_i_mem_prty",
+	"ptu_mem012_i_mem_prty",
+	"ptu_mem014_i_mem_prty",
+	"ptu_mem011_i_mem_prty",
+	"ptu_mem008_i_mem_prty",
+};
+#else
+#define ptu_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ptu_prty1_bb_a0_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg ptu_prty1_bb_a0 = {
+	0, 18, ptu_prty1_bb_a0_attn_idx, 0x560200, 0x56020c, 0x560208, 0x560204
+};
+
+static struct attn_hw_reg *ptu_prty_bb_a0_regs[1] = {
+	&ptu_prty1_bb_a0,
+};
+
+static const u16 ptu_prty1_bb_b0_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg ptu_prty1_bb_b0 = {
+	0, 18, ptu_prty1_bb_b0_attn_idx, 0x560200, 0x56020c, 0x560208, 0x560204
+};
+
+static struct attn_hw_reg *ptu_prty_bb_b0_regs[1] = {
+	&ptu_prty1_bb_b0,
+};
+
+static const u16 ptu_prty1_k2_attn_idx[18] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg ptu_prty1_k2 = {
+	0, 18, ptu_prty1_k2_attn_idx, 0x560200, 0x56020c, 0x560208, 0x560204
+};
+
+static struct attn_hw_reg *ptu_prty_k2_regs[1] = {
+	&ptu_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tcm_int_attn_desc[41] = {
+	"tcm_address_error",
+	"tcm_is_storm_ovfl_err",
+	"tcm_is_storm_under_err",
+	"tcm_is_tsdm_ovfl_err",
+	"tcm_is_tsdm_under_err",
+	"tcm_is_msem_ovfl_err",
+	"tcm_is_msem_under_err",
+	"tcm_is_ysem_ovfl_err",
+	"tcm_is_ysem_under_err",
+	"tcm_is_dorq_ovfl_err",
+	"tcm_is_dorq_under_err",
+	"tcm_is_pbf_ovfl_err",
+	"tcm_is_pbf_under_err",
+	"tcm_is_prs_ovfl_err",
+	"tcm_is_prs_under_err",
+	"tcm_is_tm_ovfl_err",
+	"tcm_is_tm_under_err",
+	"tcm_is_qm_p_ovfl_err",
+	"tcm_is_qm_p_under_err",
+	"tcm_is_qm_s_ovfl_err",
+	"tcm_is_qm_s_under_err",
+	"tcm_is_grc_ovfl_err0",
+	"tcm_is_grc_under_err0",
+	"tcm_is_grc_ovfl_err1",
+	"tcm_is_grc_under_err1",
+	"tcm_is_grc_ovfl_err2",
+	"tcm_is_grc_under_err2",
+	"tcm_is_grc_ovfl_err3",
+	"tcm_is_grc_under_err3",
+	"tcm_in_prcs_tbl_ovfl",
+	"tcm_agg_con_data_buf_ovfl",
+	"tcm_agg_con_cmd_buf_ovfl",
+	"tcm_sm_con_data_buf_ovfl",
+	"tcm_sm_con_cmd_buf_ovfl",
+	"tcm_agg_task_data_buf_ovfl",
+	"tcm_agg_task_cmd_buf_ovfl",
+	"tcm_sm_task_data_buf_ovfl",
+	"tcm_sm_task_cmd_buf_ovfl",
+	"tcm_fi_desc_input_violate",
+	"tcm_se_desc_input_violate",
+	"tcm_qmreg_more4",
+};
+#else
+#define tcm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tcm_int0_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tcm_int0_bb_a0 = {
+	0, 8, tcm_int0_bb_a0_attn_idx, 0x1180180, 0x118018c, 0x1180188,
+	0x1180184
+};
+
+static const u16 tcm_int1_bb_a0_attn_idx[32] = {
+	8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg tcm_int1_bb_a0 = {
+	1, 32, tcm_int1_bb_a0_attn_idx, 0x1180190, 0x118019c, 0x1180198,
+	0x1180194
+};
+
+static const u16 tcm_int2_bb_a0_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg tcm_int2_bb_a0 = {
+	2, 1, tcm_int2_bb_a0_attn_idx, 0x11801a0, 0x11801ac, 0x11801a8,
+	0x11801a4
+};
+
+static struct attn_hw_reg *tcm_int_bb_a0_regs[3] = {
+	&tcm_int0_bb_a0, &tcm_int1_bb_a0, &tcm_int2_bb_a0,
+};
+
+static const u16 tcm_int0_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tcm_int0_bb_b0 = {
+	0, 8, tcm_int0_bb_b0_attn_idx, 0x1180180, 0x118018c, 0x1180188,
+	0x1180184
+};
+
+static const u16 tcm_int1_bb_b0_attn_idx[32] = {
+	8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg tcm_int1_bb_b0 = {
+	1, 32, tcm_int1_bb_b0_attn_idx, 0x1180190, 0x118019c, 0x1180198,
+	0x1180194
+};
+
+static const u16 tcm_int2_bb_b0_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg tcm_int2_bb_b0 = {
+	2, 1, tcm_int2_bb_b0_attn_idx, 0x11801a0, 0x11801ac, 0x11801a8,
+	0x11801a4
+};
+
+static struct attn_hw_reg *tcm_int_bb_b0_regs[3] = {
+	&tcm_int0_bb_b0, &tcm_int1_bb_b0, &tcm_int2_bb_b0,
+};
+
+static const u16 tcm_int0_k2_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tcm_int0_k2 = {
+	0, 8, tcm_int0_k2_attn_idx, 0x1180180, 0x118018c, 0x1180188, 0x1180184
+};
+
+static const u16 tcm_int1_k2_attn_idx[32] = {
+	8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg tcm_int1_k2 = {
+	1, 32, tcm_int1_k2_attn_idx, 0x1180190, 0x118019c, 0x1180198, 0x1180194
+};
+
+static const u16 tcm_int2_k2_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg tcm_int2_k2 = {
+	2, 1, tcm_int2_k2_attn_idx, 0x11801a0, 0x11801ac, 0x11801a8, 0x11801a4
+};
+
+static struct attn_hw_reg *tcm_int_k2_regs[3] = {
+	&tcm_int0_k2, &tcm_int1_k2, &tcm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tcm_prty_attn_desc[51] = {
+	"tcm_mem026_i_ecc_rf_int",
+	"tcm_mem003_i_ecc_0_rf_int",
+	"tcm_mem003_i_ecc_1_rf_int",
+	"tcm_mem022_i_ecc_0_rf_int",
+	"tcm_mem022_i_ecc_1_rf_int",
+	"tcm_mem005_i_ecc_0_rf_int",
+	"tcm_mem005_i_ecc_1_rf_int",
+	"tcm_mem024_i_ecc_0_rf_int",
+	"tcm_mem024_i_ecc_1_rf_int",
+	"tcm_mem018_i_mem_prty",
+	"tcm_mem019_i_mem_prty",
+	"tcm_mem015_i_mem_prty",
+	"tcm_mem016_i_mem_prty",
+	"tcm_mem017_i_mem_prty",
+	"tcm_mem010_i_mem_prty",
+	"tcm_mem020_i_mem_prty",
+	"tcm_mem011_i_mem_prty",
+	"tcm_mem012_i_mem_prty",
+	"tcm_mem013_i_mem_prty",
+	"tcm_mem014_i_mem_prty",
+	"tcm_mem029_i_mem_prty",
+	"tcm_mem028_i_mem_prty",
+	"tcm_mem027_i_mem_prty",
+	"tcm_mem004_i_mem_prty",
+	"tcm_mem023_i_mem_prty",
+	"tcm_mem006_i_mem_prty",
+	"tcm_mem025_i_mem_prty",
+	"tcm_mem021_i_mem_prty",
+	"tcm_mem007_i_mem_prty_0",
+	"tcm_mem007_i_mem_prty_1",
+	"tcm_mem008_i_mem_prty",
+	"tcm_mem025_i_ecc_rf_int",
+	"tcm_mem021_i_ecc_0_rf_int",
+	"tcm_mem021_i_ecc_1_rf_int",
+	"tcm_mem023_i_ecc_0_rf_int",
+	"tcm_mem023_i_ecc_1_rf_int",
+	"tcm_mem026_i_mem_prty",
+	"tcm_mem022_i_mem_prty",
+	"tcm_mem024_i_mem_prty",
+	"tcm_mem009_i_mem_prty",
+	"tcm_mem024_i_ecc_rf_int",
+	"tcm_mem001_i_ecc_0_rf_int",
+	"tcm_mem001_i_ecc_1_rf_int",
+	"tcm_mem019_i_ecc_0_rf_int",
+	"tcm_mem019_i_ecc_1_rf_int",
+	"tcm_mem022_i_ecc_rf_int",
+	"tcm_mem002_i_mem_prty",
+	"tcm_mem005_i_mem_prty_0",
+	"tcm_mem005_i_mem_prty_1",
+	"tcm_mem001_i_mem_prty",
+	"tcm_mem007_i_mem_prty",
+};
+#else
+#define tcm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tcm_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 22, 23, 24, 25, 26, 30, 32,
+	33, 36, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg tcm_prty1_bb_a0 = {
+	0, 31, tcm_prty1_bb_a0_attn_idx, 0x1180200, 0x118020c, 0x1180208,
+	0x1180204
+};
+
+static const u16 tcm_prty2_bb_a0_attn_idx[3] = {
+	50, 21, 20,
+};
+
+static struct attn_hw_reg tcm_prty2_bb_a0 = {
+	1, 3, tcm_prty2_bb_a0_attn_idx, 0x1180210, 0x118021c, 0x1180218,
+	0x1180214
+};
+
+static struct attn_hw_reg *tcm_prty_bb_a0_regs[2] = {
+	&tcm_prty1_bb_a0, &tcm_prty2_bb_a0,
+};
+
+static const u16 tcm_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 5, 6, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 25,
+	28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg tcm_prty1_bb_b0 = {
+	0, 31, tcm_prty1_bb_b0_attn_idx, 0x1180200, 0x118020c, 0x1180208,
+	0x1180204
+};
+
+static const u16 tcm_prty2_bb_b0_attn_idx[2] = {
+	49, 46,
+};
+
+static struct attn_hw_reg tcm_prty2_bb_b0 = {
+	1, 2, tcm_prty2_bb_b0_attn_idx, 0x1180210, 0x118021c, 0x1180218,
+	0x1180214
+};
+
+static struct attn_hw_reg *tcm_prty_bb_b0_regs[2] = {
+	&tcm_prty1_bb_b0, &tcm_prty2_bb_b0,
+};
+
+static const u16 tcm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg tcm_prty1_k2 = {
+	0, 31, tcm_prty1_k2_attn_idx, 0x1180200, 0x118020c, 0x1180208,
+	0x1180204
+};
+
+static const u16 tcm_prty2_k2_attn_idx[3] = {
+	39, 49, 46,
+};
+
+static struct attn_hw_reg tcm_prty2_k2 = {
+	1, 3, tcm_prty2_k2_attn_idx, 0x1180210, 0x118021c, 0x1180218, 0x1180214
+};
+
+static struct attn_hw_reg *tcm_prty_k2_regs[2] = {
+	&tcm_prty1_k2, &tcm_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *mcm_int_attn_desc[41] = {
+	"mcm_address_error",
+	"mcm_is_storm_ovfl_err",
+	"mcm_is_storm_under_err",
+	"mcm_is_msdm_ovfl_err",
+	"mcm_is_msdm_under_err",
+	"mcm_is_ysdm_ovfl_err",
+	"mcm_is_ysdm_under_err",
+	"mcm_is_usdm_ovfl_err",
+	"mcm_is_usdm_under_err",
+	"mcm_is_tmld_ovfl_err",
+	"mcm_is_tmld_under_err",
+	"mcm_is_usem_ovfl_err",
+	"mcm_is_usem_under_err",
+	"mcm_is_ysem_ovfl_err",
+	"mcm_is_ysem_under_err",
+	"mcm_is_pbf_ovfl_err",
+	"mcm_is_pbf_under_err",
+	"mcm_is_qm_p_ovfl_err",
+	"mcm_is_qm_p_under_err",
+	"mcm_is_qm_s_ovfl_err",
+	"mcm_is_qm_s_under_err",
+	"mcm_is_grc_ovfl_err0",
+	"mcm_is_grc_under_err0",
+	"mcm_is_grc_ovfl_err1",
+	"mcm_is_grc_under_err1",
+	"mcm_is_grc_ovfl_err2",
+	"mcm_is_grc_under_err2",
+	"mcm_is_grc_ovfl_err3",
+	"mcm_is_grc_under_err3",
+	"mcm_in_prcs_tbl_ovfl",
+	"mcm_agg_con_data_buf_ovfl",
+	"mcm_agg_con_cmd_buf_ovfl",
+	"mcm_sm_con_data_buf_ovfl",
+	"mcm_sm_con_cmd_buf_ovfl",
+	"mcm_agg_task_data_buf_ovfl",
+	"mcm_agg_task_cmd_buf_ovfl",
+	"mcm_sm_task_data_buf_ovfl",
+	"mcm_sm_task_cmd_buf_ovfl",
+	"mcm_fi_desc_input_violate",
+	"mcm_se_desc_input_violate",
+	"mcm_qmreg_more4",
+};
+#else
+#define mcm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 mcm_int0_bb_a0_attn_idx[14] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg mcm_int0_bb_a0 = {
+	0, 14, mcm_int0_bb_a0_attn_idx, 0x1200180, 0x120018c, 0x1200188,
+	0x1200184
+};
+
+static const u16 mcm_int1_bb_a0_attn_idx[26] = {
+	14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+	32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg mcm_int1_bb_a0 = {
+	1, 26, mcm_int1_bb_a0_attn_idx, 0x1200190, 0x120019c, 0x1200198,
+	0x1200194
+};
+
+static const u16 mcm_int2_bb_a0_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg mcm_int2_bb_a0 = {
+	2, 1, mcm_int2_bb_a0_attn_idx, 0x12001a0, 0x12001ac, 0x12001a8,
+	0x12001a4
+};
+
+static struct attn_hw_reg *mcm_int_bb_a0_regs[3] = {
+	&mcm_int0_bb_a0, &mcm_int1_bb_a0, &mcm_int2_bb_a0,
+};
+
+static const u16 mcm_int0_bb_b0_attn_idx[14] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg mcm_int0_bb_b0 = {
+	0, 14, mcm_int0_bb_b0_attn_idx, 0x1200180, 0x120018c, 0x1200188,
+	0x1200184
+};
+
+static const u16 mcm_int1_bb_b0_attn_idx[26] = {
+	14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+	32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg mcm_int1_bb_b0 = {
+	1, 26, mcm_int1_bb_b0_attn_idx, 0x1200190, 0x120019c, 0x1200198,
+	0x1200194
+};
+
+static const u16 mcm_int2_bb_b0_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg mcm_int2_bb_b0 = {
+	2, 1, mcm_int2_bb_b0_attn_idx, 0x12001a0, 0x12001ac, 0x12001a8,
+	0x12001a4
+};
+
+static struct attn_hw_reg *mcm_int_bb_b0_regs[3] = {
+	&mcm_int0_bb_b0, &mcm_int1_bb_b0, &mcm_int2_bb_b0,
+};
+
+static const u16 mcm_int0_k2_attn_idx[14] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg mcm_int0_k2 = {
+	0, 14, mcm_int0_k2_attn_idx, 0x1200180, 0x120018c, 0x1200188, 0x1200184
+};
+
+static const u16 mcm_int1_k2_attn_idx[26] = {
+	14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+	32, 33, 34, 35, 36, 37, 38, 39,
+};
+
+static struct attn_hw_reg mcm_int1_k2 = {
+	1, 26, mcm_int1_k2_attn_idx, 0x1200190, 0x120019c, 0x1200198, 0x1200194
+};
+
+static const u16 mcm_int2_k2_attn_idx[1] = {
+	40,
+};
+
+static struct attn_hw_reg mcm_int2_k2 = {
+	2, 1, mcm_int2_k2_attn_idx, 0x12001a0, 0x12001ac, 0x12001a8, 0x12001a4
+};
+
+static struct attn_hw_reg *mcm_int_k2_regs[3] = {
+	&mcm_int0_k2, &mcm_int1_k2, &mcm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *mcm_prty_attn_desc[46] = {
+	"mcm_mem028_i_ecc_rf_int",
+	"mcm_mem003_i_ecc_rf_int",
+	"mcm_mem023_i_ecc_0_rf_int",
+	"mcm_mem023_i_ecc_1_rf_int",
+	"mcm_mem005_i_ecc_0_rf_int",
+	"mcm_mem005_i_ecc_1_rf_int",
+	"mcm_mem025_i_ecc_0_rf_int",
+	"mcm_mem025_i_ecc_1_rf_int",
+	"mcm_mem026_i_ecc_rf_int",
+	"mcm_mem017_i_mem_prty",
+	"mcm_mem019_i_mem_prty",
+	"mcm_mem016_i_mem_prty",
+	"mcm_mem015_i_mem_prty",
+	"mcm_mem020_i_mem_prty",
+	"mcm_mem021_i_mem_prty",
+	"mcm_mem018_i_mem_prty",
+	"mcm_mem011_i_mem_prty",
+	"mcm_mem012_i_mem_prty",
+	"mcm_mem013_i_mem_prty",
+	"mcm_mem014_i_mem_prty",
+	"mcm_mem031_i_mem_prty",
+	"mcm_mem030_i_mem_prty",
+	"mcm_mem029_i_mem_prty",
+	"mcm_mem004_i_mem_prty",
+	"mcm_mem024_i_mem_prty",
+	"mcm_mem006_i_mem_prty",
+	"mcm_mem027_i_mem_prty",
+	"mcm_mem022_i_mem_prty",
+	"mcm_mem007_i_mem_prty_0",
+	"mcm_mem007_i_mem_prty_1",
+	"mcm_mem008_i_mem_prty",
+	"mcm_mem001_i_ecc_rf_int",
+	"mcm_mem021_i_ecc_0_rf_int",
+	"mcm_mem021_i_ecc_1_rf_int",
+	"mcm_mem003_i_ecc_0_rf_int",
+	"mcm_mem003_i_ecc_1_rf_int",
+	"mcm_mem024_i_ecc_rf_int",
+	"mcm_mem009_i_mem_prty",
+	"mcm_mem010_i_mem_prty",
+	"mcm_mem028_i_mem_prty",
+	"mcm_mem002_i_mem_prty",
+	"mcm_mem025_i_mem_prty",
+	"mcm_mem005_i_mem_prty_0",
+	"mcm_mem005_i_mem_prty_1",
+	"mcm_mem001_i_mem_prty",
+	"mcm_mem007_i_mem_prty",
+};
+#else
+#define mcm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 mcm_prty1_bb_a0_attn_idx[31] = {
+	2, 3, 8, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 22, 23, 25, 26, 27, 31,
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
+};
+
+static struct attn_hw_reg mcm_prty1_bb_a0 = {
+	0, 31, mcm_prty1_bb_a0_attn_idx, 0x1200200, 0x120020c, 0x1200208,
+	0x1200204
+};
+
+static const u16 mcm_prty2_bb_a0_attn_idx[4] = {
+	45, 30, 21, 20,
+};
+
+static struct attn_hw_reg mcm_prty2_bb_a0 = {
+	1, 4, mcm_prty2_bb_a0_attn_idx, 0x1200210, 0x120021c, 0x1200218,
+	0x1200214
+};
+
+static struct attn_hw_reg *mcm_prty_bb_a0_regs[2] = {
+	&mcm_prty1_bb_a0, &mcm_prty2_bb_a0,
+};
+
+static const u16 mcm_prty1_bb_b0_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg mcm_prty1_bb_b0 = {
+	0, 31, mcm_prty1_bb_b0_attn_idx, 0x1200200, 0x120020c, 0x1200208,
+	0x1200204
+};
+
+static const u16 mcm_prty2_bb_b0_attn_idx[4] = {
+	37, 38, 44, 40,
+};
+
+static struct attn_hw_reg mcm_prty2_bb_b0 = {
+	1, 4, mcm_prty2_bb_b0_attn_idx, 0x1200210, 0x120021c, 0x1200218,
+	0x1200214
+};
+
+static struct attn_hw_reg *mcm_prty_bb_b0_regs[2] = {
+	&mcm_prty1_bb_b0, &mcm_prty2_bb_b0,
+};
+
+static const u16 mcm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg mcm_prty1_k2 = {
+	0, 31, mcm_prty1_k2_attn_idx, 0x1200200, 0x120020c, 0x1200208,
+	0x1200204
+};
+
+static const u16 mcm_prty2_k2_attn_idx[4] = {
+	37, 38, 44, 40,
+};
+
+static struct attn_hw_reg mcm_prty2_k2 = {
+	1, 4, mcm_prty2_k2_attn_idx, 0x1200210, 0x120021c, 0x1200218, 0x1200214
+};
+
+static struct attn_hw_reg *mcm_prty_k2_regs[2] = {
+	&mcm_prty1_k2, &mcm_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ucm_int_attn_desc[47] = {
+	"ucm_address_error",
+	"ucm_is_storm_ovfl_err",
+	"ucm_is_storm_under_err",
+	"ucm_is_xsdm_ovfl_err",
+	"ucm_is_xsdm_under_err",
+	"ucm_is_ysdm_ovfl_err",
+	"ucm_is_ysdm_under_err",
+	"ucm_is_usdm_ovfl_err",
+	"ucm_is_usdm_under_err",
+	"ucm_is_rdif_ovfl_err",
+	"ucm_is_rdif_under_err",
+	"ucm_is_tdif_ovfl_err",
+	"ucm_is_tdif_under_err",
+	"ucm_is_muld_ovfl_err",
+	"ucm_is_muld_under_err",
+	"ucm_is_yuld_ovfl_err",
+	"ucm_is_yuld_under_err",
+	"ucm_is_dorq_ovfl_err",
+	"ucm_is_dorq_under_err",
+	"ucm_is_pbf_ovfl_err",
+	"ucm_is_pbf_under_err",
+	"ucm_is_tm_ovfl_err",
+	"ucm_is_tm_under_err",
+	"ucm_is_qm_p_ovfl_err",
+	"ucm_is_qm_p_under_err",
+	"ucm_is_qm_s_ovfl_err",
+	"ucm_is_qm_s_under_err",
+	"ucm_is_grc_ovfl_err0",
+	"ucm_is_grc_under_err0",
+	"ucm_is_grc_ovfl_err1",
+	"ucm_is_grc_under_err1",
+	"ucm_is_grc_ovfl_err2",
+	"ucm_is_grc_under_err2",
+	"ucm_is_grc_ovfl_err3",
+	"ucm_is_grc_under_err3",
+	"ucm_in_prcs_tbl_ovfl",
+	"ucm_agg_con_data_buf_ovfl",
+	"ucm_agg_con_cmd_buf_ovfl",
+	"ucm_sm_con_data_buf_ovfl",
+	"ucm_sm_con_cmd_buf_ovfl",
+	"ucm_agg_task_data_buf_ovfl",
+	"ucm_agg_task_cmd_buf_ovfl",
+	"ucm_sm_task_data_buf_ovfl",
+	"ucm_sm_task_cmd_buf_ovfl",
+	"ucm_fi_desc_input_violate",
+	"ucm_se_desc_input_violate",
+	"ucm_qmreg_more4",
+};
+#else
+#define ucm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ucm_int0_bb_a0_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg ucm_int0_bb_a0 = {
+	0, 17, ucm_int0_bb_a0_attn_idx, 0x1280180, 0x128018c, 0x1280188,
+	0x1280184
+};
+
+static const u16 ucm_int1_bb_a0_attn_idx[29] = {
+	17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
+	35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
+};
+
+static struct attn_hw_reg ucm_int1_bb_a0 = {
+	1, 29, ucm_int1_bb_a0_attn_idx, 0x1280190, 0x128019c, 0x1280198,
+	0x1280194
+};
+
+static const u16 ucm_int2_bb_a0_attn_idx[1] = {
+	46,
+};
+
+static struct attn_hw_reg ucm_int2_bb_a0 = {
+	2, 1, ucm_int2_bb_a0_attn_idx, 0x12801a0, 0x12801ac, 0x12801a8,
+	0x12801a4
+};
+
+static struct attn_hw_reg *ucm_int_bb_a0_regs[3] = {
+	&ucm_int0_bb_a0, &ucm_int1_bb_a0, &ucm_int2_bb_a0,
+};
+
+static const u16 ucm_int0_bb_b0_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg ucm_int0_bb_b0 = {
+	0, 17, ucm_int0_bb_b0_attn_idx, 0x1280180, 0x128018c, 0x1280188,
+	0x1280184
+};
+
+static const u16 ucm_int1_bb_b0_attn_idx[29] = {
+	17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
+	35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
+};
+
+static struct attn_hw_reg ucm_int1_bb_b0 = {
+	1, 29, ucm_int1_bb_b0_attn_idx, 0x1280190, 0x128019c, 0x1280198,
+	0x1280194
+};
+
+static const u16 ucm_int2_bb_b0_attn_idx[1] = {
+	46,
+};
+
+static struct attn_hw_reg ucm_int2_bb_b0 = {
+	2, 1, ucm_int2_bb_b0_attn_idx, 0x12801a0, 0x12801ac, 0x12801a8,
+	0x12801a4
+};
+
+static struct attn_hw_reg *ucm_int_bb_b0_regs[3] = {
+	&ucm_int0_bb_b0, &ucm_int1_bb_b0, &ucm_int2_bb_b0,
+};
+
+static const u16 ucm_int0_k2_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg ucm_int0_k2 = {
+	0, 17, ucm_int0_k2_attn_idx, 0x1280180, 0x128018c, 0x1280188, 0x1280184
+};
+
+static const u16 ucm_int1_k2_attn_idx[29] = {
+	17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
+	35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
+};
+
+static struct attn_hw_reg ucm_int1_k2 = {
+	1, 29, ucm_int1_k2_attn_idx, 0x1280190, 0x128019c, 0x1280198, 0x1280194
+};
+
+static const u16 ucm_int2_k2_attn_idx[1] = {
+	46,
+};
+
+static struct attn_hw_reg ucm_int2_k2 = {
+	2, 1, ucm_int2_k2_attn_idx, 0x12801a0, 0x12801ac, 0x12801a8, 0x12801a4
+};
+
+static struct attn_hw_reg *ucm_int_k2_regs[3] = {
+	&ucm_int0_k2, &ucm_int1_k2, &ucm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ucm_prty_attn_desc[54] = {
+	"ucm_mem030_i_ecc_rf_int",
+	"ucm_mem005_i_ecc_0_rf_int",
+	"ucm_mem005_i_ecc_1_rf_int",
+	"ucm_mem024_i_ecc_0_rf_int",
+	"ucm_mem024_i_ecc_1_rf_int",
+	"ucm_mem025_i_ecc_rf_int",
+	"ucm_mem007_i_ecc_0_rf_int",
+	"ucm_mem007_i_ecc_1_rf_int",
+	"ucm_mem008_i_ecc_rf_int",
+	"ucm_mem027_i_ecc_0_rf_int",
+	"ucm_mem027_i_ecc_1_rf_int",
+	"ucm_mem028_i_ecc_rf_int",
+	"ucm_mem020_i_mem_prty",
+	"ucm_mem021_i_mem_prty",
+	"ucm_mem019_i_mem_prty",
+	"ucm_mem013_i_mem_prty",
+	"ucm_mem018_i_mem_prty",
+	"ucm_mem022_i_mem_prty",
+	"ucm_mem014_i_mem_prty",
+	"ucm_mem015_i_mem_prty",
+	"ucm_mem016_i_mem_prty",
+	"ucm_mem017_i_mem_prty",
+	"ucm_mem033_i_mem_prty",
+	"ucm_mem032_i_mem_prty",
+	"ucm_mem031_i_mem_prty",
+	"ucm_mem006_i_mem_prty",
+	"ucm_mem026_i_mem_prty",
+	"ucm_mem009_i_mem_prty",
+	"ucm_mem029_i_mem_prty",
+	"ucm_mem023_i_mem_prty",
+	"ucm_mem010_i_mem_prty_0",
+	"ucm_mem003_i_ecc_0_rf_int",
+	"ucm_mem003_i_ecc_1_rf_int",
+	"ucm_mem022_i_ecc_0_rf_int",
+	"ucm_mem022_i_ecc_1_rf_int",
+	"ucm_mem023_i_ecc_rf_int",
+	"ucm_mem006_i_ecc_rf_int",
+	"ucm_mem025_i_ecc_0_rf_int",
+	"ucm_mem025_i_ecc_1_rf_int",
+	"ucm_mem026_i_ecc_rf_int",
+	"ucm_mem011_i_mem_prty",
+	"ucm_mem012_i_mem_prty",
+	"ucm_mem030_i_mem_prty",
+	"ucm_mem004_i_mem_prty",
+	"ucm_mem024_i_mem_prty",
+	"ucm_mem007_i_mem_prty",
+	"ucm_mem027_i_mem_prty",
+	"ucm_mem008_i_mem_prty_0",
+	"ucm_mem010_i_mem_prty_1",
+	"ucm_mem003_i_mem_prty",
+	"ucm_mem001_i_mem_prty",
+	"ucm_mem002_i_mem_prty",
+	"ucm_mem008_i_mem_prty_1",
+	"ucm_mem010_i_mem_prty",
+};
+#else
+#define ucm_prty_attn_desc OSAL_NULL
+#endif
+
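+/* Each *_attn_idx[] maps a bit position in its status register to an
+ * entry in the matching *_attn_desc[] table; the mapping need not be
+ * contiguous and differs between chip revisions.
+ */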
+static const u16 ucm_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 11, 12, 13, 14, 15, 16, 18, 19, 20, 21, 24, 28, 31, 32, 33, 34,
+	35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
+};
+
+static struct attn_hw_reg ucm_prty1_bb_a0 = {
+	0, 31, ucm_prty1_bb_a0_attn_idx, 0x1280200, 0x128020c, 0x1280208,
+	0x1280204
+};
+
+static const u16 ucm_prty2_bb_a0_attn_idx[7] = {
+	50, 51, 52, 27, 53, 23, 22,
+};
+
+static struct attn_hw_reg ucm_prty2_bb_a0 = {
+	1, 7, ucm_prty2_bb_a0_attn_idx, 0x1280210, 0x128021c, 0x1280218,
+	0x1280214
+};
+
+static struct attn_hw_reg *ucm_prty_bb_a0_regs[2] = {
+	&ucm_prty1_bb_a0, &ucm_prty2_bb_a0,
+};
+
+static const u16 ucm_prty1_bb_b0_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg ucm_prty1_bb_b0 = {
+	0, 31, ucm_prty1_bb_b0_attn_idx, 0x1280200, 0x128020c, 0x1280208,
+	0x1280204
+};
+
+static const u16 ucm_prty2_bb_b0_attn_idx[7] = {
+	48, 40, 41, 49, 43, 50, 51,
+};
+
+static struct attn_hw_reg ucm_prty2_bb_b0 = {
+	1, 7, ucm_prty2_bb_b0_attn_idx, 0x1280210, 0x128021c, 0x1280218,
+	0x1280214
+};
+
+static struct attn_hw_reg *ucm_prty_bb_b0_regs[2] = {
+	&ucm_prty1_bb_b0, &ucm_prty2_bb_b0,
+};
+
+static const u16 ucm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg ucm_prty1_k2 = {
+	0, 31, ucm_prty1_k2_attn_idx, 0x1280200, 0x128020c, 0x1280208,
+	0x1280204
+};
+
+static const u16 ucm_prty2_k2_attn_idx[7] = {
+	48, 40, 41, 49, 43, 50, 51,
+};
+
+static struct attn_hw_reg ucm_prty2_k2 = {
+	1, 7, ucm_prty2_k2_attn_idx, 0x1280210, 0x128021c, 0x1280218, 0x1280214
+};
+
+static struct attn_hw_reg *ucm_prty_k2_regs[2] = {
+	&ucm_prty1_k2, &ucm_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xcm_int_attn_desc[49] = {
+	"xcm_address_error",
+	"xcm_is_storm_ovfl_err",
+	"xcm_is_storm_under_err",
+	"xcm_is_msdm_ovfl_err",
+	"xcm_is_msdm_under_err",
+	"xcm_is_xsdm_ovfl_err",
+	"xcm_is_xsdm_under_err",
+	"xcm_is_ysdm_ovfl_err",
+	"xcm_is_ysdm_under_err",
+	"xcm_is_usdm_ovfl_err",
+	"xcm_is_usdm_under_err",
+	"xcm_is_msem_ovfl_err",
+	"xcm_is_msem_under_err",
+	"xcm_is_usem_ovfl_err",
+	"xcm_is_usem_under_err",
+	"xcm_is_ysem_ovfl_err",
+	"xcm_is_ysem_under_err",
+	"xcm_is_dorq_ovfl_err",
+	"xcm_is_dorq_under_err",
+	"xcm_is_pbf_ovfl_err",
+	"xcm_is_pbf_under_err",
+	"xcm_is_tm_ovfl_err",
+	"xcm_is_tm_under_err",
+	"xcm_is_qm_p_ovfl_err",
+	"xcm_is_qm_p_under_err",
+	"xcm_is_qm_s_ovfl_err",
+	"xcm_is_qm_s_under_err",
+	"xcm_is_grc_ovfl_err0",
+	"xcm_is_grc_under_err0",
+	"xcm_is_grc_ovfl_err1",
+	"xcm_is_grc_under_err1",
+	"xcm_is_grc_ovfl_err2",
+	"xcm_is_grc_under_err2",
+	"xcm_is_grc_ovfl_err3",
+	"xcm_is_grc_under_err3",
+	"xcm_in_prcs_tbl_ovfl",
+	"xcm_agg_con_data_buf_ovfl",
+	"xcm_agg_con_cmd_buf_ovfl",
+	"xcm_sm_con_data_buf_ovfl",
+	"xcm_sm_con_cmd_buf_ovfl",
+	"xcm_fi_desc_input_violate",
+	"xcm_qm_act_st_cnt_msg_prcs_under",
+	"xcm_qm_act_st_cnt_msg_prcs_ovfl",
+	"xcm_qm_act_st_cnt_ext_ld_under",
+	"xcm_qm_act_st_cnt_ext_ld_ovfl",
+	"xcm_qm_act_st_cnt_rbc_under",
+	"xcm_qm_act_st_cnt_rbc_ovfl",
+	"xcm_qm_act_st_cnt_drop_under",
+	"xcm_qm_act_st_cnt_illeg_pqnum",
+};
+#else
+#define xcm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 xcm_int0_bb_a0_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg xcm_int0_bb_a0 = {
+	0, 16, xcm_int0_bb_a0_attn_idx, 0x1000180, 0x100018c, 0x1000188,
+	0x1000184
+};
+
+static const u16 xcm_int1_bb_a0_attn_idx[25] = {
+	16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
+	34, 35, 36, 37, 38, 39, 40,
+};
+
+static struct attn_hw_reg xcm_int1_bb_a0 = {
+	1, 25, xcm_int1_bb_a0_attn_idx, 0x1000190, 0x100019c, 0x1000198,
+	0x1000194
+};
+
+static const u16 xcm_int2_bb_a0_attn_idx[8] = {
+	41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg xcm_int2_bb_a0 = {
+	2, 8, xcm_int2_bb_a0_attn_idx, 0x10001a0, 0x10001ac, 0x10001a8,
+	0x10001a4
+};
+
+static struct attn_hw_reg *xcm_int_bb_a0_regs[3] = {
+	&xcm_int0_bb_a0, &xcm_int1_bb_a0, &xcm_int2_bb_a0,
+};
+
+static const u16 xcm_int0_bb_b0_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg xcm_int0_bb_b0 = {
+	0, 16, xcm_int0_bb_b0_attn_idx, 0x1000180, 0x100018c, 0x1000188,
+	0x1000184
+};
+
+static const u16 xcm_int1_bb_b0_attn_idx[25] = {
+	16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
+	34, 35, 36, 37, 38, 39, 40,
+};
+
+static struct attn_hw_reg xcm_int1_bb_b0 = {
+	1, 25, xcm_int1_bb_b0_attn_idx, 0x1000190, 0x100019c, 0x1000198,
+	0x1000194
+};
+
+static const u16 xcm_int2_bb_b0_attn_idx[8] = {
+	41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg xcm_int2_bb_b0 = {
+	2, 8, xcm_int2_bb_b0_attn_idx, 0x10001a0, 0x10001ac, 0x10001a8,
+	0x10001a4
+};
+
+static struct attn_hw_reg *xcm_int_bb_b0_regs[3] = {
+	&xcm_int0_bb_b0, &xcm_int1_bb_b0, &xcm_int2_bb_b0,
+};
+
+static const u16 xcm_int0_k2_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg xcm_int0_k2 = {
+	0, 16, xcm_int0_k2_attn_idx, 0x1000180, 0x100018c, 0x1000188, 0x1000184
+};
+
+static const u16 xcm_int1_k2_attn_idx[25] = {
+	16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
+	34, 35, 36, 37, 38, 39, 40,
+};
+
+static struct attn_hw_reg xcm_int1_k2 = {
+	1, 25, xcm_int1_k2_attn_idx, 0x1000190, 0x100019c, 0x1000198, 0x1000194
+};
+
+static const u16 xcm_int2_k2_attn_idx[8] = {
+	41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg xcm_int2_k2 = {
+	2, 8, xcm_int2_k2_attn_idx, 0x10001a0, 0x10001ac, 0x10001a8, 0x10001a4
+};
+
+static struct attn_hw_reg *xcm_int_k2_regs[3] = {
+	&xcm_int0_k2, &xcm_int1_k2, &xcm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xcm_prty_attn_desc[59] = {
+	"xcm_mem036_i_ecc_rf_int",
+	"xcm_mem003_i_ecc_0_rf_int",
+	"xcm_mem003_i_ecc_1_rf_int",
+	"xcm_mem003_i_ecc_2_rf_int",
+	"xcm_mem003_i_ecc_3_rf_int",
+	"xcm_mem004_i_ecc_rf_int",
+	"xcm_mem033_i_ecc_0_rf_int",
+	"xcm_mem033_i_ecc_1_rf_int",
+	"xcm_mem034_i_ecc_rf_int",
+	"xcm_mem026_i_mem_prty",
+	"xcm_mem025_i_mem_prty",
+	"xcm_mem022_i_mem_prty",
+	"xcm_mem029_i_mem_prty",
+	"xcm_mem023_i_mem_prty",
+	"xcm_mem028_i_mem_prty",
+	"xcm_mem030_i_mem_prty",
+	"xcm_mem017_i_mem_prty",
+	"xcm_mem024_i_mem_prty",
+	"xcm_mem027_i_mem_prty",
+	"xcm_mem018_i_mem_prty",
+	"xcm_mem019_i_mem_prty",
+	"xcm_mem020_i_mem_prty",
+	"xcm_mem021_i_mem_prty",
+	"xcm_mem039_i_mem_prty",
+	"xcm_mem038_i_mem_prty",
+	"xcm_mem037_i_mem_prty",
+	"xcm_mem005_i_mem_prty",
+	"xcm_mem035_i_mem_prty",
+	"xcm_mem031_i_mem_prty",
+	"xcm_mem006_i_mem_prty",
+	"xcm_mem015_i_mem_prty",
+	"xcm_mem035_i_ecc_rf_int",
+	"xcm_mem032_i_ecc_0_rf_int",
+	"xcm_mem032_i_ecc_1_rf_int",
+	"xcm_mem033_i_ecc_rf_int",
+	"xcm_mem036_i_mem_prty",
+	"xcm_mem034_i_mem_prty",
+	"xcm_mem016_i_mem_prty",
+	"xcm_mem002_i_ecc_0_rf_int",
+	"xcm_mem002_i_ecc_1_rf_int",
+	"xcm_mem002_i_ecc_2_rf_int",
+	"xcm_mem002_i_ecc_3_rf_int",
+	"xcm_mem003_i_ecc_rf_int",
+	"xcm_mem031_i_ecc_0_rf_int",
+	"xcm_mem031_i_ecc_1_rf_int",
+	"xcm_mem032_i_ecc_rf_int",
+	"xcm_mem004_i_mem_prty",
+	"xcm_mem033_i_mem_prty",
+	"xcm_mem014_i_mem_prty",
+	"xcm_mem032_i_mem_prty",
+	"xcm_mem007_i_mem_prty",
+	"xcm_mem008_i_mem_prty",
+	"xcm_mem009_i_mem_prty",
+	"xcm_mem010_i_mem_prty",
+	"xcm_mem011_i_mem_prty",
+	"xcm_mem012_i_mem_prty",
+	"xcm_mem013_i_mem_prty",
+	"xcm_mem001_i_mem_prty",
+	"xcm_mem002_i_mem_prty",
+};
+#else
+#define xcm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 xcm_prty1_bb_a0_attn_idx[31] = {
+	8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 25, 26, 27, 30,
+	35, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg xcm_prty1_bb_a0 = {
+	0, 31, xcm_prty1_bb_a0_attn_idx, 0x1000200, 0x100020c, 0x1000208,
+	0x1000204
+};
+
+static const u16 xcm_prty2_bb_a0_attn_idx[11] = {
+	50, 51, 52, 53, 54, 55, 56, 57, 15, 29, 24,
+};
+
+static struct attn_hw_reg xcm_prty2_bb_a0 = {
+	1, 11, xcm_prty2_bb_a0_attn_idx, 0x1000210, 0x100021c, 0x1000218,
+	0x1000214
+};
+
+static struct attn_hw_reg *xcm_prty_bb_a0_regs[2] = {
+	&xcm_prty1_bb_a0, &xcm_prty2_bb_a0,
+};
+
+static const u16 xcm_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
+	24, 25, 26, 29, 30, 31, 32, 33, 34, 35, 36, 37,
+};
+
+static struct attn_hw_reg xcm_prty1_bb_b0 = {
+	0, 31, xcm_prty1_bb_b0_attn_idx, 0x1000200, 0x100020c, 0x1000208,
+	0x1000204
+};
+
+static const u16 xcm_prty2_bb_b0_attn_idx[11] = {
+	50, 51, 52, 53, 54, 55, 56, 48, 57, 58, 28,
+};
+
+static struct attn_hw_reg xcm_prty2_bb_b0 = {
+	1, 11, xcm_prty2_bb_b0_attn_idx, 0x1000210, 0x100021c, 0x1000218,
+	0x1000214
+};
+
+static struct attn_hw_reg *xcm_prty_bb_b0_regs[2] = {
+	&xcm_prty1_bb_b0, &xcm_prty2_bb_b0,
+};
+
+static const u16 xcm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg xcm_prty1_k2 = {
+	0, 31, xcm_prty1_k2_attn_idx, 0x1000200, 0x100020c, 0x1000208,
+	0x1000204
+};
+
+static const u16 xcm_prty2_k2_attn_idx[12] = {
+	37, 49, 50, 51, 52, 53, 54, 55, 56, 48, 57, 58,
+};
+
+static struct attn_hw_reg xcm_prty2_k2 = {
+	1, 12, xcm_prty2_k2_attn_idx, 0x1000210, 0x100021c, 0x1000218,
+	0x1000214
+};
+
+static struct attn_hw_reg *xcm_prty_k2_regs[2] = {
+	&xcm_prty1_k2, &xcm_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ycm_int_attn_desc[37] = {
+	"ycm_address_error",
+	"ycm_is_storm_ovfl_err",
+	"ycm_is_storm_under_err",
+	"ycm_is_msdm_ovfl_err",
+	"ycm_is_msdm_under_err",
+	"ycm_is_ysdm_ovfl_err",
+	"ycm_is_ysdm_under_err",
+	"ycm_is_xyld_ovfl_err",
+	"ycm_is_xyld_under_err",
+	"ycm_is_msem_ovfl_err",
+	"ycm_is_msem_under_err",
+	"ycm_is_usem_ovfl_err",
+	"ycm_is_usem_under_err",
+	"ycm_is_pbf_ovfl_err",
+	"ycm_is_pbf_under_err",
+	"ycm_is_qm_p_ovfl_err",
+	"ycm_is_qm_p_under_err",
+	"ycm_is_qm_s_ovfl_err",
+	"ycm_is_qm_s_under_err",
+	"ycm_is_grc_ovfl_err0",
+	"ycm_is_grc_under_err0",
+	"ycm_is_grc_ovfl_err1",
+	"ycm_is_grc_under_err1",
+	"ycm_is_grc_ovfl_err2",
+	"ycm_is_grc_under_err2",
+	"ycm_is_grc_ovfl_err3",
+	"ycm_is_grc_under_err3",
+	"ycm_in_prcs_tbl_ovfl",
+	"ycm_sm_con_data_buf_ovfl",
+	"ycm_sm_con_cmd_buf_ovfl",
+	"ycm_agg_task_data_buf_ovfl",
+	"ycm_agg_task_cmd_buf_ovfl",
+	"ycm_sm_task_data_buf_ovfl",
+	"ycm_sm_task_cmd_buf_ovfl",
+	"ycm_fi_desc_input_violate",
+	"ycm_se_desc_input_violate",
+	"ycm_qmreg_more4",
+};
+#else
+#define ycm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ycm_int0_bb_a0_attn_idx[13] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg ycm_int0_bb_a0 = {
+	0, 13, ycm_int0_bb_a0_attn_idx, 0x1080180, 0x108018c, 0x1080188,
+	0x1080184
+};
+
+static const u16 ycm_int1_bb_a0_attn_idx[23] = {
+	13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+	31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg ycm_int1_bb_a0 = {
+	1, 23, ycm_int1_bb_a0_attn_idx, 0x1080190, 0x108019c, 0x1080198,
+	0x1080194
+};
+
+static const u16 ycm_int2_bb_a0_attn_idx[1] = {
+	36,
+};
+
+static struct attn_hw_reg ycm_int2_bb_a0 = {
+	2, 1, ycm_int2_bb_a0_attn_idx, 0x10801a0, 0x10801ac, 0x10801a8,
+	0x10801a4
+};
+
+static struct attn_hw_reg *ycm_int_bb_a0_regs[3] = {
+	&ycm_int0_bb_a0, &ycm_int1_bb_a0, &ycm_int2_bb_a0,
+};
+
+static const u16 ycm_int0_bb_b0_attn_idx[13] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg ycm_int0_bb_b0 = {
+	0, 13, ycm_int0_bb_b0_attn_idx, 0x1080180, 0x108018c, 0x1080188,
+	0x1080184
+};
+
+static const u16 ycm_int1_bb_b0_attn_idx[23] = {
+	13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+	31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg ycm_int1_bb_b0 = {
+	1, 23, ycm_int1_bb_b0_attn_idx, 0x1080190, 0x108019c, 0x1080198,
+	0x1080194
+};
+
+static const u16 ycm_int2_bb_b0_attn_idx[1] = {
+	36,
+};
+
+static struct attn_hw_reg ycm_int2_bb_b0 = {
+	2, 1, ycm_int2_bb_b0_attn_idx, 0x10801a0, 0x10801ac, 0x10801a8,
+	0x10801a4
+};
+
+static struct attn_hw_reg *ycm_int_bb_b0_regs[3] = {
+	&ycm_int0_bb_b0, &ycm_int1_bb_b0, &ycm_int2_bb_b0,
+};
+
+static const u16 ycm_int0_k2_attn_idx[13] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg ycm_int0_k2 = {
+	0, 13, ycm_int0_k2_attn_idx, 0x1080180, 0x108018c, 0x1080188, 0x1080184
+};
+
+static const u16 ycm_int1_k2_attn_idx[23] = {
+	13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+	31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg ycm_int1_k2 = {
+	1, 23, ycm_int1_k2_attn_idx, 0x1080190, 0x108019c, 0x1080198, 0x1080194
+};
+
+static const u16 ycm_int2_k2_attn_idx[1] = {
+	36,
+};
+
+static struct attn_hw_reg ycm_int2_k2 = {
+	2, 1, ycm_int2_k2_attn_idx, 0x10801a0, 0x10801ac, 0x10801a8, 0x10801a4
+};
+
+static struct attn_hw_reg *ycm_int_k2_regs[3] = {
+	&ycm_int0_k2, &ycm_int1_k2, &ycm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ycm_prty_attn_desc[44] = {
+	"ycm_mem027_i_ecc_rf_int",
+	"ycm_mem003_i_ecc_0_rf_int",
+	"ycm_mem003_i_ecc_1_rf_int",
+	"ycm_mem022_i_ecc_0_rf_int",
+	"ycm_mem022_i_ecc_1_rf_int",
+	"ycm_mem023_i_ecc_rf_int",
+	"ycm_mem005_i_ecc_0_rf_int",
+	"ycm_mem005_i_ecc_1_rf_int",
+	"ycm_mem025_i_ecc_0_rf_int",
+	"ycm_mem025_i_ecc_1_rf_int",
+	"ycm_mem018_i_mem_prty",
+	"ycm_mem020_i_mem_prty",
+	"ycm_mem017_i_mem_prty",
+	"ycm_mem016_i_mem_prty",
+	"ycm_mem019_i_mem_prty",
+	"ycm_mem015_i_mem_prty",
+	"ycm_mem011_i_mem_prty",
+	"ycm_mem012_i_mem_prty",
+	"ycm_mem013_i_mem_prty",
+	"ycm_mem014_i_mem_prty",
+	"ycm_mem030_i_mem_prty",
+	"ycm_mem029_i_mem_prty",
+	"ycm_mem028_i_mem_prty",
+	"ycm_mem004_i_mem_prty",
+	"ycm_mem024_i_mem_prty",
+	"ycm_mem006_i_mem_prty",
+	"ycm_mem026_i_mem_prty",
+	"ycm_mem021_i_mem_prty",
+	"ycm_mem007_i_mem_prty_0",
+	"ycm_mem007_i_mem_prty_1",
+	"ycm_mem008_i_mem_prty",
+	"ycm_mem026_i_ecc_rf_int",
+	"ycm_mem021_i_ecc_0_rf_int",
+	"ycm_mem021_i_ecc_1_rf_int",
+	"ycm_mem022_i_ecc_rf_int",
+	"ycm_mem024_i_ecc_0_rf_int",
+	"ycm_mem024_i_ecc_1_rf_int",
+	"ycm_mem027_i_mem_prty",
+	"ycm_mem023_i_mem_prty",
+	"ycm_mem025_i_mem_prty",
+	"ycm_mem009_i_mem_prty",
+	"ycm_mem010_i_mem_prty",
+	"ycm_mem001_i_mem_prty",
+	"ycm_mem002_i_mem_prty",
+};
+#else
+#define ycm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ycm_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 25, 28,
+	29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
+};
+
+static struct attn_hw_reg ycm_prty1_bb_a0 = {
+	0, 31, ycm_prty1_bb_a0_attn_idx, 0x1080200, 0x108020c, 0x1080208,
+	0x1080204
+};
+
+static const u16 ycm_prty2_bb_a0_attn_idx[3] = {
+	41, 42, 43,
+};
+
+static struct attn_hw_reg ycm_prty2_bb_a0 = {
+	1, 3, ycm_prty2_bb_a0_attn_idx, 0x1080210, 0x108021c, 0x1080218,
+	0x1080214
+};
+
+static struct attn_hw_reg *ycm_prty_bb_a0_regs[2] = {
+	&ycm_prty1_bb_a0, &ycm_prty2_bb_a0,
+};
+
+static const u16 ycm_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 25, 28,
+	29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
+};
+
+static struct attn_hw_reg ycm_prty1_bb_b0 = {
+	0, 31, ycm_prty1_bb_b0_attn_idx, 0x1080200, 0x108020c, 0x1080208,
+	0x1080204
+};
+
+static const u16 ycm_prty2_bb_b0_attn_idx[3] = {
+	41, 42, 43,
+};
+
+static struct attn_hw_reg ycm_prty2_bb_b0 = {
+	1, 3, ycm_prty2_bb_b0_attn_idx, 0x1080210, 0x108021c, 0x1080218,
+	0x1080214
+};
+
+static struct attn_hw_reg *ycm_prty_bb_b0_regs[2] = {
+	&ycm_prty1_bb_b0, &ycm_prty2_bb_b0,
+};
+
+static const u16 ycm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg ycm_prty1_k2 = {
+	0, 31, ycm_prty1_k2_attn_idx, 0x1080200, 0x108020c, 0x1080208,
+	0x1080204
+};
+
+static const u16 ycm_prty2_k2_attn_idx[4] = {
+	40, 41, 42, 43,
+};
+
+static struct attn_hw_reg ycm_prty2_k2 = {
+	1, 4, ycm_prty2_k2_attn_idx, 0x1080210, 0x108021c, 0x1080218, 0x1080214
+};
+
+static struct attn_hw_reg *ycm_prty_k2_regs[2] = {
+	&ycm_prty1_k2, &ycm_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pcm_int_attn_desc[20] = {
+	"pcm_address_error",
+	"pcm_is_storm_ovfl_err",
+	"pcm_is_storm_under_err",
+	"pcm_is_psdm_ovfl_err",
+	"pcm_is_psdm_under_err",
+	"pcm_is_pbf_ovfl_err",
+	"pcm_is_pbf_under_err",
+	"pcm_is_grc_ovfl_err0",
+	"pcm_is_grc_under_err0",
+	"pcm_is_grc_ovfl_err1",
+	"pcm_is_grc_under_err1",
+	"pcm_is_grc_ovfl_err2",
+	"pcm_is_grc_under_err2",
+	"pcm_is_grc_ovfl_err3",
+	"pcm_is_grc_under_err3",
+	"pcm_in_prcs_tbl_ovfl",
+	"pcm_sm_con_data_buf_ovfl",
+	"pcm_sm_con_cmd_buf_ovfl",
+	"pcm_fi_desc_input_violate",
+	"pcm_qmreg_more4",
+};
+#else
+#define pcm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pcm_int0_bb_a0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pcm_int0_bb_a0 = {
+	0, 5, pcm_int0_bb_a0_attn_idx, 0x1100180, 0x110018c, 0x1100188,
+	0x1100184
+};
+
+static const u16 pcm_int1_bb_a0_attn_idx[14] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pcm_int1_bb_a0 = {
+	1, 14, pcm_int1_bb_a0_attn_idx, 0x1100190, 0x110019c, 0x1100198,
+	0x1100194
+};
+
+static const u16 pcm_int2_bb_a0_attn_idx[1] = {
+	19,
+};
+
+static struct attn_hw_reg pcm_int2_bb_a0 = {
+	2, 1, pcm_int2_bb_a0_attn_idx, 0x11001a0, 0x11001ac, 0x11001a8,
+	0x11001a4
+};
+
+static struct attn_hw_reg *pcm_int_bb_a0_regs[3] = {
+	&pcm_int0_bb_a0, &pcm_int1_bb_a0, &pcm_int2_bb_a0,
+};
+
+static const u16 pcm_int0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pcm_int0_bb_b0 = {
+	0, 5, pcm_int0_bb_b0_attn_idx, 0x1100180, 0x110018c, 0x1100188,
+	0x1100184
+};
+
+static const u16 pcm_int1_bb_b0_attn_idx[14] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pcm_int1_bb_b0 = {
+	1, 14, pcm_int1_bb_b0_attn_idx, 0x1100190, 0x110019c, 0x1100198,
+	0x1100194
+};
+
+static const u16 pcm_int2_bb_b0_attn_idx[1] = {
+	19,
+};
+
+static struct attn_hw_reg pcm_int2_bb_b0 = {
+	2, 1, pcm_int2_bb_b0_attn_idx, 0x11001a0, 0x11001ac, 0x11001a8,
+	0x11001a4
+};
+
+static struct attn_hw_reg *pcm_int_bb_b0_regs[3] = {
+	&pcm_int0_bb_b0, &pcm_int1_bb_b0, &pcm_int2_bb_b0,
+};
+
+static const u16 pcm_int0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg pcm_int0_k2 = {
+	0, 5, pcm_int0_k2_attn_idx, 0x1100180, 0x110018c, 0x1100188, 0x1100184
+};
+
+static const u16 pcm_int1_k2_attn_idx[14] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
+};
+
+static struct attn_hw_reg pcm_int1_k2 = {
+	1, 14, pcm_int1_k2_attn_idx, 0x1100190, 0x110019c, 0x1100198, 0x1100194
+};
+
+static const u16 pcm_int2_k2_attn_idx[1] = {
+	19,
+};
+
+static struct attn_hw_reg pcm_int2_k2 = {
+	2, 1, pcm_int2_k2_attn_idx, 0x11001a0, 0x11001ac, 0x11001a8, 0x11001a4
+};
+
+static struct attn_hw_reg *pcm_int_k2_regs[3] = {
+	&pcm_int0_k2, &pcm_int1_k2, &pcm_int2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pcm_prty_attn_desc[18] = {
+	"pcm_mem012_i_ecc_rf_int",
+	"pcm_mem010_i_ecc_0_rf_int",
+	"pcm_mem010_i_ecc_1_rf_int",
+	"pcm_mem008_i_mem_prty",
+	"pcm_mem007_i_mem_prty",
+	"pcm_mem006_i_mem_prty",
+	"pcm_mem002_i_mem_prty",
+	"pcm_mem003_i_mem_prty",
+	"pcm_mem004_i_mem_prty",
+	"pcm_mem005_i_mem_prty",
+	"pcm_mem011_i_mem_prty",
+	"pcm_mem001_i_mem_prty",
+	"pcm_mem011_i_ecc_rf_int",
+	"pcm_mem009_i_ecc_0_rf_int",
+	"pcm_mem009_i_ecc_1_rf_int",
+	"pcm_mem010_i_mem_prty",
+	"pcm_mem013_i_mem_prty",
+	"pcm_mem012_i_mem_prty",
+};
+#else
+#define pcm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pcm_prty1_bb_a0_attn_idx[14] = {
+	3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg pcm_prty1_bb_a0 = {
+	0, 14, pcm_prty1_bb_a0_attn_idx, 0x1100200, 0x110020c, 0x1100208,
+	0x1100204
+};
+
+static struct attn_hw_reg *pcm_prty_bb_a0_regs[1] = {
+	&pcm_prty1_bb_a0,
+};
+
+static const u16 pcm_prty1_bb_b0_attn_idx[11] = {
+	4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg pcm_prty1_bb_b0 = {
+	0, 11, pcm_prty1_bb_b0_attn_idx, 0x1100200, 0x110020c, 0x1100208,
+	0x1100204
+};
+
+static struct attn_hw_reg *pcm_prty_bb_b0_regs[1] = {
+	&pcm_prty1_bb_b0,
+};
+
+static const u16 pcm_prty1_k2_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg pcm_prty1_k2 = {
+	0, 12, pcm_prty1_k2_attn_idx, 0x1100200, 0x110020c, 0x1100208,
+	0x1100204
+};
+
+static struct attn_hw_reg *pcm_prty_k2_regs[1] = {
+	&pcm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *qm_int_attn_desc[22] = {
+	"qm_address_error",
+	"qm_ovf_err_tx",
+	"qm_ovf_err_other",
+	"qm_pf_usg_cnt_err",
+	"qm_vf_usg_cnt_err",
+	"qm_voq_crd_inc_err",
+	"qm_voq_crd_dec_err",
+	"qm_byte_crd_inc_err",
+	"qm_byte_crd_dec_err",
+	"qm_err_incdec_rlglblcrd",
+	"qm_err_incdec_rlpfcrd",
+	"qm_err_incdec_wfqpfcrd",
+	"qm_err_incdec_wfqvpcrd",
+	"qm_err_incdec_voqlinecrd",
+	"qm_err_incdec_voqbytecrd",
+	"qm_fifos_error",
+	"qm_qm_rl_dc_exp_pf_controler_pop_error",
+	"qm_qm_rl_dc_exp_pf_controler_push_error",
+	"qm_qm_rl_dc_rf_req_controler_pop_error",
+	"qm_qm_rl_dc_rf_req_controler_push_error",
+	"qm_qm_rl_dc_rf_res_controler_pop_error",
+	"qm_qm_rl_dc_rf_res_controler_push_error",
+};
+#else
+#define qm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 qm_int0_bb_a0_attn_idx[16] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg qm_int0_bb_a0 = {
+	0, 16, qm_int0_bb_a0_attn_idx, 0x2f0180, 0x2f018c, 0x2f0188, 0x2f0184
+};
+
+static struct attn_hw_reg *qm_int_bb_a0_regs[1] = {
+	&qm_int0_bb_a0,
+};
+
+static const u16 qm_int0_bb_b0_attn_idx[22] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21,
+};
+
+static struct attn_hw_reg qm_int0_bb_b0 = {
+	0, 22, qm_int0_bb_b0_attn_idx, 0x2f0180, 0x2f018c, 0x2f0188, 0x2f0184
+};
+
+static struct attn_hw_reg *qm_int_bb_b0_regs[1] = {
+	&qm_int0_bb_b0,
+};
+
+static const u16 qm_int0_k2_attn_idx[22] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21,
+};
+
+static struct attn_hw_reg qm_int0_k2 = {
+	0, 22, qm_int0_k2_attn_idx, 0x2f0180, 0x2f018c, 0x2f0188, 0x2f0184
+};
+
+static struct attn_hw_reg *qm_int_k2_regs[1] = {
+	&qm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *qm_prty_attn_desc[109] = {
+	"qm_xcm_wrc_fifo",
+	"qm_ucm_wrc_fifo",
+	"qm_tcm_wrc_fifo",
+	"qm_ccm_wrc_fifo",
+	"qm_bigramhigh",
+	"qm_bigramlow",
+	"qm_base_address",
+	"qm_wrbuff",
+	"qm_bigramhigh_ext_a",
+	"qm_bigramlow_ext_a",
+	"qm_base_address_ext_a",
+	"qm_mem006_i_ecc_0_rf_int",
+	"qm_mem006_i_ecc_1_rf_int",
+	"qm_mem005_i_ecc_0_rf_int",
+	"qm_mem005_i_ecc_1_rf_int",
+	"qm_mem012_i_ecc_rf_int",
+	"qm_mem037_i_mem_prty",
+	"qm_mem036_i_mem_prty",
+	"qm_mem039_i_mem_prty",
+	"qm_mem038_i_mem_prty",
+	"qm_mem040_i_mem_prty",
+	"qm_mem042_i_mem_prty",
+	"qm_mem041_i_mem_prty",
+	"qm_mem056_i_mem_prty",
+	"qm_mem055_i_mem_prty",
+	"qm_mem053_i_mem_prty",
+	"qm_mem054_i_mem_prty",
+	"qm_mem057_i_mem_prty",
+	"qm_mem058_i_mem_prty",
+	"qm_mem062_i_mem_prty",
+	"qm_mem061_i_mem_prty",
+	"qm_mem059_i_mem_prty",
+	"qm_mem060_i_mem_prty",
+	"qm_mem063_i_mem_prty",
+	"qm_mem064_i_mem_prty",
+	"qm_mem033_i_mem_prty",
+	"qm_mem032_i_mem_prty",
+	"qm_mem030_i_mem_prty",
+	"qm_mem031_i_mem_prty",
+	"qm_mem034_i_mem_prty",
+	"qm_mem035_i_mem_prty",
+	"qm_mem051_i_mem_prty",
+	"qm_mem042_i_ecc_0_rf_int",
+	"qm_mem042_i_ecc_1_rf_int",
+	"qm_mem041_i_ecc_0_rf_int",
+	"qm_mem041_i_ecc_1_rf_int",
+	"qm_mem048_i_ecc_rf_int",
+	"qm_mem009_i_mem_prty",
+	"qm_mem008_i_mem_prty",
+	"qm_mem011_i_mem_prty",
+	"qm_mem010_i_mem_prty",
+	"qm_mem012_i_mem_prty",
+	"qm_mem014_i_mem_prty",
+	"qm_mem013_i_mem_prty",
+	"qm_mem028_i_mem_prty",
+	"qm_mem027_i_mem_prty",
+	"qm_mem025_i_mem_prty",
+	"qm_mem026_i_mem_prty",
+	"qm_mem029_i_mem_prty",
+	"qm_mem005_i_mem_prty",
+	"qm_mem004_i_mem_prty",
+	"qm_mem002_i_mem_prty",
+	"qm_mem003_i_mem_prty",
+	"qm_mem006_i_mem_prty",
+	"qm_mem007_i_mem_prty",
+	"qm_mem023_i_mem_prty",
+	"qm_mem047_i_mem_prty",
+	"qm_mem049_i_mem_prty",
+	"qm_mem048_i_mem_prty",
+	"qm_mem052_i_mem_prty",
+	"qm_mem050_i_mem_prty",
+	"qm_mem045_i_mem_prty",
+	"qm_mem046_i_mem_prty",
+	"qm_mem043_i_mem_prty",
+	"qm_mem044_i_mem_prty",
+	"qm_mem017_i_mem_prty",
+	"qm_mem016_i_mem_prty",
+	"qm_mem021_i_mem_prty",
+	"qm_mem024_i_mem_prty",
+	"qm_mem019_i_mem_prty",
+	"qm_mem018_i_mem_prty",
+	"qm_mem015_i_mem_prty",
+	"qm_mem022_i_mem_prty",
+	"qm_mem020_i_mem_prty",
+	"qm_mem007_i_mem_prty_0",
+	"qm_mem007_i_mem_prty_1",
+	"qm_mem007_i_mem_prty_2",
+	"qm_mem001_i_mem_prty",
+	"qm_mem043_i_mem_prty_0",
+	"qm_mem043_i_mem_prty_1",
+	"qm_mem043_i_mem_prty_2",
+	"qm_mem007_i_mem_prty_3",
+	"qm_mem007_i_mem_prty_4",
+	"qm_mem007_i_mem_prty_5",
+	"qm_mem007_i_mem_prty_6",
+	"qm_mem007_i_mem_prty_7",
+	"qm_mem007_i_mem_prty_8",
+	"qm_mem007_i_mem_prty_9",
+	"qm_mem007_i_mem_prty_10",
+	"qm_mem007_i_mem_prty_11",
+	"qm_mem007_i_mem_prty_12",
+	"qm_mem007_i_mem_prty_13",
+	"qm_mem007_i_mem_prty_14",
+	"qm_mem007_i_mem_prty_15",
+	"qm_mem043_i_mem_prty_3",
+	"qm_mem043_i_mem_prty_4",
+	"qm_mem043_i_mem_prty_5",
+	"qm_mem043_i_mem_prty_6",
+	"qm_mem043_i_mem_prty_7",
+};
+#else
+#define qm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 qm_prty0_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg qm_prty0_bb_a0 = {
+	0, 11, qm_prty0_bb_a0_attn_idx, 0x2f0190, 0x2f019c, 0x2f0198, 0x2f0194
+};
+
+static const u16 qm_prty1_bb_a0_attn_idx[31] = {
+	17, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52,
+	53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65,
+};
+
+static struct attn_hw_reg qm_prty1_bb_a0 = {
+	1, 31, qm_prty1_bb_a0_attn_idx, 0x2f0200, 0x2f020c, 0x2f0208, 0x2f0204
+};
+
+static const u16 qm_prty2_bb_a0_attn_idx[31] = {
+	66, 67, 69, 70, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 87, 20, 18, 25,
+	27, 32, 24, 26, 41, 31, 29, 28, 30, 23, 88, 89, 90,
+};
+
+static struct attn_hw_reg qm_prty2_bb_a0 = {
+	2, 31, qm_prty2_bb_a0_attn_idx, 0x2f0210, 0x2f021c, 0x2f0218, 0x2f0214
+};
+
+static const u16 qm_prty3_bb_a0_attn_idx[11] = {
+	104, 105, 106, 107, 108, 33, 16, 34, 19, 72, 71,
+};
+
+static struct attn_hw_reg qm_prty3_bb_a0 = {
+	3, 11, qm_prty3_bb_a0_attn_idx, 0x2f0220, 0x2f022c, 0x2f0228, 0x2f0224
+};
+
+static struct attn_hw_reg *qm_prty_bb_a0_regs[4] = {
+	&qm_prty0_bb_a0, &qm_prty1_bb_a0, &qm_prty2_bb_a0, &qm_prty3_bb_a0,
+};
+
+static const u16 qm_prty0_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg qm_prty0_bb_b0 = {
+	0, 11, qm_prty0_bb_b0_attn_idx, 0x2f0190, 0x2f019c, 0x2f0198, 0x2f0194
+};
+
+static const u16 qm_prty1_bb_b0_attn_idx[31] = {
+	11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28,
+	29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg qm_prty1_bb_b0 = {
+	1, 31, qm_prty1_bb_b0_attn_idx, 0x2f0200, 0x2f020c, 0x2f0208, 0x2f0204
+};
+
+static const u16 qm_prty2_bb_b0_attn_idx[31] = {
+	66, 67, 68, 69, 70, 71, 72, 73, 74, 58, 60, 62, 49, 75, 76, 53, 77, 78,
+	79, 80, 81, 52, 65, 57, 82, 56, 83, 48, 84, 85, 86,
+};
+
+static struct attn_hw_reg qm_prty2_bb_b0 = {
+	2, 31, qm_prty2_bb_b0_attn_idx, 0x2f0210, 0x2f021c, 0x2f0218, 0x2f0214
+};
+
+static const u16 qm_prty3_bb_b0_attn_idx[11] = {
+	91, 92, 93, 94, 95, 55, 87, 54, 61, 50, 47,
+};
+
+static struct attn_hw_reg qm_prty3_bb_b0 = {
+	3, 11, qm_prty3_bb_b0_attn_idx, 0x2f0220, 0x2f022c, 0x2f0228, 0x2f0224
+};
+
+static struct attn_hw_reg *qm_prty_bb_b0_regs[4] = {
+	&qm_prty0_bb_b0, &qm_prty1_bb_b0, &qm_prty2_bb_b0, &qm_prty3_bb_b0,
+};
+
+static const u16 qm_prty0_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg qm_prty0_k2 = {
+	0, 11, qm_prty0_k2_attn_idx, 0x2f0190, 0x2f019c, 0x2f0198, 0x2f0194
+};
+
+static const u16 qm_prty1_k2_attn_idx[31] = {
+	11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28,
+	29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg qm_prty1_k2 = {
+	1, 31, qm_prty1_k2_attn_idx, 0x2f0200, 0x2f020c, 0x2f0208, 0x2f0204
+};
+
+static const u16 qm_prty2_k2_attn_idx[31] = {
+	66, 67, 68, 69, 70, 71, 72, 73, 74, 58, 60, 62, 49, 75, 76, 53, 77, 78,
+	79, 80, 81, 52, 65, 57, 82, 56, 83, 48, 84, 85, 86,
+};
+
+static struct attn_hw_reg qm_prty2_k2 = {
+	2, 31, qm_prty2_k2_attn_idx, 0x2f0210, 0x2f021c, 0x2f0218, 0x2f0214
+};
+
+static const u16 qm_prty3_k2_attn_idx[19] = {
+	91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 55, 87, 54, 61,
+	50, 47,
+};
+
+static struct attn_hw_reg qm_prty3_k2 = {
+	3, 19, qm_prty3_k2_attn_idx, 0x2f0220, 0x2f022c, 0x2f0228, 0x2f0224
+};
+
+static struct attn_hw_reg *qm_prty_k2_regs[4] = {
+	&qm_prty0_k2, &qm_prty1_k2, &qm_prty2_k2, &qm_prty3_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tm_int_attn_desc[43] = {
+	"tm_address_error",
+	"tm_pxp_read_data_fifo_ov",
+	"tm_pxp_read_data_fifo_un",
+	"tm_pxp_read_ctrl_fifo_ov",
+	"tm_pxp_read_ctrl_fifo_un",
+	"tm_cfc_load_command_fifo_ov",
+	"tm_cfc_load_command_fifo_un",
+	"tm_cfc_load_echo_fifo_ov",
+	"tm_cfc_load_echo_fifo_un",
+	"tm_client_out_fifo_ov",
+	"tm_client_out_fifo_un",
+	"tm_ac_command_fifo_ov",
+	"tm_ac_command_fifo_un",
+	"tm_client_in_pbf_fifo_ov",
+	"tm_client_in_pbf_fifo_un",
+	"tm_client_in_ucm_fifo_ov",
+	"tm_client_in_ucm_fifo_un",
+	"tm_client_in_tcm_fifo_ov",
+	"tm_client_in_tcm_fifo_un",
+	"tm_client_in_xcm_fifo_ov",
+	"tm_client_in_xcm_fifo_un",
+	"tm_expiration_cmd_fifo_ov",
+	"tm_expiration_cmd_fifo_un",
+	"tm_stop_all_lc_invalid",
+	"tm_command_lc_invalid_0",
+	"tm_command_lc_invalid_1",
+	"tm_init_command_lc_valid",
+	"tm_stop_all_exp_lc_valid",
+	"tm_command_cid_invalid_0",
+	"tm_reserved_command",
+	"tm_command_cid_invalid_1",
+	"tm_cload_res_loaderr_conn",
+	"tm_cload_res_loadcancel_conn",
+	"tm_cload_res_validerr_conn",
+	"tm_context_rd_last",
+	"tm_context_wr_last",
+	"tm_pxp_rd_data_eop_bvalid",
+	"tm_pend_conn_scan",
+	"tm_pend_task_scan",
+	"tm_pxp_rd_data_eop_error",
+	"tm_cload_res_loaderr_task",
+	"tm_cload_res_loadcancel_task",
+	"tm_cload_res_validerr_task",
+};
+#else
+#define tm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tm_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tm_int0_bb_a0 = {
+	0, 32, tm_int0_bb_a0_attn_idx, 0x2c0180, 0x2c018c, 0x2c0188, 0x2c0184
+};
+
+static const u16 tm_int1_bb_a0_attn_idx[11] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
+};
+
+static struct attn_hw_reg tm_int1_bb_a0 = {
+	1, 11, tm_int1_bb_a0_attn_idx, 0x2c0190, 0x2c019c, 0x2c0198, 0x2c0194
+};
+
+static struct attn_hw_reg *tm_int_bb_a0_regs[2] = {
+	&tm_int0_bb_a0, &tm_int1_bb_a0,
+};
+
+static const u16 tm_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tm_int0_bb_b0 = {
+	0, 32, tm_int0_bb_b0_attn_idx, 0x2c0180, 0x2c018c, 0x2c0188, 0x2c0184
+};
+
+static const u16 tm_int1_bb_b0_attn_idx[11] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
+};
+
+static struct attn_hw_reg tm_int1_bb_b0 = {
+	1, 11, tm_int1_bb_b0_attn_idx, 0x2c0190, 0x2c019c, 0x2c0198, 0x2c0194
+};
+
+static struct attn_hw_reg *tm_int_bb_b0_regs[2] = {
+	&tm_int0_bb_b0, &tm_int1_bb_b0,
+};
+
+static const u16 tm_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tm_int0_k2 = {
+	0, 32, tm_int0_k2_attn_idx, 0x2c0180, 0x2c018c, 0x2c0188, 0x2c0184
+};
+
+static const u16 tm_int1_k2_attn_idx[11] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
+};
+
+static struct attn_hw_reg tm_int1_k2 = {
+	1, 11, tm_int1_k2_attn_idx, 0x2c0190, 0x2c019c, 0x2c0198, 0x2c0194
+};
+
+static struct attn_hw_reg *tm_int_k2_regs[2] = {
+	&tm_int0_k2, &tm_int1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tm_prty_attn_desc[17] = {
+	"tm_mem012_i_ecc_0_rf_int",
+	"tm_mem012_i_ecc_1_rf_int",
+	"tm_mem003_i_ecc_rf_int",
+	"tm_mem016_i_mem_prty",
+	"tm_mem007_i_mem_prty",
+	"tm_mem010_i_mem_prty",
+	"tm_mem008_i_mem_prty",
+	"tm_mem009_i_mem_prty",
+	"tm_mem013_i_mem_prty",
+	"tm_mem015_i_mem_prty",
+	"tm_mem014_i_mem_prty",
+	"tm_mem004_i_mem_prty",
+	"tm_mem005_i_mem_prty",
+	"tm_mem006_i_mem_prty",
+	"tm_mem011_i_mem_prty",
+	"tm_mem001_i_mem_prty",
+	"tm_mem002_i_mem_prty",
+};
+#else
+#define tm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tm_prty1_bb_a0_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg tm_prty1_bb_a0 = {
+	0, 17, tm_prty1_bb_a0_attn_idx, 0x2c0200, 0x2c020c, 0x2c0208, 0x2c0204
+};
+
+static struct attn_hw_reg *tm_prty_bb_a0_regs[1] = {
+	&tm_prty1_bb_a0,
+};
+
+static const u16 tm_prty1_bb_b0_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg tm_prty1_bb_b0 = {
+	0, 17, tm_prty1_bb_b0_attn_idx, 0x2c0200, 0x2c020c, 0x2c0208, 0x2c0204
+};
+
+static struct attn_hw_reg *tm_prty_bb_b0_regs[1] = {
+	&tm_prty1_bb_b0,
+};
+
+static const u16 tm_prty1_k2_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg tm_prty1_k2 = {
+	0, 17, tm_prty1_k2_attn_idx, 0x2c0200, 0x2c020c, 0x2c0208, 0x2c0204
+};
+
+static struct attn_hw_reg *tm_prty_k2_regs[1] = {
+	&tm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *dorq_int_attn_desc[9] = {
+	"dorq_address_error",
+	"dorq_db_drop",
+	"dorq_dorq_fifo_ovfl_err",
+	"dorq_dorq_fifo_afull",
+	"dorq_cfc_byp_validation_err",
+	"dorq_cfc_ld_resp_err",
+	"dorq_xcm_done_cnt_err",
+	"dorq_cfc_ld_req_fifo_ovfl_err",
+	"dorq_cfc_ld_req_fifo_under_err",
+};
+#else
+#define dorq_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 dorq_int0_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg dorq_int0_bb_a0 = {
+	0, 9, dorq_int0_bb_a0_attn_idx, 0x100180, 0x10018c, 0x100188, 0x100184
+};
+
+static struct attn_hw_reg *dorq_int_bb_a0_regs[1] = {
+	&dorq_int0_bb_a0,
+};
+
+static const u16 dorq_int0_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg dorq_int0_bb_b0 = {
+	0, 9, dorq_int0_bb_b0_attn_idx, 0x100180, 0x10018c, 0x100188, 0x100184
+};
+
+static struct attn_hw_reg *dorq_int_bb_b0_regs[1] = {
+	&dorq_int0_bb_b0,
+};
+
+static const u16 dorq_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg dorq_int0_k2 = {
+	0, 9, dorq_int0_k2_attn_idx, 0x100180, 0x10018c, 0x100188, 0x100184
+};
+
+static struct attn_hw_reg *dorq_int_k2_regs[1] = {
+	&dorq_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *dorq_prty_attn_desc[7] = {
+	"dorq_datapath_registers",
+	"dorq_mem002_i_ecc_rf_int",
+	"dorq_mem001_i_mem_prty",
+	"dorq_mem003_i_mem_prty",
+	"dorq_mem004_i_mem_prty",
+	"dorq_mem005_i_mem_prty",
+	"dorq_mem006_i_mem_prty",
+};
+#else
+#define dorq_prty_attn_desc OSAL_NULL
+#endif
+
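+/* BB B0 and K2 expose an extra PRTY0 register covering the datapath
+ * registers parity (attention index 0), which BB A0 lacks; hence the
+ * different register counts below.
+ */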
+static const u16 dorq_prty1_bb_a0_attn_idx[6] = {
+	1, 2, 3, 4, 5, 6,
+};
+
+static struct attn_hw_reg dorq_prty1_bb_a0 = {
+	0, 6, dorq_prty1_bb_a0_attn_idx, 0x100200, 0x10020c, 0x100208, 0x100204
+};
+
+static struct attn_hw_reg *dorq_prty_bb_a0_regs[1] = {
+	&dorq_prty1_bb_a0,
+};
+
+static const u16 dorq_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dorq_prty0_bb_b0 = {
+	0, 1, dorq_prty0_bb_b0_attn_idx, 0x100190, 0x10019c, 0x100198, 0x100194
+};
+
+static const u16 dorq_prty1_bb_b0_attn_idx[6] = {
+	1, 2, 3, 4, 5, 6,
+};
+
+static struct attn_hw_reg dorq_prty1_bb_b0 = {
+	1, 6, dorq_prty1_bb_b0_attn_idx, 0x100200, 0x10020c, 0x100208, 0x100204
+};
+
+static struct attn_hw_reg *dorq_prty_bb_b0_regs[2] = {
+	&dorq_prty0_bb_b0, &dorq_prty1_bb_b0,
+};
+
+static const u16 dorq_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dorq_prty0_k2 = {
+	0, 1, dorq_prty0_k2_attn_idx, 0x100190, 0x10019c, 0x100198, 0x100194
+};
+
+static const u16 dorq_prty1_k2_attn_idx[6] = {
+	1, 2, 3, 4, 5, 6,
+};
+
+static struct attn_hw_reg dorq_prty1_k2 = {
+	1, 6, dorq_prty1_k2_attn_idx, 0x100200, 0x10020c, 0x100208, 0x100204
+};
+
+static struct attn_hw_reg *dorq_prty_k2_regs[2] = {
+	&dorq_prty0_k2, &dorq_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *brb_int_attn_desc[237] = {
+	"brb_address_error",
+	"brb_rc_pkt0_rls_error",
+	"brb_rc_pkt0_1st_error",
+	"brb_rc_pkt0_len_error",
+	"brb_rc_pkt0_middle_error",
+	"brb_rc_pkt0_protocol_error",
+	"brb_rc_pkt1_rls_error",
+	"brb_rc_pkt1_1st_error",
+	"brb_rc_pkt1_len_error",
+	"brb_rc_pkt1_middle_error",
+	"brb_rc_pkt1_protocol_error",
+	"brb_rc_pkt2_rls_error",
+	"brb_rc_pkt2_1st_error",
+	"brb_rc_pkt2_len_error",
+	"brb_rc_pkt2_middle_error",
+	"brb_rc_pkt2_protocol_error",
+	"brb_rc_pkt3_rls_error",
+	"brb_rc_pkt3_1st_error",
+	"brb_rc_pkt3_len_error",
+	"brb_rc_pkt3_middle_error",
+	"brb_rc_pkt3_protocol_error",
+	"brb_rc_sop_req_tc_port_error",
+	"brb_uncomplient_lossless_error",
+	"brb_wc0_protocol_error",
+	"brb_wc1_protocol_error",
+	"brb_wc2_protocol_error",
+	"brb_wc3_protocol_error",
+	"brb_ll_arb_prefetch_sop_error",
+	"brb_ll_blk_error",
+	"brb_packet_counter_error",
+	"brb_byte_counter_error",
+	"brb_mac0_fc_cnt_error",
+	"brb_mac1_fc_cnt_error",
+	"brb_ll_arb_calc_error",
+	"brb_unused_0",
+	"brb_wc0_inp_fifo_error",
+	"brb_wc0_sop_fifo_error",
+	"brb_unused_1",
+	"brb_wc0_eop_fifo_error",
+	"brb_wc0_queue_fifo_error",
+	"brb_wc0_free_point_fifo_error",
+	"brb_wc0_next_point_fifo_error",
+	"brb_wc0_strt_fifo_error",
+	"brb_wc0_second_dscr_fifo_error",
+	"brb_wc0_pkt_avail_fifo_error",
+	"brb_wc0_cos_cnt_fifo_error",
+	"brb_wc0_notify_fifo_error",
+	"brb_wc0_ll_req_fifo_error",
+	"brb_wc0_ll_pa_cnt_error",
+	"brb_wc0_bb_pa_cnt_error",
+	"brb_wc1_inp_fifo_error",
+	"brb_wc1_sop_fifo_error",
+	"brb_wc1_eop_fifo_error",
+	"brb_wc1_queue_fifo_error",
+	"brb_wc1_free_point_fifo_error",
+	"brb_wc1_next_point_fifo_error",
+	"brb_wc1_strt_fifo_error",
+	"brb_wc1_second_dscr_fifo_error",
+	"brb_wc1_pkt_avail_fifo_error",
+	"brb_wc1_cos_cnt_fifo_error",
+	"brb_wc1_notify_fifo_error",
+	"brb_wc1_ll_req_fifo_error",
+	"brb_wc1_ll_pa_cnt_error",
+	"brb_wc1_bb_pa_cnt_error",
+	"brb_wc2_inp_fifo_error",
+	"brb_wc2_sop_fifo_error",
+	"brb_wc2_eop_fifo_error",
+	"brb_wc2_queue_fifo_error",
+	"brb_wc2_free_point_fifo_error",
+	"brb_wc2_next_point_fifo_error",
+	"brb_wc2_strt_fifo_error",
+	"brb_wc2_second_dscr_fifo_error",
+	"brb_wc2_pkt_avail_fifo_error",
+	"brb_wc2_cos_cnt_fifo_error",
+	"brb_wc2_notify_fifo_error",
+	"brb_wc2_ll_req_fifo_error",
+	"brb_wc2_ll_pa_cnt_error",
+	"brb_wc2_bb_pa_cnt_error",
+	"brb_wc3_inp_fifo_error",
+	"brb_wc3_sop_fifo_error",
+	"brb_wc3_eop_fifo_error",
+	"brb_wc3_queue_fifo_error",
+	"brb_wc3_free_point_fifo_error",
+	"brb_wc3_next_point_fifo_error",
+	"brb_wc3_strt_fifo_error",
+	"brb_wc3_second_dscr_fifo_error",
+	"brb_wc3_pkt_avail_fifo_error",
+	"brb_wc3_cos_cnt_fifo_error",
+	"brb_wc3_notify_fifo_error",
+	"brb_wc3_ll_req_fifo_error",
+	"brb_wc3_ll_pa_cnt_error",
+	"brb_wc3_bb_pa_cnt_error",
+	"brb_rc_pkt0_side_fifo_error",
+	"brb_rc_pkt0_req_fifo_error",
+	"brb_rc_pkt0_blk_fifo_error",
+	"brb_rc_pkt0_rls_left_fifo_error",
+	"brb_rc_pkt0_strt_ptr_fifo_error",
+	"brb_rc_pkt0_second_ptr_fifo_error",
+	"brb_rc_pkt0_rsp_fifo_error",
+	"brb_rc_pkt0_dscr_fifo_error",
+	"brb_rc_pkt1_side_fifo_error",
+	"brb_rc_pkt1_req_fifo_error",
+	"brb_rc_pkt1_blk_fifo_error",
+	"brb_rc_pkt1_rls_left_fifo_error",
+	"brb_rc_pkt1_strt_ptr_fifo_error",
+	"brb_rc_pkt1_second_ptr_fifo_error",
+	"brb_rc_pkt1_rsp_fifo_error",
+	"brb_rc_pkt1_dscr_fifo_error",
+	"brb_rc_pkt2_side_fifo_error",
+	"brb_rc_pkt2_req_fifo_error",
+	"brb_rc_pkt2_blk_fifo_error",
+	"brb_rc_pkt2_rls_left_fifo_error",
+	"brb_rc_pkt2_strt_ptr_fifo_error",
+	"brb_rc_pkt2_second_ptr_fifo_error",
+	"brb_rc_pkt2_rsp_fifo_error",
+	"brb_rc_pkt2_dscr_fifo_error",
+	"brb_rc_pkt3_side_fifo_error",
+	"brb_rc_pkt3_req_fifo_error",
+	"brb_rc_pkt3_blk_fifo_error",
+	"brb_rc_pkt3_rls_left_fifo_error",
+	"brb_rc_pkt3_strt_ptr_fifo_error",
+	"brb_rc_pkt3_second_ptr_fifo_error",
+	"brb_rc_pkt3_rsp_fifo_error",
+	"brb_rc_pkt3_dscr_fifo_error",
+	"brb_rc_sop_strt_fifo_error",
+	"brb_rc_sop_req_fifo_error",
+	"brb_rc_sop_dscr_fifo_error",
+	"brb_rc_sop_queue_fifo_error",
+	"brb_rc0_eop_error",
+	"brb_rc1_eop_error",
+	"brb_ll_arb_rls_fifo_error",
+	"brb_ll_arb_prefetch_fifo_error",
+	"brb_rc_pkt0_rls_fifo_error",
+	"brb_rc_pkt1_rls_fifo_error",
+	"brb_rc_pkt2_rls_fifo_error",
+	"brb_rc_pkt3_rls_fifo_error",
+	"brb_rc_pkt4_rls_fifo_error",
+	"brb_rc_pkt4_rls_error",
+	"brb_rc_pkt4_1st_error",
+	"brb_rc_pkt4_len_error",
+	"brb_rc_pkt4_middle_error",
+	"brb_rc_pkt4_protocol_error",
+	"brb_rc_pkt4_side_fifo_error",
+	"brb_rc_pkt4_req_fifo_error",
+	"brb_rc_pkt4_blk_fifo_error",
+	"brb_rc_pkt4_rls_left_fifo_error",
+	"brb_rc_pkt4_strt_ptr_fifo_error",
+	"brb_rc_pkt4_second_ptr_fifo_error",
+	"brb_rc_pkt4_rsp_fifo_error",
+	"brb_rc_pkt4_dscr_fifo_error",
+	"brb_rc_pkt5_rls_error",
+	"brb_packet_available_sync_fifo_push_error",
+	"brb_wc4_protocol_error",
+	"brb_wc5_protocol_error",
+	"brb_wc6_protocol_error",
+	"brb_wc7_protocol_error",
+	"brb_wc4_inp_fifo_error",
+	"brb_wc4_sop_fifo_error",
+	"brb_wc4_queue_fifo_error",
+	"brb_wc4_free_point_fifo_error",
+	"brb_wc4_next_point_fifo_error",
+	"brb_wc4_strt_fifo_error",
+	"brb_wc4_second_dscr_fifo_error",
+	"brb_wc4_pkt_avail_fifo_error",
+	"brb_wc4_cos_cnt_fifo_error",
+	"brb_wc4_notify_fifo_error",
+	"brb_wc4_ll_req_fifo_error",
+	"brb_wc4_ll_pa_cnt_error",
+	"brb_wc4_bb_pa_cnt_error",
+	"brb_wc5_inp_fifo_error",
+	"brb_wc5_sop_fifo_error",
+	"brb_wc5_queue_fifo_error",
+	"brb_wc5_free_point_fifo_error",
+	"brb_wc5_next_point_fifo_error",
+	"brb_wc5_strt_fifo_error",
+	"brb_wc5_second_dscr_fifo_error",
+	"brb_wc5_pkt_avail_fifo_error",
+	"brb_wc5_cos_cnt_fifo_error",
+	"brb_wc5_notify_fifo_error",
+	"brb_wc5_ll_req_fifo_error",
+	"brb_wc5_ll_pa_cnt_error",
+	"brb_wc5_bb_pa_cnt_error",
+	"brb_wc6_inp_fifo_error",
+	"brb_wc6_sop_fifo_error",
+	"brb_wc6_queue_fifo_error",
+	"brb_wc6_free_point_fifo_error",
+	"brb_wc6_next_point_fifo_error",
+	"brb_wc6_strt_fifo_error",
+	"brb_wc6_second_dscr_fifo_error",
+	"brb_wc6_pkt_avail_fifo_error",
+	"brb_wc6_cos_cnt_fifo_error",
+	"brb_wc6_notify_fifo_error",
+	"brb_wc6_ll_req_fifo_error",
+	"brb_wc6_ll_pa_cnt_error",
+	"brb_wc6_bb_pa_cnt_error",
+	"brb_wc7_inp_fifo_error",
+	"brb_wc7_sop_fifo_error",
+	"brb_wc7_queue_fifo_error",
+	"brb_wc7_free_point_fifo_error",
+	"brb_wc7_next_point_fifo_error",
+	"brb_wc7_strt_fifo_error",
+	"brb_wc7_second_dscr_fifo_error",
+	"brb_wc7_pkt_avail_fifo_error",
+	"brb_wc7_cos_cnt_fifo_error",
+	"brb_wc7_notify_fifo_error",
+	"brb_wc7_ll_req_fifo_error",
+	"brb_wc7_ll_pa_cnt_error",
+	"brb_wc7_bb_pa_cnt_error",
+	"brb_wc9_queue_fifo_error",
+	"brb_rc_sop_inp_sync_fifo_push_error",
+	"brb_rc0_inp_sync_fifo_push_error",
+	"brb_rc1_inp_sync_fifo_push_error",
+	"brb_rc2_inp_sync_fifo_push_error",
+	"brb_rc3_inp_sync_fifo_push_error",
+	"brb_rc0_out_sync_fifo_push_error",
+	"brb_rc1_out_sync_fifo_push_error",
+	"brb_rc2_out_sync_fifo_push_error",
+	"brb_rc3_out_sync_fifo_push_error",
+	"brb_rc4_out_sync_fifo_push_error",
+	"brb_unused_2",
+	"brb_rc0_eop_inp_sync_fifo_push_error",
+	"brb_rc1_eop_inp_sync_fifo_push_error",
+	"brb_rc2_eop_inp_sync_fifo_push_error",
+	"brb_rc3_eop_inp_sync_fifo_push_error",
+	"brb_rc0_eop_out_sync_fifo_push_error",
+	"brb_rc1_eop_out_sync_fifo_push_error",
+	"brb_rc2_eop_out_sync_fifo_push_error",
+	"brb_rc3_eop_out_sync_fifo_push_error",
+	"brb_unused_3",
+	"brb_rc2_eop_error",
+	"brb_rc3_eop_error",
+	"brb_mac2_fc_cnt_error",
+	"brb_mac3_fc_cnt_error",
+	"brb_wc4_eop_fifo_error",
+	"brb_wc5_eop_fifo_error",
+	"brb_wc6_eop_fifo_error",
+	"brb_wc7_eop_fifo_error",
+};
+#else
+#define brb_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 brb_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg brb_int0_bb_a0 = {
+	0, 32, brb_int0_bb_a0_attn_idx, 0x3400c0, 0x3400cc, 0x3400c8, 0x3400c4
+};
+
+static const u16 brb_int1_bb_a0_attn_idx[30] = {
+	32, 33, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
+	52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+};
+
+static struct attn_hw_reg brb_int1_bb_a0 = {
+	1, 30, brb_int1_bb_a0_attn_idx, 0x3400d8, 0x3400e4, 0x3400e0, 0x3400dc
+};
+
+static const u16 brb_int2_bb_a0_attn_idx[28] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
+};
+
+static struct attn_hw_reg brb_int2_bb_a0 = {
+	2, 28, brb_int2_bb_a0_attn_idx, 0x3400f0, 0x3400fc, 0x3400f8, 0x3400f4
+};
+
+static const u16 brb_int3_bb_a0_attn_idx[31] = {
+	92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122,
+};
+
+static struct attn_hw_reg brb_int3_bb_a0 = {
+	3, 31, brb_int3_bb_a0_attn_idx, 0x340108, 0x340114, 0x340110, 0x34010c
+};
+
+static const u16 brb_int4_bb_a0_attn_idx[27] = {
+	123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
+};
+
+static struct attn_hw_reg brb_int4_bb_a0 = {
+	4, 27, brb_int4_bb_a0_attn_idx, 0x340120, 0x34012c, 0x340128, 0x340124
+};
+
+static const u16 brb_int5_bb_a0_attn_idx[1] = {
+	150,
+};
+
+static struct attn_hw_reg brb_int5_bb_a0 = {
+	5, 1, brb_int5_bb_a0_attn_idx, 0x340138, 0x340144, 0x340140, 0x34013c
+};
+
+static const u16 brb_int6_bb_a0_attn_idx[8] = {
+	151, 152, 153, 154, 155, 156, 157, 158,
+};
+
+static struct attn_hw_reg brb_int6_bb_a0 = {
+	6, 8, brb_int6_bb_a0_attn_idx, 0x340150, 0x34015c, 0x340158, 0x340154
+};
+
+static const u16 brb_int7_bb_a0_attn_idx[32] = {
+	159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172,
+	173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186,
+	187, 188, 189, 190,
+};
+
+static struct attn_hw_reg brb_int7_bb_a0 = {
+	7, 32, brb_int7_bb_a0_attn_idx, 0x340168, 0x340174, 0x340170, 0x34016c
+};
+
+static const u16 brb_int8_bb_a0_attn_idx[17] = {
+	191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204,
+	205, 206, 207,
+};
+
+static struct attn_hw_reg brb_int8_bb_a0 = {
+	8, 17, brb_int8_bb_a0_attn_idx, 0x340184, 0x340190, 0x34018c, 0x340188
+};
+
+static const u16 brb_int9_bb_a0_attn_idx[1] = {
+	208,
+};
+
+static struct attn_hw_reg brb_int9_bb_a0 = {
+	9, 1, brb_int9_bb_a0_attn_idx, 0x34019c, 0x3401a8, 0x3401a4, 0x3401a0
+};
+
+static const u16 brb_int10_bb_a0_attn_idx[14] = {
+	209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 220, 221, 224, 225,
+};
+
+static struct attn_hw_reg brb_int10_bb_a0 = {
+	10, 14, brb_int10_bb_a0_attn_idx, 0x3401b4, 0x3401c0, 0x3401bc,
+	0x3401b8
+};
+
+static const u16 brb_int11_bb_a0_attn_idx[8] = {
+	229, 230, 231, 232, 233, 234, 235, 236,
+};
+
+static struct attn_hw_reg brb_int11_bb_a0 = {
+	11, 8, brb_int11_bb_a0_attn_idx, 0x3401cc, 0x3401d8, 0x3401d4, 0x3401d0
+};
+
+static struct attn_hw_reg *brb_int_bb_a0_regs[12] = {
+	&brb_int0_bb_a0, &brb_int1_bb_a0, &brb_int2_bb_a0, &brb_int3_bb_a0,
+	&brb_int4_bb_a0, &brb_int5_bb_a0, &brb_int6_bb_a0, &brb_int7_bb_a0,
+	&brb_int8_bb_a0, &brb_int9_bb_a0,
+	&brb_int10_bb_a0, &brb_int11_bb_a0,
+};
+
+static const u16 brb_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg brb_int0_bb_b0 = {
+	0, 32, brb_int0_bb_b0_attn_idx, 0x3400c0, 0x3400cc, 0x3400c8, 0x3400c4
+};
+
+static const u16 brb_int1_bb_b0_attn_idx[30] = {
+	32, 33, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
+	52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+};
+
+static struct attn_hw_reg brb_int1_bb_b0 = {
+	1, 30, brb_int1_bb_b0_attn_idx, 0x3400d8, 0x3400e4, 0x3400e0, 0x3400dc
+};
+
+static const u16 brb_int2_bb_b0_attn_idx[28] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
+};
+
+static struct attn_hw_reg brb_int2_bb_b0 = {
+	2, 28, brb_int2_bb_b0_attn_idx, 0x3400f0, 0x3400fc, 0x3400f8, 0x3400f4
+};
+
+static const u16 brb_int3_bb_b0_attn_idx[31] = {
+	92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122,
+};
+
+static struct attn_hw_reg brb_int3_bb_b0 = {
+	3, 31, brb_int3_bb_b0_attn_idx, 0x340108, 0x340114, 0x340110, 0x34010c
+};
+
+static const u16 brb_int4_bb_b0_attn_idx[27] = {
+	123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
+};
+
+static struct attn_hw_reg brb_int4_bb_b0 = {
+	4, 27, brb_int4_bb_b0_attn_idx, 0x340120, 0x34012c, 0x340128, 0x340124
+};
+
+static const u16 brb_int5_bb_b0_attn_idx[1] = {
+	150,
+};
+
+static struct attn_hw_reg brb_int5_bb_b0 = {
+	5, 1, brb_int5_bb_b0_attn_idx, 0x340138, 0x340144, 0x340140, 0x34013c
+};
+
+static const u16 brb_int6_bb_b0_attn_idx[8] = {
+	151, 152, 153, 154, 155, 156, 157, 158,
+};
+
+static struct attn_hw_reg brb_int6_bb_b0 = {
+	6, 8, brb_int6_bb_b0_attn_idx, 0x340150, 0x34015c, 0x340158, 0x340154
+};
+
+static const u16 brb_int7_bb_b0_attn_idx[32] = {
+	159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172,
+	173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186,
+	187, 188, 189, 190,
+};
+
+static struct attn_hw_reg brb_int7_bb_b0 = {
+	7, 32, brb_int7_bb_b0_attn_idx, 0x340168, 0x340174, 0x340170, 0x34016c
+};
+
+static const u16 brb_int8_bb_b0_attn_idx[17] = {
+	191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204,
+	205, 206, 207,
+};
+
+static struct attn_hw_reg brb_int8_bb_b0 = {
+	8, 17, brb_int8_bb_b0_attn_idx, 0x340184, 0x340190, 0x34018c, 0x340188
+};
+
+static const u16 brb_int9_bb_b0_attn_idx[1] = {
+	208,
+};
+
+static struct attn_hw_reg brb_int9_bb_b0 = {
+	9, 1, brb_int9_bb_b0_attn_idx, 0x34019c, 0x3401a8, 0x3401a4, 0x3401a0
+};
+
+static const u16 brb_int10_bb_b0_attn_idx[14] = {
+	209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 220, 221, 224, 225,
+};
+
+static struct attn_hw_reg brb_int10_bb_b0 = {
+	10, 14, brb_int10_bb_b0_attn_idx, 0x3401b4, 0x3401c0, 0x3401bc,
+	0x3401b8
+};
+
+static const u16 brb_int11_bb_b0_attn_idx[8] = {
+	229, 230, 231, 232, 233, 234, 235, 236,
+};
+
+static struct attn_hw_reg brb_int11_bb_b0 = {
+	11, 8, brb_int11_bb_b0_attn_idx, 0x3401cc, 0x3401d8, 0x3401d4, 0x3401d0
+};
+
+static struct attn_hw_reg *brb_int_bb_b0_regs[12] = {
+	&brb_int0_bb_b0, &brb_int1_bb_b0, &brb_int2_bb_b0, &brb_int3_bb_b0,
+	&brb_int4_bb_b0, &brb_int5_bb_b0, &brb_int6_bb_b0, &brb_int7_bb_b0,
+	&brb_int8_bb_b0, &brb_int9_bb_b0,
+	&brb_int10_bb_b0, &brb_int11_bb_b0,
+};
+
+static const u16 brb_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg brb_int0_k2 = {
+	0, 32, brb_int0_k2_attn_idx, 0x3400c0, 0x3400cc, 0x3400c8, 0x3400c4
+};
+
+static const u16 brb_int1_k2_attn_idx[30] = {
+	32, 33, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
+	52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+};
+
+static struct attn_hw_reg brb_int1_k2 = {
+	1, 30, brb_int1_k2_attn_idx, 0x3400d8, 0x3400e4, 0x3400e0, 0x3400dc
+};
+
+static const u16 brb_int2_k2_attn_idx[28] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
+};
+
+static struct attn_hw_reg brb_int2_k2 = {
+	2, 28, brb_int2_k2_attn_idx, 0x3400f0, 0x3400fc, 0x3400f8, 0x3400f4
+};
+
+static const u16 brb_int3_k2_attn_idx[31] = {
+	92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
+	108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
+	122,
+};
+
+static struct attn_hw_reg brb_int3_k2 = {
+	3, 31, brb_int3_k2_attn_idx, 0x340108, 0x340114, 0x340110, 0x34010c
+};
+
+static const u16 brb_int4_k2_attn_idx[27] = {
+	123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
+};
+
+static struct attn_hw_reg brb_int4_k2 = {
+	4, 27, brb_int4_k2_attn_idx, 0x340120, 0x34012c, 0x340128, 0x340124
+};
+
+static const u16 brb_int5_k2_attn_idx[1] = {
+	150,
+};
+
+static struct attn_hw_reg brb_int5_k2 = {
+	5, 1, brb_int5_k2_attn_idx, 0x340138, 0x340144, 0x340140, 0x34013c
+};
+
+static const u16 brb_int6_k2_attn_idx[8] = {
+	151, 152, 153, 154, 155, 156, 157, 158,
+};
+
+static struct attn_hw_reg brb_int6_k2 = {
+	6, 8, brb_int6_k2_attn_idx, 0x340150, 0x34015c, 0x340158, 0x340154
+};
+
+static const u16 brb_int7_k2_attn_idx[32] = {
+	159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172,
+	173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186,
+	187, 188, 189, 190,
+};
+
+static struct attn_hw_reg brb_int7_k2 = {
+	7, 32, brb_int7_k2_attn_idx, 0x340168, 0x340174, 0x340170, 0x34016c
+};
+
+static const u16 brb_int8_k2_attn_idx[17] = {
+	191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204,
+	205, 206, 207,
+};
+
+static struct attn_hw_reg brb_int8_k2 = {
+	8, 17, brb_int8_k2_attn_idx, 0x340184, 0x340190, 0x34018c, 0x340188
+};
+
+static const u16 brb_int9_k2_attn_idx[1] = {
+	208,
+};
+
+static struct attn_hw_reg brb_int9_k2 = {
+	9, 1, brb_int9_k2_attn_idx, 0x34019c, 0x3401a8, 0x3401a4, 0x3401a0
+};
+
+static const u16 brb_int10_k2_attn_idx[18] = {
+	209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223,
+	224, 225, 226, 227,
+};
+
+static struct attn_hw_reg brb_int10_k2 = {
+	10, 18, brb_int10_k2_attn_idx, 0x3401b4, 0x3401c0, 0x3401bc, 0x3401b8
+};
+
+static const u16 brb_int11_k2_attn_idx[8] = {
+	229, 230, 231, 232, 233, 234, 235, 236,
+};
+
+static struct attn_hw_reg brb_int11_k2 = {
+	11, 8, brb_int11_k2_attn_idx, 0x3401cc, 0x3401d8, 0x3401d4, 0x3401d0
+};
+
+static struct attn_hw_reg *brb_int_k2_regs[12] = {
+	&brb_int0_k2, &brb_int1_k2, &brb_int2_k2, &brb_int3_k2, &brb_int4_k2,
+	&brb_int5_k2, &brb_int6_k2, &brb_int7_k2, &brb_int8_k2, &brb_int9_k2,
+	&brb_int10_k2, &brb_int11_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *brb_prty_attn_desc[75] = {
+	"brb_ll_bank0_mem_prty",
+	"brb_ll_bank1_mem_prty",
+	"brb_ll_bank2_mem_prty",
+	"brb_ll_bank3_mem_prty",
+	"brb_datapath_registers",
+	"brb_mem001_i_ecc_rf_int",
+	"brb_mem008_i_ecc_rf_int",
+	"brb_mem009_i_ecc_rf_int",
+	"brb_mem010_i_ecc_rf_int",
+	"brb_mem011_i_ecc_rf_int",
+	"brb_mem012_i_ecc_rf_int",
+	"brb_mem013_i_ecc_rf_int",
+	"brb_mem014_i_ecc_rf_int",
+	"brb_mem015_i_ecc_rf_int",
+	"brb_mem016_i_ecc_rf_int",
+	"brb_mem002_i_ecc_rf_int",
+	"brb_mem003_i_ecc_rf_int",
+	"brb_mem004_i_ecc_rf_int",
+	"brb_mem005_i_ecc_rf_int",
+	"brb_mem006_i_ecc_rf_int",
+	"brb_mem007_i_ecc_rf_int",
+	"brb_mem070_i_mem_prty",
+	"brb_mem069_i_mem_prty",
+	"brb_mem053_i_mem_prty",
+	"brb_mem054_i_mem_prty",
+	"brb_mem055_i_mem_prty",
+	"brb_mem056_i_mem_prty",
+	"brb_mem057_i_mem_prty",
+	"brb_mem058_i_mem_prty",
+	"brb_mem059_i_mem_prty",
+	"brb_mem060_i_mem_prty",
+	"brb_mem061_i_mem_prty",
+	"brb_mem062_i_mem_prty",
+	"brb_mem063_i_mem_prty",
+	"brb_mem064_i_mem_prty",
+	"brb_mem065_i_mem_prty",
+	"brb_mem045_i_mem_prty",
+	"brb_mem046_i_mem_prty",
+	"brb_mem047_i_mem_prty",
+	"brb_mem048_i_mem_prty",
+	"brb_mem049_i_mem_prty",
+	"brb_mem050_i_mem_prty",
+	"brb_mem051_i_mem_prty",
+	"brb_mem052_i_mem_prty",
+	"brb_mem041_i_mem_prty",
+	"brb_mem042_i_mem_prty",
+	"brb_mem043_i_mem_prty",
+	"brb_mem044_i_mem_prty",
+	"brb_mem040_i_mem_prty",
+	"brb_mem035_i_mem_prty",
+	"brb_mem066_i_mem_prty",
+	"brb_mem067_i_mem_prty",
+	"brb_mem068_i_mem_prty",
+	"brb_mem030_i_mem_prty",
+	"brb_mem031_i_mem_prty",
+	"brb_mem032_i_mem_prty",
+	"brb_mem033_i_mem_prty",
+	"brb_mem037_i_mem_prty",
+	"brb_mem038_i_mem_prty",
+	"brb_mem034_i_mem_prty",
+	"brb_mem036_i_mem_prty",
+	"brb_mem017_i_mem_prty",
+	"brb_mem018_i_mem_prty",
+	"brb_mem019_i_mem_prty",
+	"brb_mem020_i_mem_prty",
+	"brb_mem021_i_mem_prty",
+	"brb_mem022_i_mem_prty",
+	"brb_mem023_i_mem_prty",
+	"brb_mem024_i_mem_prty",
+	"brb_mem029_i_mem_prty",
+	"brb_mem026_i_mem_prty",
+	"brb_mem027_i_mem_prty",
+	"brb_mem028_i_mem_prty",
+	"brb_mem025_i_mem_prty",
+	"brb_mem039_i_mem_prty",
+};
+#else
+#define brb_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 brb_prty1_bb_a0_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 23, 24, 36,
+	37,
+	38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49,
+};
+
+static struct attn_hw_reg brb_prty1_bb_a0 = {
+	0, 31, brb_prty1_bb_a0_attn_idx, 0x340400, 0x34040c, 0x340408, 0x340404
+};
+
+static const u16 brb_prty2_bb_a0_attn_idx[19] = {
+	53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 69, 70, 71, 72, 73, 74,
+	48,
+};
+
+static struct attn_hw_reg brb_prty2_bb_a0 = {
+	1, 19, brb_prty2_bb_a0_attn_idx, 0x340410, 0x34041c, 0x340418, 0x340414
+};
+
+static struct attn_hw_reg *brb_prty_bb_a0_regs[2] = {
+	&brb_prty1_bb_a0, &brb_prty2_bb_a0,
+};
+
+static const u16 brb_prty0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg brb_prty0_bb_b0 = {
+	0, 5, brb_prty0_bb_b0_attn_idx, 0x3401dc, 0x3401e8, 0x3401e4, 0x3401e0
+};
+
+static const u16 brb_prty1_bb_b0_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 23, 24, 36,
+	37,
+	38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
+};
+
+static struct attn_hw_reg brb_prty1_bb_b0 = {
+	1, 31, brb_prty1_bb_b0_attn_idx, 0x340400, 0x34040c, 0x340408, 0x340404
+};
+
+static const u16 brb_prty2_bb_b0_attn_idx[14] = {
+	53, 54, 55, 56, 59, 61, 62, 63, 64, 69, 70, 71, 72, 73,
+};
+
+static struct attn_hw_reg brb_prty2_bb_b0 = {
+	2, 14, brb_prty2_bb_b0_attn_idx, 0x340410, 0x34041c, 0x340418, 0x340414
+};
+
+static struct attn_hw_reg *brb_prty_bb_b0_regs[3] = {
+	&brb_prty0_bb_b0, &brb_prty1_bb_b0, &brb_prty2_bb_b0,
+};
+
+static const u16 brb_prty0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg brb_prty0_k2 = {
+	0, 5, brb_prty0_k2_attn_idx, 0x3401dc, 0x3401e8, 0x3401e4, 0x3401e0
+};
+
+static const u16 brb_prty1_k2_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
+	24,
+	25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg brb_prty1_k2 = {
+	1, 31, brb_prty1_k2_attn_idx, 0x340400, 0x34040c, 0x340408, 0x340404
+};
+
+static const u16 brb_prty2_k2_attn_idx[30] = {
+	50, 51, 52, 36, 37, 38, 39, 40, 41, 42, 43, 47, 53, 54, 55, 56, 57, 58,
+	59, 49, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
+};
+
+static struct attn_hw_reg brb_prty2_k2 = {
+	2, 30, brb_prty2_k2_attn_idx, 0x340410, 0x34041c, 0x340418, 0x340414
+};
+
+static struct attn_hw_reg *brb_prty_k2_regs[3] = {
+	&brb_prty0_k2, &brb_prty1_k2, &brb_prty2_k2,
+};
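+
+/* A minimal decode sketch (illustrative only, assuming the field layout
+ * noted above and a build with ATTN_DESC defined, so the description
+ * arrays are not OSAL_NULL). For the K2 BRB parity tables:
+ *
+ *	int i, j;
+ *
+ *	for (i = 0; i < 3; i++) {
+ *		struct attn_hw_reg *reg = brb_prty_k2_regs[i];
+ *		u32 sts = ecore_rd(p_hwfn, p_ptt, reg->sts_clr_addr);
+ *
+ *		for (j = 0; j < reg->num_of_bits; j++)
+ *			if (sts & (1 << j))
+ *				DP_NOTICE(p_hwfn, false, "%s\n",
+ *					  brb_prty_attn_desc[reg->attn_idx[j]]);
+ *	}
+ *
+ * ecore_rd()/DP_NOTICE() names follow the ecore conventions used in this
+ * series; reading the STS_CLR address is assumed to latch-and-clear the
+ * pending bits.
+ */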
+
+#ifdef ATTN_DESC
+static const char *src_int_attn_desc[1] = {
+	"src_address_error",
+};
+#else
+#define src_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 src_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg src_int0_bb_a0 = {
+	0, 1, src_int0_bb_a0_attn_idx, 0x2381d8, 0x2381dc, 0x2381e0, 0x2381e4
+};
+
+static struct attn_hw_reg *src_int_bb_a0_regs[1] = {
+	&src_int0_bb_a0,
+};
+
+static const u16 src_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg src_int0_bb_b0 = {
+	0, 1, src_int0_bb_b0_attn_idx, 0x2381d8, 0x2381dc, 0x2381e0, 0x2381e4
+};
+
+static struct attn_hw_reg *src_int_bb_b0_regs[1] = {
+	&src_int0_bb_b0,
+};
+
+static const u16 src_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg src_int0_k2 = {
+	0, 1, src_int0_k2_attn_idx, 0x2381d8, 0x2381dc, 0x2381e0, 0x2381e4
+};
+
+static struct attn_hw_reg *src_int_k2_regs[1] = {
+	&src_int0_k2,
+};
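+
+/* The src tables above show the per-chip pattern used throughout this
+ * file: every block's interrupt/parity data comes in _bb_a0, _bb_b0 and
+ * _k2 flavors -- presumably the two BB silicon revisions and the K2
+ * (second-generation) ASIC -- so the attention code can pick the set
+ * matching the probed device. For src all three happen to be identical.
+ */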
+
+#ifdef ATTN_DESC
+static const char *prs_int_attn_desc[2] = {
+	"prs_address_error",
+	"prs_lcid_validation_err",
+};
+#else
+#define prs_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 prs_int0_bb_a0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg prs_int0_bb_a0 = {
+	0, 2, prs_int0_bb_a0_attn_idx, 0x1f0040, 0x1f004c, 0x1f0048, 0x1f0044
+};
+
+static struct attn_hw_reg *prs_int_bb_a0_regs[1] = {
+	&prs_int0_bb_a0,
+};
+
+static const u16 prs_int0_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg prs_int0_bb_b0 = {
+	0, 2, prs_int0_bb_b0_attn_idx, 0x1f0040, 0x1f004c, 0x1f0048, 0x1f0044
+};
+
+static struct attn_hw_reg *prs_int_bb_b0_regs[1] = {
+	&prs_int0_bb_b0,
+};
+
+static const u16 prs_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg prs_int0_k2 = {
+	0, 2, prs_int0_k2_attn_idx, 0x1f0040, 0x1f004c, 0x1f0048, 0x1f0044
+};
+
+static struct attn_hw_reg *prs_int_k2_regs[1] = {
+	&prs_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *prs_prty_attn_desc[75] = {
+	"prs_cam_parity",
+	"prs_gft_cam_parity",
+	"prs_mem011_i_ecc_rf_int",
+	"prs_mem012_i_ecc_rf_int",
+	"prs_mem016_i_ecc_rf_int",
+	"prs_mem017_i_ecc_rf_int",
+	"prs_mem021_i_ecc_rf_int",
+	"prs_mem022_i_ecc_rf_int",
+	"prs_mem026_i_ecc_rf_int",
+	"prs_mem027_i_ecc_rf_int",
+	"prs_mem064_i_mem_prty",
+	"prs_mem044_i_mem_prty",
+	"prs_mem043_i_mem_prty",
+	"prs_mem037_i_mem_prty",
+	"prs_mem033_i_mem_prty",
+	"prs_mem034_i_mem_prty",
+	"prs_mem035_i_mem_prty",
+	"prs_mem036_i_mem_prty",
+	"prs_mem029_i_mem_prty",
+	"prs_mem030_i_mem_prty",
+	"prs_mem031_i_mem_prty",
+	"prs_mem032_i_mem_prty",
+	"prs_mem007_i_mem_prty",
+	"prs_mem028_i_mem_prty",
+	"prs_mem039_i_mem_prty",
+	"prs_mem040_i_mem_prty",
+	"prs_mem058_i_mem_prty",
+	"prs_mem059_i_mem_prty",
+	"prs_mem041_i_mem_prty",
+	"prs_mem042_i_mem_prty",
+	"prs_mem060_i_mem_prty",
+	"prs_mem061_i_mem_prty",
+	"prs_mem009_i_mem_prty",
+	"prs_mem009_i_ecc_rf_int",
+	"prs_mem010_i_ecc_rf_int",
+	"prs_mem014_i_ecc_rf_int",
+	"prs_mem015_i_ecc_rf_int",
+	"prs_mem026_i_mem_prty",
+	"prs_mem025_i_mem_prty",
+	"prs_mem021_i_mem_prty",
+	"prs_mem019_i_mem_prty",
+	"prs_mem020_i_mem_prty",
+	"prs_mem017_i_mem_prty",
+	"prs_mem018_i_mem_prty",
+	"prs_mem005_i_mem_prty",
+	"prs_mem016_i_mem_prty",
+	"prs_mem023_i_mem_prty",
+	"prs_mem024_i_mem_prty",
+	"prs_mem008_i_mem_prty",
+	"prs_mem012_i_mem_prty",
+	"prs_mem013_i_mem_prty",
+	"prs_mem006_i_mem_prty",
+	"prs_mem011_i_mem_prty",
+	"prs_mem003_i_mem_prty",
+	"prs_mem004_i_mem_prty",
+	"prs_mem027_i_mem_prty",
+	"prs_mem010_i_mem_prty",
+	"prs_mem014_i_mem_prty",
+	"prs_mem015_i_mem_prty",
+	"prs_mem054_i_mem_prty",
+	"prs_mem055_i_mem_prty",
+	"prs_mem056_i_mem_prty",
+	"prs_mem057_i_mem_prty",
+	"prs_mem046_i_mem_prty",
+	"prs_mem047_i_mem_prty",
+	"prs_mem048_i_mem_prty",
+	"prs_mem049_i_mem_prty",
+	"prs_mem050_i_mem_prty",
+	"prs_mem051_i_mem_prty",
+	"prs_mem052_i_mem_prty",
+	"prs_mem053_i_mem_prty",
+	"prs_mem062_i_mem_prty",
+	"prs_mem045_i_mem_prty",
+	"prs_mem002_i_mem_prty",
+	"prs_mem001_i_mem_prty",
+};
+#else
+#define prs_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 prs_prty0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg prs_prty0_bb_a0 = {
+	0, 1, prs_prty0_bb_a0_attn_idx, 0x1f0050, 0x1f005c, 0x1f0058, 0x1f0054
+};
+
+static const u16 prs_prty1_bb_a0_attn_idx[31] = {
+	13, 14, 15, 16, 18, 21, 22, 23, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
+	43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
+};
+
+static struct attn_hw_reg prs_prty1_bb_a0 = {
+	1, 31, prs_prty1_bb_a0_attn_idx, 0x1f0204, 0x1f0210, 0x1f020c, 0x1f0208
+};
+
+static const u16 prs_prty2_bb_a0_attn_idx[5] = {
+	73, 74, 20, 17, 19,
+};
+
+static struct attn_hw_reg prs_prty2_bb_a0 = {
+	2, 5, prs_prty2_bb_a0_attn_idx, 0x1f0214, 0x1f0220, 0x1f021c, 0x1f0218
+};
+
+static struct attn_hw_reg *prs_prty_bb_a0_regs[3] = {
+	&prs_prty0_bb_a0, &prs_prty1_bb_a0, &prs_prty2_bb_a0,
+};
+
+static const u16 prs_prty0_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg prs_prty0_bb_b0 = {
+	0, 2, prs_prty0_bb_b0_attn_idx, 0x1f0050, 0x1f005c, 0x1f0058, 0x1f0054
+};
+
+static const u16 prs_prty1_bb_b0_attn_idx[31] = {
+	13, 14, 15, 16, 18, 19, 21, 22, 23, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+	42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
+};
+
+static struct attn_hw_reg prs_prty1_bb_b0 = {
+	1, 31, prs_prty1_bb_b0_attn_idx, 0x1f0204, 0x1f0210, 0x1f020c, 0x1f0208
+};
+
+static const u16 prs_prty2_bb_b0_attn_idx[5] = {
+	73, 74, 20, 17, 55,
+};
+
+static struct attn_hw_reg prs_prty2_bb_b0 = {
+	2, 5, prs_prty2_bb_b0_attn_idx, 0x1f0214, 0x1f0220, 0x1f021c, 0x1f0218
+};
+
+static struct attn_hw_reg *prs_prty_bb_b0_regs[3] = {
+	&prs_prty0_bb_b0, &prs_prty1_bb_b0, &prs_prty2_bb_b0,
+};
+
+static const u16 prs_prty0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg prs_prty0_k2 = {
+	0, 2, prs_prty0_k2_attn_idx, 0x1f0050, 0x1f005c, 0x1f0058, 0x1f0054
+};
+
+static const u16 prs_prty1_k2_attn_idx[31] = {
+	2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
+	22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
+};
+
+static struct attn_hw_reg prs_prty1_k2 = {
+	1, 31, prs_prty1_k2_attn_idx, 0x1f0204, 0x1f0210, 0x1f020c, 0x1f0208
+};
+
+static const u16 prs_prty2_k2_attn_idx[31] = {
+	56, 57, 58, 40, 41, 47, 38, 48, 50, 43, 46, 59, 60, 61, 62, 53, 54, 44,
+	51, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
+};
+
+static struct attn_hw_reg prs_prty2_k2 = {
+	2, 31, prs_prty2_k2_attn_idx, 0x1f0214, 0x1f0220, 0x1f021c, 0x1f0218
+};
+
+static struct attn_hw_reg *prs_prty_k2_regs[3] = {
+	&prs_prty0_k2, &prs_prty1_k2, &prs_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tsdm_int_attn_desc[28] = {
+	"tsdm_address_error",
+	"tsdm_inp_queue_error",
+	"tsdm_delay_fifo_error",
+	"tsdm_async_host_error",
+	"tsdm_prm_fifo_error",
+	"tsdm_ccfc_load_pend_error",
+	"tsdm_tcfc_load_pend_error",
+	"tsdm_dst_int_ram_wait_error",
+	"tsdm_dst_pas_buf_wait_error",
+	"tsdm_dst_pxp_immed_error",
+	"tsdm_dst_pxp_dst_pend_error",
+	"tsdm_dst_brb_src_pend_error",
+	"tsdm_dst_brb_src_addr_error",
+	"tsdm_rsp_brb_pend_error",
+	"tsdm_rsp_int_ram_pend_error",
+	"tsdm_rsp_brb_rd_data_error",
+	"tsdm_rsp_int_ram_rd_data_error",
+	"tsdm_rsp_pxp_rd_data_error",
+	"tsdm_cm_delay_error",
+	"tsdm_sh_delay_error",
+	"tsdm_cmpl_pend_error",
+	"tsdm_cprm_pend_error",
+	"tsdm_timer_addr_error",
+	"tsdm_timer_pend_error",
+	"tsdm_dorq_dpm_error",
+	"tsdm_dst_pxp_done_error",
+	"tsdm_xcm_rmt_buffer_error",
+	"tsdm_ycm_rmt_buffer_error",
+};
+#else
+#define tsdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tsdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg tsdm_int0_bb_a0 = {
+	0, 26, tsdm_int0_bb_a0_attn_idx, 0xfb0040, 0xfb004c, 0xfb0048, 0xfb0044
+};
+
+static struct attn_hw_reg *tsdm_int_bb_a0_regs[1] = {
+	&tsdm_int0_bb_a0,
+};
+
+static const u16 tsdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg tsdm_int0_bb_b0 = {
+	0, 26, tsdm_int0_bb_b0_attn_idx, 0xfb0040, 0xfb004c, 0xfb0048, 0xfb0044
+};
+
+static struct attn_hw_reg *tsdm_int_bb_b0_regs[1] = {
+	&tsdm_int0_bb_b0,
+};
+
+static const u16 tsdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg tsdm_int0_k2 = {
+	0, 28, tsdm_int0_k2_attn_idx, 0xfb0040, 0xfb004c, 0xfb0048, 0xfb0044
+};
+
+static struct attn_hw_reg *tsdm_int_k2_regs[1] = {
+	&tsdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tsdm_prty_attn_desc[10] = {
+	"tsdm_mem009_i_mem_prty",
+	"tsdm_mem008_i_mem_prty",
+	"tsdm_mem007_i_mem_prty",
+	"tsdm_mem006_i_mem_prty",
+	"tsdm_mem005_i_mem_prty",
+	"tsdm_mem002_i_mem_prty",
+	"tsdm_mem010_i_mem_prty",
+	"tsdm_mem001_i_mem_prty",
+	"tsdm_mem003_i_mem_prty",
+	"tsdm_mem004_i_mem_prty",
+};
+#else
+#define tsdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tsdm_prty1_bb_a0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg tsdm_prty1_bb_a0 = {
+	0, 10, tsdm_prty1_bb_a0_attn_idx, 0xfb0200, 0xfb020c, 0xfb0208,
+	0xfb0204
+};
+
+static struct attn_hw_reg *tsdm_prty_bb_a0_regs[1] = {
+	&tsdm_prty1_bb_a0,
+};
+
+static const u16 tsdm_prty1_bb_b0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg tsdm_prty1_bb_b0 = {
+	0, 10, tsdm_prty1_bb_b0_attn_idx, 0xfb0200, 0xfb020c, 0xfb0208,
+	0xfb0204
+};
+
+static struct attn_hw_reg *tsdm_prty_bb_b0_regs[1] = {
+	&tsdm_prty1_bb_b0,
+};
+
+static const u16 tsdm_prty1_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg tsdm_prty1_k2 = {
+	0, 10, tsdm_prty1_k2_attn_idx, 0xfb0200, 0xfb020c, 0xfb0208, 0xfb0204
+};
+
+static struct attn_hw_reg *tsdm_prty_k2_regs[1] = {
+	&tsdm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *msdm_int_attn_desc[28] = {
+	"msdm_address_error",
+	"msdm_inp_queue_error",
+	"msdm_delay_fifo_error",
+	"msdm_async_host_error",
+	"msdm_prm_fifo_error",
+	"msdm_ccfc_load_pend_error",
+	"msdm_tcfc_load_pend_error",
+	"msdm_dst_int_ram_wait_error",
+	"msdm_dst_pas_buf_wait_error",
+	"msdm_dst_pxp_immed_error",
+	"msdm_dst_pxp_dst_pend_error",
+	"msdm_dst_brb_src_pend_error",
+	"msdm_dst_brb_src_addr_error",
+	"msdm_rsp_brb_pend_error",
+	"msdm_rsp_int_ram_pend_error",
+	"msdm_rsp_brb_rd_data_error",
+	"msdm_rsp_int_ram_rd_data_error",
+	"msdm_rsp_pxp_rd_data_error",
+	"msdm_cm_delay_error",
+	"msdm_sh_delay_error",
+	"msdm_cmpl_pend_error",
+	"msdm_cprm_pend_error",
+	"msdm_timer_addr_error",
+	"msdm_timer_pend_error",
+	"msdm_dorq_dpm_error",
+	"msdm_dst_pxp_done_error",
+	"msdm_xcm_rmt_buffer_error",
+	"msdm_ycm_rmt_buffer_error",
+};
+#else
+#define msdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 msdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg msdm_int0_bb_a0 = {
+	0, 26, msdm_int0_bb_a0_attn_idx, 0xfc0040, 0xfc004c, 0xfc0048, 0xfc0044
+};
+
+static struct attn_hw_reg *msdm_int_bb_a0_regs[1] = {
+	&msdm_int0_bb_a0,
+};
+
+static const u16 msdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg msdm_int0_bb_b0 = {
+	0, 26, msdm_int0_bb_b0_attn_idx, 0xfc0040, 0xfc004c, 0xfc0048, 0xfc0044
+};
+
+static struct attn_hw_reg *msdm_int_bb_b0_regs[1] = {
+	&msdm_int0_bb_b0,
+};
+
+static const u16 msdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg msdm_int0_k2 = {
+	0, 28, msdm_int0_k2_attn_idx, 0xfc0040, 0xfc004c, 0xfc0048, 0xfc0044
+};
+
+static struct attn_hw_reg *msdm_int_k2_regs[1] = {
+	&msdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *msdm_prty_attn_desc[11] = {
+	"msdm_mem009_i_mem_prty",
+	"msdm_mem008_i_mem_prty",
+	"msdm_mem007_i_mem_prty",
+	"msdm_mem006_i_mem_prty",
+	"msdm_mem005_i_mem_prty",
+	"msdm_mem002_i_mem_prty",
+	"msdm_mem011_i_mem_prty",
+	"msdm_mem001_i_mem_prty",
+	"msdm_mem003_i_mem_prty",
+	"msdm_mem004_i_mem_prty",
+	"msdm_mem010_i_mem_prty",
+};
+#else
+#define msdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 msdm_prty1_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg msdm_prty1_bb_a0 = {
+	0, 11, msdm_prty1_bb_a0_attn_idx, 0xfc0200, 0xfc020c, 0xfc0208,
+	0xfc0204
+};
+
+static struct attn_hw_reg *msdm_prty_bb_a0_regs[1] = {
+	&msdm_prty1_bb_a0,
+};
+
+static const u16 msdm_prty1_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg msdm_prty1_bb_b0 = {
+	0, 11, msdm_prty1_bb_b0_attn_idx, 0xfc0200, 0xfc020c, 0xfc0208,
+	0xfc0204
+};
+
+static struct attn_hw_reg *msdm_prty_bb_b0_regs[1] = {
+	&msdm_prty1_bb_b0,
+};
+
+static const u16 msdm_prty1_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg msdm_prty1_k2 = {
+	0, 11, msdm_prty1_k2_attn_idx, 0xfc0200, 0xfc020c, 0xfc0208, 0xfc0204
+};
+
+static struct attn_hw_reg *msdm_prty_k2_regs[1] = {
+	&msdm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *usdm_int_attn_desc[28] = {
+	"usdm_address_error",
+	"usdm_inp_queue_error",
+	"usdm_delay_fifo_error",
+	"usdm_async_host_error",
+	"usdm_prm_fifo_error",
+	"usdm_ccfc_load_pend_error",
+	"usdm_tcfc_load_pend_error",
+	"usdm_dst_int_ram_wait_error",
+	"usdm_dst_pas_buf_wait_error",
+	"usdm_dst_pxp_immed_error",
+	"usdm_dst_pxp_dst_pend_error",
+	"usdm_dst_brb_src_pend_error",
+	"usdm_dst_brb_src_addr_error",
+	"usdm_rsp_brb_pend_error",
+	"usdm_rsp_int_ram_pend_error",
+	"usdm_rsp_brb_rd_data_error",
+	"usdm_rsp_int_ram_rd_data_error",
+	"usdm_rsp_pxp_rd_data_error",
+	"usdm_cm_delay_error",
+	"usdm_sh_delay_error",
+	"usdm_cmpl_pend_error",
+	"usdm_cprm_pend_error",
+	"usdm_timer_addr_error",
+	"usdm_timer_pend_error",
+	"usdm_dorq_dpm_error",
+	"usdm_dst_pxp_done_error",
+	"usdm_xcm_rmt_buffer_error",
+	"usdm_ycm_rmt_buffer_error",
+};
+#else
+#define usdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 usdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg usdm_int0_bb_a0 = {
+	0, 26, usdm_int0_bb_a0_attn_idx, 0xfd0040, 0xfd004c, 0xfd0048, 0xfd0044
+};
+
+static struct attn_hw_reg *usdm_int_bb_a0_regs[1] = {
+	&usdm_int0_bb_a0,
+};
+
+static const u16 usdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg usdm_int0_bb_b0 = {
+	0, 26, usdm_int0_bb_b0_attn_idx, 0xfd0040, 0xfd004c, 0xfd0048, 0xfd0044
+};
+
+static struct attn_hw_reg *usdm_int_bb_b0_regs[1] = {
+	&usdm_int0_bb_b0,
+};
+
+static const u16 usdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg usdm_int0_k2 = {
+	0, 28, usdm_int0_k2_attn_idx, 0xfd0040, 0xfd004c, 0xfd0048, 0xfd0044
+};
+
+static struct attn_hw_reg *usdm_int_k2_regs[1] = {
+	&usdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *usdm_prty_attn_desc[10] = {
+	"usdm_mem008_i_mem_prty",
+	"usdm_mem007_i_mem_prty",
+	"usdm_mem006_i_mem_prty",
+	"usdm_mem005_i_mem_prty",
+	"usdm_mem002_i_mem_prty",
+	"usdm_mem010_i_mem_prty",
+	"usdm_mem001_i_mem_prty",
+	"usdm_mem003_i_mem_prty",
+	"usdm_mem004_i_mem_prty",
+	"usdm_mem009_i_mem_prty",
+};
+#else
+#define usdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 usdm_prty1_bb_a0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg usdm_prty1_bb_a0 = {
+	0, 10, usdm_prty1_bb_a0_attn_idx, 0xfd0200, 0xfd020c, 0xfd0208,
+	0xfd0204
+};
+
+static struct attn_hw_reg *usdm_prty_bb_a0_regs[1] = {
+	&usdm_prty1_bb_a0,
+};
+
+static const u16 usdm_prty1_bb_b0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg usdm_prty1_bb_b0 = {
+	0, 10, usdm_prty1_bb_b0_attn_idx, 0xfd0200, 0xfd020c, 0xfd0208,
+	0xfd0204
+};
+
+static struct attn_hw_reg *usdm_prty_bb_b0_regs[1] = {
+	&usdm_prty1_bb_b0,
+};
+
+static const u16 usdm_prty1_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg usdm_prty1_k2 = {
+	0, 10, usdm_prty1_k2_attn_idx, 0xfd0200, 0xfd020c, 0xfd0208, 0xfd0204
+};
+
+static struct attn_hw_reg *usdm_prty_k2_regs[1] = {
+	&usdm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xsdm_int_attn_desc[28] = {
+	"xsdm_address_error",
+	"xsdm_inp_queue_error",
+	"xsdm_delay_fifo_error",
+	"xsdm_async_host_error",
+	"xsdm_prm_fifo_error",
+	"xsdm_ccfc_load_pend_error",
+	"xsdm_tcfc_load_pend_error",
+	"xsdm_dst_int_ram_wait_error",
+	"xsdm_dst_pas_buf_wait_error",
+	"xsdm_dst_pxp_immed_error",
+	"xsdm_dst_pxp_dst_pend_error",
+	"xsdm_dst_brb_src_pend_error",
+	"xsdm_dst_brb_src_addr_error",
+	"xsdm_rsp_brb_pend_error",
+	"xsdm_rsp_int_ram_pend_error",
+	"xsdm_rsp_brb_rd_data_error",
+	"xsdm_rsp_int_ram_rd_data_error",
+	"xsdm_rsp_pxp_rd_data_error",
+	"xsdm_cm_delay_error",
+	"xsdm_sh_delay_error",
+	"xsdm_cmpl_pend_error",
+	"xsdm_cprm_pend_error",
+	"xsdm_timer_addr_error",
+	"xsdm_timer_pend_error",
+	"xsdm_dorq_dpm_error",
+	"xsdm_dst_pxp_done_error",
+	"xsdm_xcm_rmt_buffer_error",
+	"xsdm_ycm_rmt_buffer_error",
+};
+#else
+#define xsdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 xsdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg xsdm_int0_bb_a0 = {
+	0, 26, xsdm_int0_bb_a0_attn_idx, 0xf80040, 0xf8004c, 0xf80048, 0xf80044
+};
+
+static struct attn_hw_reg *xsdm_int_bb_a0_regs[1] = {
+	&xsdm_int0_bb_a0,
+};
+
+static const u16 xsdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg xsdm_int0_bb_b0 = {
+	0, 26, xsdm_int0_bb_b0_attn_idx, 0xf80040, 0xf8004c, 0xf80048, 0xf80044
+};
+
+static struct attn_hw_reg *xsdm_int_bb_b0_regs[1] = {
+	&xsdm_int0_bb_b0,
+};
+
+static const u16 xsdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg xsdm_int0_k2 = {
+	0, 28, xsdm_int0_k2_attn_idx, 0xf80040, 0xf8004c, 0xf80048, 0xf80044
+};
+
+static struct attn_hw_reg *xsdm_int_k2_regs[1] = {
+	&xsdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xsdm_prty_attn_desc[10] = {
+	"xsdm_mem009_i_mem_prty",
+	"xsdm_mem008_i_mem_prty",
+	"xsdm_mem007_i_mem_prty",
+	"xsdm_mem006_i_mem_prty",
+	"xsdm_mem003_i_mem_prty",
+	"xsdm_mem010_i_mem_prty",
+	"xsdm_mem002_i_mem_prty",
+	"xsdm_mem004_i_mem_prty",
+	"xsdm_mem005_i_mem_prty",
+	"xsdm_mem001_i_mem_prty",
+};
+#else
+#define xsdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 xsdm_prty1_bb_a0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsdm_prty1_bb_a0 = {
+	0, 10, xsdm_prty1_bb_a0_attn_idx, 0xf80200, 0xf8020c, 0xf80208,
+	0xf80204
+};
+
+static struct attn_hw_reg *xsdm_prty_bb_a0_regs[1] = {
+	&xsdm_prty1_bb_a0,
+};
+
+static const u16 xsdm_prty1_bb_b0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsdm_prty1_bb_b0 = {
+	0, 10, xsdm_prty1_bb_b0_attn_idx, 0xf80200, 0xf8020c, 0xf80208,
+	0xf80204
+};
+
+static struct attn_hw_reg *xsdm_prty_bb_b0_regs[1] = {
+	&xsdm_prty1_bb_b0,
+};
+
+static const u16 xsdm_prty1_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsdm_prty1_k2 = {
+	0, 10, xsdm_prty1_k2_attn_idx, 0xf80200, 0xf8020c, 0xf80208, 0xf80204
+};
+
+static struct attn_hw_reg *xsdm_prty_k2_regs[1] = {
+	&xsdm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ysdm_int_attn_desc[28] = {
+	"ysdm_address_error",
+	"ysdm_inp_queue_error",
+	"ysdm_delay_fifo_error",
+	"ysdm_async_host_error",
+	"ysdm_prm_fifo_error",
+	"ysdm_ccfc_load_pend_error",
+	"ysdm_tcfc_load_pend_error",
+	"ysdm_dst_int_ram_wait_error",
+	"ysdm_dst_pas_buf_wait_error",
+	"ysdm_dst_pxp_immed_error",
+	"ysdm_dst_pxp_dst_pend_error",
+	"ysdm_dst_brb_src_pend_error",
+	"ysdm_dst_brb_src_addr_error",
+	"ysdm_rsp_brb_pend_error",
+	"ysdm_rsp_int_ram_pend_error",
+	"ysdm_rsp_brb_rd_data_error",
+	"ysdm_rsp_int_ram_rd_data_error",
+	"ysdm_rsp_pxp_rd_data_error",
+	"ysdm_cm_delay_error",
+	"ysdm_sh_delay_error",
+	"ysdm_cmpl_pend_error",
+	"ysdm_cprm_pend_error",
+	"ysdm_timer_addr_error",
+	"ysdm_timer_pend_error",
+	"ysdm_dorq_dpm_error",
+	"ysdm_dst_pxp_done_error",
+	"ysdm_xcm_rmt_buffer_error",
+	"ysdm_ycm_rmt_buffer_error",
+};
+#else
+#define ysdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ysdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg ysdm_int0_bb_a0 = {
+	0, 26, ysdm_int0_bb_a0_attn_idx, 0xf90040, 0xf9004c, 0xf90048, 0xf90044
+};
+
+static struct attn_hw_reg *ysdm_int_bb_a0_regs[1] = {
+	&ysdm_int0_bb_a0,
+};
+
+static const u16 ysdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg ysdm_int0_bb_b0 = {
+	0, 26, ysdm_int0_bb_b0_attn_idx, 0xf90040, 0xf9004c, 0xf90048, 0xf90044
+};
+
+static struct attn_hw_reg *ysdm_int_bb_b0_regs[1] = {
+	&ysdm_int0_bb_b0,
+};
+
+static const u16 ysdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg ysdm_int0_k2 = {
+	0, 28, ysdm_int0_k2_attn_idx, 0xf90040, 0xf9004c, 0xf90048, 0xf90044
+};
+
+static struct attn_hw_reg *ysdm_int_k2_regs[1] = {
+	&ysdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ysdm_prty_attn_desc[9] = {
+	"ysdm_mem008_i_mem_prty",
+	"ysdm_mem007_i_mem_prty",
+	"ysdm_mem006_i_mem_prty",
+	"ysdm_mem005_i_mem_prty",
+	"ysdm_mem002_i_mem_prty",
+	"ysdm_mem009_i_mem_prty",
+	"ysdm_mem001_i_mem_prty",
+	"ysdm_mem003_i_mem_prty",
+	"ysdm_mem004_i_mem_prty",
+};
+#else
+#define ysdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ysdm_prty1_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg ysdm_prty1_bb_a0 = {
+	0, 9, ysdm_prty1_bb_a0_attn_idx, 0xf90200, 0xf9020c, 0xf90208, 0xf90204
+};
+
+static struct attn_hw_reg *ysdm_prty_bb_a0_regs[1] = {
+	&ysdm_prty1_bb_a0,
+};
+
+static const u16 ysdm_prty1_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg ysdm_prty1_bb_b0 = {
+	0, 9, ysdm_prty1_bb_b0_attn_idx, 0xf90200, 0xf9020c, 0xf90208, 0xf90204
+};
+
+static struct attn_hw_reg *ysdm_prty_bb_b0_regs[1] = {
+	&ysdm_prty1_bb_b0,
+};
+
+static const u16 ysdm_prty1_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg ysdm_prty1_k2 = {
+	0, 9, ysdm_prty1_k2_attn_idx, 0xf90200, 0xf9020c, 0xf90208, 0xf90204
+};
+
+static struct attn_hw_reg *ysdm_prty_k2_regs[1] = {
+	&ysdm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *psdm_int_attn_desc[28] = {
+	"psdm_address_error",
+	"psdm_inp_queue_error",
+	"psdm_delay_fifo_error",
+	"psdm_async_host_error",
+	"psdm_prm_fifo_error",
+	"psdm_ccfc_load_pend_error",
+	"psdm_tcfc_load_pend_error",
+	"psdm_dst_int_ram_wait_error",
+	"psdm_dst_pas_buf_wait_error",
+	"psdm_dst_pxp_immed_error",
+	"psdm_dst_pxp_dst_pend_error",
+	"psdm_dst_brb_src_pend_error",
+	"psdm_dst_brb_src_addr_error",
+	"psdm_rsp_brb_pend_error",
+	"psdm_rsp_int_ram_pend_error",
+	"psdm_rsp_brb_rd_data_error",
+	"psdm_rsp_int_ram_rd_data_error",
+	"psdm_rsp_pxp_rd_data_error",
+	"psdm_cm_delay_error",
+	"psdm_sh_delay_error",
+	"psdm_cmpl_pend_error",
+	"psdm_cprm_pend_error",
+	"psdm_timer_addr_error",
+	"psdm_timer_pend_error",
+	"psdm_dorq_dpm_error",
+	"psdm_dst_pxp_done_error",
+	"psdm_xcm_rmt_buffer_error",
+	"psdm_ycm_rmt_buffer_error",
+};
+#else
+#define psdm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 psdm_int0_bb_a0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg psdm_int0_bb_a0 = {
+	0, 26, psdm_int0_bb_a0_attn_idx, 0xfa0040, 0xfa004c, 0xfa0048, 0xfa0044
+};
+
+static struct attn_hw_reg *psdm_int_bb_a0_regs[1] = {
+	&psdm_int0_bb_a0,
+};
+
+static const u16 psdm_int0_bb_b0_attn_idx[26] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25,
+};
+
+static struct attn_hw_reg psdm_int0_bb_b0 = {
+	0, 26, psdm_int0_bb_b0_attn_idx, 0xfa0040, 0xfa004c, 0xfa0048, 0xfa0044
+};
+
+static struct attn_hw_reg *psdm_int_bb_b0_regs[1] = {
+	&psdm_int0_bb_b0,
+};
+
+static const u16 psdm_int0_k2_attn_idx[28] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27,
+};
+
+static struct attn_hw_reg psdm_int0_k2 = {
+	0, 28, psdm_int0_k2_attn_idx, 0xfa0040, 0xfa004c, 0xfa0048, 0xfa0044
+};
+
+static struct attn_hw_reg *psdm_int_k2_regs[1] = {
+	&psdm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *psdm_prty_attn_desc[9] = {
+	"psdm_mem008_i_mem_prty",
+	"psdm_mem007_i_mem_prty",
+	"psdm_mem006_i_mem_prty",
+	"psdm_mem005_i_mem_prty",
+	"psdm_mem002_i_mem_prty",
+	"psdm_mem009_i_mem_prty",
+	"psdm_mem001_i_mem_prty",
+	"psdm_mem003_i_mem_prty",
+	"psdm_mem004_i_mem_prty",
+};
+#else
+#define psdm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 psdm_prty1_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psdm_prty1_bb_a0 = {
+	0, 9, psdm_prty1_bb_a0_attn_idx, 0xfa0200, 0xfa020c, 0xfa0208, 0xfa0204
+};
+
+static struct attn_hw_reg *psdm_prty_bb_a0_regs[1] = {
+	&psdm_prty1_bb_a0,
+};
+
+static const u16 psdm_prty1_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psdm_prty1_bb_b0 = {
+	0, 9, psdm_prty1_bb_b0_attn_idx, 0xfa0200, 0xfa020c, 0xfa0208, 0xfa0204
+};
+
+static struct attn_hw_reg *psdm_prty_bb_b0_regs[1] = {
+	&psdm_prty1_bb_b0,
+};
+
+static const u16 psdm_prty1_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psdm_prty1_k2 = {
+	0, 9, psdm_prty1_k2_attn_idx, 0xfa0200, 0xfa020c, 0xfa0208, 0xfa0204
+};
+
+static struct attn_hw_reg *psdm_prty_k2_regs[1] = {
+	&psdm_prty1_k2,
+};
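+
+/* All six SDM instances above (tsdm/msdm/usdm/xsdm/ysdm/psdm) share the
+ * same 28-entry interrupt description layout; the BB index tables expose
+ * only bits 0-25, while K2 adds the two trailing xcm/ycm remote-buffer
+ * errors (indices 26 and 27).
+ */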
+
+#ifdef ATTN_DESC
+static const char *tsem_int_attn_desc[46] = {
+	"tsem_address_error",
+	"tsem_fic_last_error",
+	"tsem_fic_length_error",
+	"tsem_fic_fifo_error",
+	"tsem_pas_buf_fifo_error",
+	"tsem_sync_fin_pop_error",
+	"tsem_sync_dra_wr_push_error",
+	"tsem_sync_dra_wr_pop_error",
+	"tsem_sync_dra_rd_push_error",
+	"tsem_sync_dra_rd_pop_error",
+	"tsem_sync_fin_push_error",
+	"tsem_sem_fast_address_error",
+	"tsem_cam_lsb_inp_fifo",
+	"tsem_cam_msb_inp_fifo",
+	"tsem_cam_out_fifo",
+	"tsem_fin_fifo",
+	"tsem_thread_fifo_error",
+	"tsem_thread_overrun",
+	"tsem_sync_ext_store_push_error",
+	"tsem_sync_ext_store_pop_error",
+	"tsem_sync_ext_load_push_error",
+	"tsem_sync_ext_load_pop_error",
+	"tsem_sync_ram_rd_push_error",
+	"tsem_sync_ram_rd_pop_error",
+	"tsem_sync_ram_wr_pop_error",
+	"tsem_sync_ram_wr_push_error",
+	"tsem_sync_dbg_push_error",
+	"tsem_sync_dbg_pop_error",
+	"tsem_dbg_fifo_error",
+	"tsem_cam_msb2_inp_fifo",
+	"tsem_vfc_interrupt",
+	"tsem_vfc_out_fifo_error",
+	"tsem_storm_stack_uf_attn",
+	"tsem_storm_stack_of_attn",
+	"tsem_storm_runtime_error",
+	"tsem_ext_load_pend_wr_error",
+	"tsem_thread_rls_orun_error",
+	"tsem_thread_rls_aloc_error",
+	"tsem_thread_rls_vld_error",
+	"tsem_ext_thread_oor_error",
+	"tsem_ord_id_fifo_error",
+	"tsem_invld_foc_error",
+	"tsem_ext_ld_len_error",
+	"tsem_thrd_ord_fifo_error",
+	"tsem_invld_thrd_ord_error",
+	"tsem_fast_memory_address_error",
+};
+#else
+#define tsem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tsem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tsem_int0_bb_a0 = {
+	0, 32, tsem_int0_bb_a0_attn_idx, 0x1700040, 0x170004c, 0x1700048,
+	0x1700044
+};
+
+static const u16 tsem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg tsem_int1_bb_a0 = {
+	1, 13, tsem_int1_bb_a0_attn_idx, 0x1700050, 0x170005c, 0x1700058,
+	0x1700054
+};
+
+static const u16 tsem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg tsem_fast_memory_int0_bb_a0 = {
+	2, 1, tsem_fast_memory_int0_bb_a0_attn_idx, 0x1740040, 0x174004c,
+	0x1740048, 0x1740044
+};
+
+static struct attn_hw_reg *tsem_int_bb_a0_regs[3] = {
+	&tsem_int0_bb_a0, &tsem_int1_bb_a0, &tsem_fast_memory_int0_bb_a0,
+};
+
+static const u16 tsem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tsem_int0_bb_b0 = {
+	0, 32, tsem_int0_bb_b0_attn_idx, 0x1700040, 0x170004c, 0x1700048,
+	0x1700044
+};
+
+static const u16 tsem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg tsem_int1_bb_b0 = {
+	1, 13, tsem_int1_bb_b0_attn_idx, 0x1700050, 0x170005c, 0x1700058,
+	0x1700054
+};
+
+static const u16 tsem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg tsem_fast_memory_int0_bb_b0 = {
+	2, 1, tsem_fast_memory_int0_bb_b0_attn_idx, 0x1740040, 0x174004c,
+	0x1740048, 0x1740044
+};
+
+static struct attn_hw_reg *tsem_int_bb_b0_regs[3] = {
+	&tsem_int0_bb_b0, &tsem_int1_bb_b0, &tsem_fast_memory_int0_bb_b0,
+};
+
+static const u16 tsem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg tsem_int0_k2 = {
+	0, 32, tsem_int0_k2_attn_idx, 0x1700040, 0x170004c, 0x1700048,
+	0x1700044
+};
+
+static const u16 tsem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg tsem_int1_k2 = {
+	1, 13, tsem_int1_k2_attn_idx, 0x1700050, 0x170005c, 0x1700058,
+	0x1700054
+};
+
+static const u16 tsem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg tsem_fast_memory_int0_k2 = {
+	2, 1, tsem_fast_memory_int0_k2_attn_idx, 0x1740040, 0x174004c,
+	0x1740048,
+	0x1740044
+};
+
+static struct attn_hw_reg *tsem_int_k2_regs[3] = {
+	&tsem_int0_k2, &tsem_int1_k2, &tsem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tsem_prty_attn_desc[23] = {
+	"tsem_vfc_rbc_parity_error",
+	"tsem_storm_rf_parity_error",
+	"tsem_reg_gen_parity_error",
+	"tsem_mem005_i_ecc_0_rf_int",
+	"tsem_mem005_i_ecc_1_rf_int",
+	"tsem_mem004_i_mem_prty",
+	"tsem_mem002_i_mem_prty",
+	"tsem_mem003_i_mem_prty",
+	"tsem_mem001_i_mem_prty",
+	"tsem_fast_memory_mem024_i_mem_prty",
+	"tsem_fast_memory_mem023_i_mem_prty",
+	"tsem_fast_memory_mem022_i_mem_prty",
+	"tsem_fast_memory_mem021_i_mem_prty",
+	"tsem_fast_memory_mem020_i_mem_prty",
+	"tsem_fast_memory_mem019_i_mem_prty",
+	"tsem_fast_memory_mem018_i_mem_prty",
+	"tsem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"tsem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"tsem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"tsem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"tsem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"tsem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"tsem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define tsem_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tsem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg tsem_prty0_bb_a0 = {
+	0, 3, tsem_prty0_bb_a0_attn_idx, 0x17000c8, 0x17000d4, 0x17000d0,
+	0x17000cc
+};
+
+static const u16 tsem_prty1_bb_a0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg tsem_prty1_bb_a0 = {
+	1, 6, tsem_prty1_bb_a0_attn_idx, 0x1700200, 0x170020c, 0x1700208,
+	0x1700204
+};
+
+static const u16 tsem_fast_memory_vfc_config_prty1_bb_a0_attn_idx[6] = {
+	16, 17, 19, 20, 21, 22,
+};
+
+static struct attn_hw_reg tsem_fast_memory_vfc_config_prty1_bb_a0 = {
+	2, 6, tsem_fast_memory_vfc_config_prty1_bb_a0_attn_idx, 0x174a200,
+	0x174a20c, 0x174a208, 0x174a204
+};
+
+static struct attn_hw_reg *tsem_prty_bb_a0_regs[3] = {
+	&tsem_prty0_bb_a0, &tsem_prty1_bb_a0,
+	&tsem_fast_memory_vfc_config_prty1_bb_a0,
+};
+
+static const u16 tsem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg tsem_prty0_bb_b0 = {
+	0, 3, tsem_prty0_bb_b0_attn_idx, 0x17000c8, 0x17000d4, 0x17000d0,
+	0x17000cc
+};
+
+static const u16 tsem_prty1_bb_b0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg tsem_prty1_bb_b0 = {
+	1, 6, tsem_prty1_bb_b0_attn_idx, 0x1700200, 0x170020c, 0x1700208,
+	0x1700204
+};
+
+static const u16 tsem_fast_memory_vfc_config_prty1_bb_b0_attn_idx[6] = {
+	16, 17, 19, 20, 21, 22,
+};
+
+static struct attn_hw_reg tsem_fast_memory_vfc_config_prty1_bb_b0 = {
+	2, 6, tsem_fast_memory_vfc_config_prty1_bb_b0_attn_idx, 0x174a200,
+	0x174a20c, 0x174a208, 0x174a204
+};
+
+static struct attn_hw_reg *tsem_prty_bb_b0_regs[3] = {
+	&tsem_prty0_bb_b0, &tsem_prty1_bb_b0,
+	&tsem_fast_memory_vfc_config_prty1_bb_b0,
+};
+
+static const u16 tsem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg tsem_prty0_k2 = {
+	0, 3, tsem_prty0_k2_attn_idx, 0x17000c8, 0x17000d4, 0x17000d0,
+	0x17000cc
+};
+
+static const u16 tsem_prty1_k2_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg tsem_prty1_k2 = {
+	1, 6, tsem_prty1_k2_attn_idx, 0x1700200, 0x170020c, 0x1700208,
+	0x1700204
+};
+
+static const u16 tsem_fast_memory_prty1_k2_attn_idx[7] = {
+	9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg tsem_fast_memory_prty1_k2 = {
+	2, 7, tsem_fast_memory_prty1_k2_attn_idx, 0x1740200, 0x174020c,
+	0x1740208,
+	0x1740204
+};
+
+static const u16 tsem_fast_memory_vfc_config_prty1_k2_attn_idx[6] = {
+	16, 17, 18, 19, 20, 21,
+};
+
+static struct attn_hw_reg tsem_fast_memory_vfc_config_prty1_k2 = {
+	3, 6, tsem_fast_memory_vfc_config_prty1_k2_attn_idx, 0x174a200,
+	0x174a20c,
+	0x174a208, 0x174a204
+};
+
+static struct attn_hw_reg *tsem_prty_k2_regs[4] = {
+	&tsem_prty0_k2, &tsem_prty1_k2, &tsem_fast_memory_prty1_k2,
+	&tsem_fast_memory_vfc_config_prty1_k2,
+};
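+
+/* Unlike the BB variants, the K2 parity set for tsem includes a separate
+ * fast_memory parity register (indices 9-15 of tsem_prty_attn_desc),
+ * hence the four-entry tsem_prty_k2_regs[] above; the same asymmetry
+ * repeats for the remaining SEM instances below.
+ */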
+
+#ifdef ATTN_DESC
+static const char *msem_int_attn_desc[46] = {
+	"msem_address_error",
+	"msem_fic_last_error",
+	"msem_fic_length_error",
+	"msem_fic_fifo_error",
+	"msem_pas_buf_fifo_error",
+	"msem_sync_fin_pop_error",
+	"msem_sync_dra_wr_push_error",
+	"msem_sync_dra_wr_pop_error",
+	"msem_sync_dra_rd_push_error",
+	"msem_sync_dra_rd_pop_error",
+	"msem_sync_fin_push_error",
+	"msem_sem_fast_address_error",
+	"msem_cam_lsb_inp_fifo",
+	"msem_cam_msb_inp_fifo",
+	"msem_cam_out_fifo",
+	"msem_fin_fifo",
+	"msem_thread_fifo_error",
+	"msem_thread_overrun",
+	"msem_sync_ext_store_push_error",
+	"msem_sync_ext_store_pop_error",
+	"msem_sync_ext_load_push_error",
+	"msem_sync_ext_load_pop_error",
+	"msem_sync_ram_rd_push_error",
+	"msem_sync_ram_rd_pop_error",
+	"msem_sync_ram_wr_pop_error",
+	"msem_sync_ram_wr_push_error",
+	"msem_sync_dbg_push_error",
+	"msem_sync_dbg_pop_error",
+	"msem_dbg_fifo_error",
+	"msem_cam_msb2_inp_fifo",
+	"msem_vfc_interrupt",
+	"msem_vfc_out_fifo_error",
+	"msem_storm_stack_uf_attn",
+	"msem_storm_stack_of_attn",
+	"msem_storm_runtime_error",
+	"msem_ext_load_pend_wr_error",
+	"msem_thread_rls_orun_error",
+	"msem_thread_rls_aloc_error",
+	"msem_thread_rls_vld_error",
+	"msem_ext_thread_oor_error",
+	"msem_ord_id_fifo_error",
+	"msem_invld_foc_error",
+	"msem_ext_ld_len_error",
+	"msem_thrd_ord_fifo_error",
+	"msem_invld_thrd_ord_error",
+	"msem_fast_memory_address_error",
+};
+#else
+#define msem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 msem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg msem_int0_bb_a0 = {
+	0, 32, msem_int0_bb_a0_attn_idx, 0x1800040, 0x180004c, 0x1800048,
+	0x1800044
+};
+
+static const u16 msem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg msem_int1_bb_a0 = {
+	1, 13, msem_int1_bb_a0_attn_idx, 0x1800050, 0x180005c, 0x1800058,
+	0x1800054
+};
+
+static const u16 msem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg msem_fast_memory_int0_bb_a0 = {
+	2, 1, msem_fast_memory_int0_bb_a0_attn_idx, 0x1840040, 0x184004c,
+	0x1840048, 0x1840044
+};
+
+static struct attn_hw_reg *msem_int_bb_a0_regs[3] = {
+	&msem_int0_bb_a0, &msem_int1_bb_a0, &msem_fast_memory_int0_bb_a0,
+};
+
+static const u16 msem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg msem_int0_bb_b0 = {
+	0, 32, msem_int0_bb_b0_attn_idx, 0x1800040, 0x180004c, 0x1800048,
+	0x1800044
+};
+
+static const u16 msem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg msem_int1_bb_b0 = {
+	1, 13, msem_int1_bb_b0_attn_idx, 0x1800050, 0x180005c, 0x1800058,
+	0x1800054
+};
+
+static const u16 msem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg msem_fast_memory_int0_bb_b0 = {
+	2, 1, msem_fast_memory_int0_bb_b0_attn_idx, 0x1840040, 0x184004c,
+	0x1840048, 0x1840044
+};
+
+static struct attn_hw_reg *msem_int_bb_b0_regs[3] = {
+	&msem_int0_bb_b0, &msem_int1_bb_b0, &msem_fast_memory_int0_bb_b0,
+};
+
+static const u16 msem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg msem_int0_k2 = {
+	0, 32, msem_int0_k2_attn_idx, 0x1800040, 0x180004c, 0x1800048,
+	0x1800044
+};
+
+static const u16 msem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg msem_int1_k2 = {
+	1, 13, msem_int1_k2_attn_idx, 0x1800050, 0x180005c, 0x1800058,
+	0x1800054
+};
+
+static const u16 msem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg msem_fast_memory_int0_k2 = {
+	2, 1, msem_fast_memory_int0_k2_attn_idx, 0x1840040, 0x184004c,
+	0x1840048,
+	0x1840044
+};
+
+static struct attn_hw_reg *msem_int_k2_regs[3] = {
+	&msem_int0_k2, &msem_int1_k2, &msem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *msem_prty_attn_desc[23] = {
+	"msem_vfc_rbc_parity_error",
+	"msem_storm_rf_parity_error",
+	"msem_reg_gen_parity_error",
+	"msem_mem005_i_ecc_0_rf_int",
+	"msem_mem005_i_ecc_1_rf_int",
+	"msem_mem004_i_mem_prty",
+	"msem_mem002_i_mem_prty",
+	"msem_mem003_i_mem_prty",
+	"msem_mem001_i_mem_prty",
+	"msem_fast_memory_mem024_i_mem_prty",
+	"msem_fast_memory_mem023_i_mem_prty",
+	"msem_fast_memory_mem022_i_mem_prty",
+	"msem_fast_memory_mem021_i_mem_prty",
+	"msem_fast_memory_mem020_i_mem_prty",
+	"msem_fast_memory_mem019_i_mem_prty",
+	"msem_fast_memory_mem018_i_mem_prty",
+	"msem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"msem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"msem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"msem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"msem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"msem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"msem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define msem_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 msem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg msem_prty0_bb_a0 = {
+	0, 3, msem_prty0_bb_a0_attn_idx, 0x18000c8, 0x18000d4, 0x18000d0,
+	0x18000cc
+};
+
+static const u16 msem_prty1_bb_a0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg msem_prty1_bb_a0 = {
+	1, 6, msem_prty1_bb_a0_attn_idx, 0x1800200, 0x180020c, 0x1800208,
+	0x1800204
+};
+
+static struct attn_hw_reg *msem_prty_bb_a0_regs[2] = {
+	&msem_prty0_bb_a0, &msem_prty1_bb_a0,
+};
+
+static const u16 msem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg msem_prty0_bb_b0 = {
+	0, 3, msem_prty0_bb_b0_attn_idx, 0x18000c8, 0x18000d4, 0x18000d0,
+	0x18000cc
+};
+
+static const u16 msem_prty1_bb_b0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg msem_prty1_bb_b0 = {
+	1, 6, msem_prty1_bb_b0_attn_idx, 0x1800200, 0x180020c, 0x1800208,
+	0x1800204
+};
+
+static struct attn_hw_reg *msem_prty_bb_b0_regs[2] = {
+	&msem_prty0_bb_b0, &msem_prty1_bb_b0,
+};
+
+static const u16 msem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg msem_prty0_k2 = {
+	0, 3, msem_prty0_k2_attn_idx, 0x18000c8, 0x18000d4, 0x18000d0,
+	0x18000cc
+};
+
+static const u16 msem_prty1_k2_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg msem_prty1_k2 = {
+	1, 6, msem_prty1_k2_attn_idx, 0x1800200, 0x180020c, 0x1800208,
+	0x1800204
+};
+
+static const u16 msem_fast_memory_prty1_k2_attn_idx[7] = {
+	9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg msem_fast_memory_prty1_k2 = {
+	2, 7, msem_fast_memory_prty1_k2_attn_idx, 0x1840200, 0x184020c,
+	0x1840208,
+	0x1840204
+};
+
+static struct attn_hw_reg *msem_prty_k2_regs[3] = {
+	&msem_prty0_k2, &msem_prty1_k2, &msem_fast_memory_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *usem_int_attn_desc[46] = {
+	"usem_address_error",
+	"usem_fic_last_error",
+	"usem_fic_length_error",
+	"usem_fic_fifo_error",
+	"usem_pas_buf_fifo_error",
+	"usem_sync_fin_pop_error",
+	"usem_sync_dra_wr_push_error",
+	"usem_sync_dra_wr_pop_error",
+	"usem_sync_dra_rd_push_error",
+	"usem_sync_dra_rd_pop_error",
+	"usem_sync_fin_push_error",
+	"usem_sem_fast_address_error",
+	"usem_cam_lsb_inp_fifo",
+	"usem_cam_msb_inp_fifo",
+	"usem_cam_out_fifo",
+	"usem_fin_fifo",
+	"usem_thread_fifo_error",
+	"usem_thread_overrun",
+	"usem_sync_ext_store_push_error",
+	"usem_sync_ext_store_pop_error",
+	"usem_sync_ext_load_push_error",
+	"usem_sync_ext_load_pop_error",
+	"usem_sync_ram_rd_push_error",
+	"usem_sync_ram_rd_pop_error",
+	"usem_sync_ram_wr_pop_error",
+	"usem_sync_ram_wr_push_error",
+	"usem_sync_dbg_push_error",
+	"usem_sync_dbg_pop_error",
+	"usem_dbg_fifo_error",
+	"usem_cam_msb2_inp_fifo",
+	"usem_vfc_interrupt",
+	"usem_vfc_out_fifo_error",
+	"usem_storm_stack_uf_attn",
+	"usem_storm_stack_of_attn",
+	"usem_storm_runtime_error",
+	"usem_ext_load_pend_wr_error",
+	"usem_thread_rls_orun_error",
+	"usem_thread_rls_aloc_error",
+	"usem_thread_rls_vld_error",
+	"usem_ext_thread_oor_error",
+	"usem_ord_id_fifo_error",
+	"usem_invld_foc_error",
+	"usem_ext_ld_len_error",
+	"usem_thrd_ord_fifo_error",
+	"usem_invld_thrd_ord_error",
+	"usem_fast_memory_address_error",
+};
+#else
+#define usem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 usem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg usem_int0_bb_a0 = {
+	0, 32, usem_int0_bb_a0_attn_idx, 0x1900040, 0x190004c, 0x1900048,
+	0x1900044
+};
+
+static const u16 usem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg usem_int1_bb_a0 = {
+	1, 13, usem_int1_bb_a0_attn_idx, 0x1900050, 0x190005c, 0x1900058,
+	0x1900054
+};
+
+static const u16 usem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg usem_fast_memory_int0_bb_a0 = {
+	2, 1, usem_fast_memory_int0_bb_a0_attn_idx, 0x1940040, 0x194004c,
+	0x1940048, 0x1940044
+};
+
+static struct attn_hw_reg *usem_int_bb_a0_regs[3] = {
+	&usem_int0_bb_a0, &usem_int1_bb_a0, &usem_fast_memory_int0_bb_a0,
+};
+
+static const u16 usem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg usem_int0_bb_b0 = {
+	0, 32, usem_int0_bb_b0_attn_idx, 0x1900040, 0x190004c, 0x1900048,
+	0x1900044
+};
+
+static const u16 usem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg usem_int1_bb_b0 = {
+	1, 13, usem_int1_bb_b0_attn_idx, 0x1900050, 0x190005c, 0x1900058,
+	0x1900054
+};
+
+static const u16 usem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg usem_fast_memory_int0_bb_b0 = {
+	2, 1, usem_fast_memory_int0_bb_b0_attn_idx, 0x1940040, 0x194004c,
+	0x1940048, 0x1940044
+};
+
+static struct attn_hw_reg *usem_int_bb_b0_regs[3] = {
+	&usem_int0_bb_b0, &usem_int1_bb_b0, &usem_fast_memory_int0_bb_b0,
+};
+
+static const u16 usem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg usem_int0_k2 = {
+	0, 32, usem_int0_k2_attn_idx, 0x1900040, 0x190004c, 0x1900048,
+	0x1900044
+};
+
+static const u16 usem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg usem_int1_k2 = {
+	1, 13, usem_int1_k2_attn_idx, 0x1900050, 0x190005c, 0x1900058,
+	0x1900054
+};
+
+static const u16 usem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg usem_fast_memory_int0_k2 = {
+	2, 1, usem_fast_memory_int0_k2_attn_idx, 0x1940040, 0x194004c,
+	0x1940048,
+	0x1940044
+};
+
+static struct attn_hw_reg *usem_int_k2_regs[3] = {
+	&usem_int0_k2, &usem_int1_k2, &usem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *usem_prty_attn_desc[23] = {
+	"usem_vfc_rbc_parity_error",
+	"usem_storm_rf_parity_error",
+	"usem_reg_gen_parity_error",
+	"usem_mem005_i_ecc_0_rf_int",
+	"usem_mem005_i_ecc_1_rf_int",
+	"usem_mem004_i_mem_prty",
+	"usem_mem002_i_mem_prty",
+	"usem_mem003_i_mem_prty",
+	"usem_mem001_i_mem_prty",
+	"usem_fast_memory_mem024_i_mem_prty",
+	"usem_fast_memory_mem023_i_mem_prty",
+	"usem_fast_memory_mem022_i_mem_prty",
+	"usem_fast_memory_mem021_i_mem_prty",
+	"usem_fast_memory_mem020_i_mem_prty",
+	"usem_fast_memory_mem019_i_mem_prty",
+	"usem_fast_memory_mem018_i_mem_prty",
+	"usem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"usem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"usem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"usem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"usem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"usem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"usem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define usem_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 usem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg usem_prty0_bb_a0 = {
+	0, 3, usem_prty0_bb_a0_attn_idx, 0x19000c8, 0x19000d4, 0x19000d0,
+	0x19000cc
+};
+
+static const u16 usem_prty1_bb_a0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg usem_prty1_bb_a0 = {
+	1, 6, usem_prty1_bb_a0_attn_idx, 0x1900200, 0x190020c, 0x1900208,
+	0x1900204
+};
+
+static struct attn_hw_reg *usem_prty_bb_a0_regs[2] = {
+	&usem_prty0_bb_a0, &usem_prty1_bb_a0,
+};
+
+static const u16 usem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg usem_prty0_bb_b0 = {
+	0, 3, usem_prty0_bb_b0_attn_idx, 0x19000c8, 0x19000d4, 0x19000d0,
+	0x19000cc
+};
+
+static const u16 usem_prty1_bb_b0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg usem_prty1_bb_b0 = {
+	1, 6, usem_prty1_bb_b0_attn_idx, 0x1900200, 0x190020c, 0x1900208,
+	0x1900204
+};
+
+static struct attn_hw_reg *usem_prty_bb_b0_regs[2] = {
+	&usem_prty0_bb_b0, &usem_prty1_bb_b0,
+};
+
+static const u16 usem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg usem_prty0_k2 = {
+	0, 3, usem_prty0_k2_attn_idx, 0x19000c8, 0x19000d4, 0x19000d0,
+	0x19000cc
+};
+
+static const u16 usem_prty1_k2_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg usem_prty1_k2 = {
+	1, 6, usem_prty1_k2_attn_idx, 0x1900200, 0x190020c, 0x1900208,
+	0x1900204
+};
+
+static const u16 usem_fast_memory_prty1_k2_attn_idx[7] = {
+	9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg usem_fast_memory_prty1_k2 = {
+	2, 7, usem_fast_memory_prty1_k2_attn_idx, 0x1940200, 0x194020c,
+	0x1940208,
+	0x1940204
+};
+
+static struct attn_hw_reg *usem_prty_k2_regs[3] = {
+	&usem_prty0_k2, &usem_prty1_k2, &usem_fast_memory_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xsem_int_attn_desc[46] = {
+	"xsem_address_error",
+	"xsem_fic_last_error",
+	"xsem_fic_length_error",
+	"xsem_fic_fifo_error",
+	"xsem_pas_buf_fifo_error",
+	"xsem_sync_fin_pop_error",
+	"xsem_sync_dra_wr_push_error",
+	"xsem_sync_dra_wr_pop_error",
+	"xsem_sync_dra_rd_push_error",
+	"xsem_sync_dra_rd_pop_error",
+	"xsem_sync_fin_push_error",
+	"xsem_sem_fast_address_error",
+	"xsem_cam_lsb_inp_fifo",
+	"xsem_cam_msb_inp_fifo",
+	"xsem_cam_out_fifo",
+	"xsem_fin_fifo",
+	"xsem_thread_fifo_error",
+	"xsem_thread_overrun",
+	"xsem_sync_ext_store_push_error",
+	"xsem_sync_ext_store_pop_error",
+	"xsem_sync_ext_load_push_error",
+	"xsem_sync_ext_load_pop_error",
+	"xsem_sync_ram_rd_push_error",
+	"xsem_sync_ram_rd_pop_error",
+	"xsem_sync_ram_wr_pop_error",
+	"xsem_sync_ram_wr_push_error",
+	"xsem_sync_dbg_push_error",
+	"xsem_sync_dbg_pop_error",
+	"xsem_dbg_fifo_error",
+	"xsem_cam_msb2_inp_fifo",
+	"xsem_vfc_interrupt",
+	"xsem_vfc_out_fifo_error",
+	"xsem_storm_stack_uf_attn",
+	"xsem_storm_stack_of_attn",
+	"xsem_storm_runtime_error",
+	"xsem_ext_load_pend_wr_error",
+	"xsem_thread_rls_orun_error",
+	"xsem_thread_rls_aloc_error",
+	"xsem_thread_rls_vld_error",
+	"xsem_ext_thread_oor_error",
+	"xsem_ord_id_fifo_error",
+	"xsem_invld_foc_error",
+	"xsem_ext_ld_len_error",
+	"xsem_thrd_ord_fifo_error",
+	"xsem_invld_thrd_ord_error",
+	"xsem_fast_memory_address_error",
+};
+#else
+#define xsem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 xsem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg xsem_int0_bb_a0 = {
+	0, 32, xsem_int0_bb_a0_attn_idx, 0x1400040, 0x140004c, 0x1400048,
+	0x1400044
+};
+
+static const u16 xsem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg xsem_int1_bb_a0 = {
+	1, 13, xsem_int1_bb_a0_attn_idx, 0x1400050, 0x140005c, 0x1400058,
+	0x1400054
+};
+
+static const u16 xsem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg xsem_fast_memory_int0_bb_a0 = {
+	2, 1, xsem_fast_memory_int0_bb_a0_attn_idx, 0x1440040, 0x144004c,
+	0x1440048, 0x1440044
+};
+
+static struct attn_hw_reg *xsem_int_bb_a0_regs[3] = {
+	&xsem_int0_bb_a0, &xsem_int1_bb_a0, &xsem_fast_memory_int0_bb_a0,
+};
+
+static const u16 xsem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg xsem_int0_bb_b0 = {
+	0, 32, xsem_int0_bb_b0_attn_idx, 0x1400040, 0x140004c, 0x1400048,
+	0x1400044
+};
+
+static const u16 xsem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg xsem_int1_bb_b0 = {
+	1, 13, xsem_int1_bb_b0_attn_idx, 0x1400050, 0x140005c, 0x1400058,
+	0x1400054
+};
+
+static const u16 xsem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg xsem_fast_memory_int0_bb_b0 = {
+	2, 1, xsem_fast_memory_int0_bb_b0_attn_idx, 0x1440040, 0x144004c,
+	0x1440048, 0x1440044
+};
+
+static struct attn_hw_reg *xsem_int_bb_b0_regs[3] = {
+	&xsem_int0_bb_b0, &xsem_int1_bb_b0, &xsem_fast_memory_int0_bb_b0,
+};
+
+static const u16 xsem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg xsem_int0_k2 = {
+	0, 32, xsem_int0_k2_attn_idx, 0x1400040, 0x140004c, 0x1400048,
+	0x1400044
+};
+
+static const u16 xsem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg xsem_int1_k2 = {
+	1, 13, xsem_int1_k2_attn_idx, 0x1400050, 0x140005c, 0x1400058,
+	0x1400054
+};
+
+static const u16 xsem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg xsem_fast_memory_int0_k2 = {
+	2, 1, xsem_fast_memory_int0_k2_attn_idx, 0x1440040, 0x144004c,
+	0x1440048,
+	0x1440044
+};
+
+static struct attn_hw_reg *xsem_int_k2_regs[3] = {
+	&xsem_int0_k2, &xsem_int1_k2, &xsem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xsem_prty_attn_desc[24] = {
+	"xsem_vfc_rbc_parity_error",
+	"xsem_storm_rf_parity_error",
+	"xsem_reg_gen_parity_error",
+	"xsem_mem006_i_ecc_0_rf_int",
+	"xsem_mem006_i_ecc_1_rf_int",
+	"xsem_mem005_i_mem_prty",
+	"xsem_mem002_i_mem_prty",
+	"xsem_mem004_i_mem_prty",
+	"xsem_mem003_i_mem_prty",
+	"xsem_mem001_i_mem_prty",
+	"xsem_fast_memory_mem024_i_mem_prty",
+	"xsem_fast_memory_mem023_i_mem_prty",
+	"xsem_fast_memory_mem022_i_mem_prty",
+	"xsem_fast_memory_mem021_i_mem_prty",
+	"xsem_fast_memory_mem020_i_mem_prty",
+	"xsem_fast_memory_mem019_i_mem_prty",
+	"xsem_fast_memory_mem018_i_mem_prty",
+	"xsem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"xsem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"xsem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"xsem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"xsem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"xsem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"xsem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define xsem_prty_attn_desc OSAL_NULL
+#endif
+
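+/*
+ * As with every *_attn_desc table in this file, the description strings are
+ * compiled in only when ATTN_DESC is defined; otherwise the name resolves
+ * to OSAL_NULL and the attention tables carry no string overhead.
+ */
+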
+static const u16 xsem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg xsem_prty0_bb_a0 = {
+	0, 3, xsem_prty0_bb_a0_attn_idx, 0x14000c8, 0x14000d4, 0x14000d0,
+	0x14000cc
+};
+
+static const u16 xsem_prty1_bb_a0_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsem_prty1_bb_a0 = {
+	1, 7, xsem_prty1_bb_a0_attn_idx, 0x1400200, 0x140020c, 0x1400208,
+	0x1400204
+};
+
+static struct attn_hw_reg *xsem_prty_bb_a0_regs[2] = {
+	&xsem_prty0_bb_a0, &xsem_prty1_bb_a0,
+};
+
+static const u16 xsem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg xsem_prty0_bb_b0 = {
+	0, 3, xsem_prty0_bb_b0_attn_idx, 0x14000c8, 0x14000d4, 0x14000d0,
+	0x14000cc
+};
+
+static const u16 xsem_prty1_bb_b0_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsem_prty1_bb_b0 = {
+	1, 7, xsem_prty1_bb_b0_attn_idx, 0x1400200, 0x140020c, 0x1400208,
+	0x1400204
+};
+
+static struct attn_hw_reg *xsem_prty_bb_b0_regs[2] = {
+	&xsem_prty0_bb_b0, &xsem_prty1_bb_b0,
+};
+
+static const u16 xsem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg xsem_prty0_k2 = {
+	0, 3, xsem_prty0_k2_attn_idx, 0x14000c8, 0x14000d4, 0x14000d0,
+	0x14000cc
+};
+
+static const u16 xsem_prty1_k2_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg xsem_prty1_k2 = {
+	1, 7, xsem_prty1_k2_attn_idx, 0x1400200, 0x140020c, 0x1400208,
+	0x1400204
+};
+
+static const u16 xsem_fast_memory_prty1_k2_attn_idx[7] = {
+	10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg xsem_fast_memory_prty1_k2 = {
+	2, 7, xsem_fast_memory_prty1_k2_attn_idx, 0x1440200, 0x144020c,
+	0x1440208, 0x1440204
+};
+
+static struct attn_hw_reg *xsem_prty_k2_regs[3] = {
+	&xsem_prty0_k2, &xsem_prty1_k2, &xsem_fast_memory_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ysem_int_attn_desc[46] = {
+	"ysem_address_error",
+	"ysem_fic_last_error",
+	"ysem_fic_length_error",
+	"ysem_fic_fifo_error",
+	"ysem_pas_buf_fifo_error",
+	"ysem_sync_fin_pop_error",
+	"ysem_sync_dra_wr_push_error",
+	"ysem_sync_dra_wr_pop_error",
+	"ysem_sync_dra_rd_push_error",
+	"ysem_sync_dra_rd_pop_error",
+	"ysem_sync_fin_push_error",
+	"ysem_sem_fast_address_error",
+	"ysem_cam_lsb_inp_fifo",
+	"ysem_cam_msb_inp_fifo",
+	"ysem_cam_out_fifo",
+	"ysem_fin_fifo",
+	"ysem_thread_fifo_error",
+	"ysem_thread_overrun",
+	"ysem_sync_ext_store_push_error",
+	"ysem_sync_ext_store_pop_error",
+	"ysem_sync_ext_load_push_error",
+	"ysem_sync_ext_load_pop_error",
+	"ysem_sync_ram_rd_push_error",
+	"ysem_sync_ram_rd_pop_error",
+	"ysem_sync_ram_wr_pop_error",
+	"ysem_sync_ram_wr_push_error",
+	"ysem_sync_dbg_push_error",
+	"ysem_sync_dbg_pop_error",
+	"ysem_dbg_fifo_error",
+	"ysem_cam_msb2_inp_fifo",
+	"ysem_vfc_interrupt",
+	"ysem_vfc_out_fifo_error",
+	"ysem_storm_stack_uf_attn",
+	"ysem_storm_stack_of_attn",
+	"ysem_storm_runtime_error",
+	"ysem_ext_load_pend_wr_error",
+	"ysem_thread_rls_orun_error",
+	"ysem_thread_rls_aloc_error",
+	"ysem_thread_rls_vld_error",
+	"ysem_ext_thread_oor_error",
+	"ysem_ord_id_fifo_error",
+	"ysem_invld_foc_error",
+	"ysem_ext_ld_len_error",
+	"ysem_thrd_ord_fifo_error",
+	"ysem_invld_thrd_ord_error",
+	"ysem_fast_memory_address_error",
+};
+#else
+#define ysem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ysem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg ysem_int0_bb_a0 = {
+	0, 32, ysem_int0_bb_a0_attn_idx, 0x1500040, 0x150004c, 0x1500048,
+	0x1500044
+};
+
+static const u16 ysem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg ysem_int1_bb_a0 = {
+	1, 13, ysem_int1_bb_a0_attn_idx, 0x1500050, 0x150005c, 0x1500058,
+	0x1500054
+};
+
+static const u16 ysem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg ysem_fast_memory_int0_bb_a0 = {
+	2, 1, ysem_fast_memory_int0_bb_a0_attn_idx, 0x1540040, 0x154004c,
+	0x1540048, 0x1540044
+};
+
+static struct attn_hw_reg *ysem_int_bb_a0_regs[3] = {
+	&ysem_int0_bb_a0, &ysem_int1_bb_a0, &ysem_fast_memory_int0_bb_a0,
+};
+
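+/*
+ * Consumer sketch (illustrative only, not part of the generated data): an
+ * attention handler would walk a per-chip array such as ysem_int_bb_a0_regs
+ * above, read each STS register and resolve asserted bits to names through
+ * the attn_idx map.  Assuming the MMIO read helper ecore_rd() and the
+ * DP_NOTICE() log macro used elsewhere in this driver:
+ *
+ *	for (i = 0; i < 3; i++) {
+ *		struct attn_hw_reg *reg = ysem_int_bb_a0_regs[i];
+ *		u32 sts = ecore_rd(p_hwfn, p_ptt, reg->sts_addr);
+ *
+ *		for (bit = 0; bit < reg->num_of_bits; bit++)
+ *			if (sts & (1 << bit))
+ *				DP_NOTICE(p_hwfn, false, "%s\n",
+ *					  ysem_int_attn_desc[reg->attn_idx[bit]]);
+ *	}
+ */
+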
+static const u16 ysem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg ysem_int0_bb_b0 = {
+	0, 32, ysem_int0_bb_b0_attn_idx, 0x1500040, 0x150004c, 0x1500048,
+	0x1500044
+};
+
+static const u16 ysem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg ysem_int1_bb_b0 = {
+	1, 13, ysem_int1_bb_b0_attn_idx, 0x1500050, 0x150005c, 0x1500058,
+	0x1500054
+};
+
+static const u16 ysem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg ysem_fast_memory_int0_bb_b0 = {
+	2, 1, ysem_fast_memory_int0_bb_b0_attn_idx, 0x1540040, 0x154004c,
+	0x1540048, 0x1540044
+};
+
+static struct attn_hw_reg *ysem_int_bb_b0_regs[3] = {
+	&ysem_int0_bb_b0, &ysem_int1_bb_b0, &ysem_fast_memory_int0_bb_b0,
+};
+
+static const u16 ysem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg ysem_int0_k2 = {
+	0, 32, ysem_int0_k2_attn_idx, 0x1500040, 0x150004c, 0x1500048,
+	0x1500044
+};
+
+static const u16 ysem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg ysem_int1_k2 = {
+	1, 13, ysem_int1_k2_attn_idx, 0x1500050, 0x150005c, 0x1500058,
+	0x1500054
+};
+
+static const u16 ysem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg ysem_fast_memory_int0_k2 = {
+	2, 1, ysem_fast_memory_int0_k2_attn_idx, 0x1540040, 0x154004c,
+	0x1540048, 0x1540044
+};
+
+static struct attn_hw_reg *ysem_int_k2_regs[3] = {
+	&ysem_int0_k2, &ysem_int1_k2, &ysem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ysem_prty_attn_desc[24] = {
+	"ysem_vfc_rbc_parity_error",
+	"ysem_storm_rf_parity_error",
+	"ysem_reg_gen_parity_error",
+	"ysem_mem006_i_ecc_0_rf_int",
+	"ysem_mem006_i_ecc_1_rf_int",
+	"ysem_mem005_i_mem_prty",
+	"ysem_mem002_i_mem_prty",
+	"ysem_mem004_i_mem_prty",
+	"ysem_mem003_i_mem_prty",
+	"ysem_mem001_i_mem_prty",
+	"ysem_fast_memory_mem024_i_mem_prty",
+	"ysem_fast_memory_mem023_i_mem_prty",
+	"ysem_fast_memory_mem022_i_mem_prty",
+	"ysem_fast_memory_mem021_i_mem_prty",
+	"ysem_fast_memory_mem020_i_mem_prty",
+	"ysem_fast_memory_mem019_i_mem_prty",
+	"ysem_fast_memory_mem018_i_mem_prty",
+	"ysem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"ysem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"ysem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"ysem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"ysem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"ysem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"ysem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define ysem_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ysem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg ysem_prty0_bb_a0 = {
+	0, 3, ysem_prty0_bb_a0_attn_idx, 0x15000c8, 0x15000d4, 0x15000d0,
+	0x15000cc
+};
+
+static const u16 ysem_prty1_bb_a0_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg ysem_prty1_bb_a0 = {
+	1, 7, ysem_prty1_bb_a0_attn_idx, 0x1500200, 0x150020c, 0x1500208,
+	0x1500204
+};
+
+static struct attn_hw_reg *ysem_prty_bb_a0_regs[2] = {
+	&ysem_prty0_bb_a0, &ysem_prty1_bb_a0,
+};
+
+static const u16 ysem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg ysem_prty0_bb_b0 = {
+	0, 3, ysem_prty0_bb_b0_attn_idx, 0x15000c8, 0x15000d4, 0x15000d0,
+	0x15000cc
+};
+
+static const u16 ysem_prty1_bb_b0_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg ysem_prty1_bb_b0 = {
+	1, 7, ysem_prty1_bb_b0_attn_idx, 0x1500200, 0x150020c, 0x1500208,
+	0x1500204
+};
+
+static struct attn_hw_reg *ysem_prty_bb_b0_regs[2] = {
+	&ysem_prty0_bb_b0, &ysem_prty1_bb_b0,
+};
+
+static const u16 ysem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg ysem_prty0_k2 = {
+	0, 3, ysem_prty0_k2_attn_idx, 0x15000c8, 0x15000d4, 0x15000d0,
+	0x15000cc
+};
+
+static const u16 ysem_prty1_k2_attn_idx[7] = {
+	3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg ysem_prty1_k2 = {
+	1, 7, ysem_prty1_k2_attn_idx, 0x1500200, 0x150020c, 0x1500208,
+	0x1500204
+};
+
+static const u16 ysem_fast_memory_prty1_k2_attn_idx[7] = {
+	10, 11, 12, 13, 14, 15, 16,
+};
+
+static struct attn_hw_reg ysem_fast_memory_prty1_k2 = {
+	2, 7, ysem_fast_memory_prty1_k2_attn_idx, 0x1540200, 0x154020c,
+	0x1540208, 0x1540204
+};
+
+static struct attn_hw_reg *ysem_prty_k2_regs[3] = {
+	&ysem_prty0_k2, &ysem_prty1_k2, &ysem_fast_memory_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *psem_int_attn_desc[46] = {
+	"psem_address_error",
+	"psem_fic_last_error",
+	"psem_fic_length_error",
+	"psem_fic_fifo_error",
+	"psem_pas_buf_fifo_error",
+	"psem_sync_fin_pop_error",
+	"psem_sync_dra_wr_push_error",
+	"psem_sync_dra_wr_pop_error",
+	"psem_sync_dra_rd_push_error",
+	"psem_sync_dra_rd_pop_error",
+	"psem_sync_fin_push_error",
+	"psem_sem_fast_address_error",
+	"psem_cam_lsb_inp_fifo",
+	"psem_cam_msb_inp_fifo",
+	"psem_cam_out_fifo",
+	"psem_fin_fifo",
+	"psem_thread_fifo_error",
+	"psem_thread_overrun",
+	"psem_sync_ext_store_push_error",
+	"psem_sync_ext_store_pop_error",
+	"psem_sync_ext_load_push_error",
+	"psem_sync_ext_load_pop_error",
+	"psem_sync_ram_rd_push_error",
+	"psem_sync_ram_rd_pop_error",
+	"psem_sync_ram_wr_pop_error",
+	"psem_sync_ram_wr_push_error",
+	"psem_sync_dbg_push_error",
+	"psem_sync_dbg_pop_error",
+	"psem_dbg_fifo_error",
+	"psem_cam_msb2_inp_fifo",
+	"psem_vfc_interrupt",
+	"psem_vfc_out_fifo_error",
+	"psem_storm_stack_uf_attn",
+	"psem_storm_stack_of_attn",
+	"psem_storm_runtime_error",
+	"psem_ext_load_pend_wr_error",
+	"psem_thread_rls_orun_error",
+	"psem_thread_rls_aloc_error",
+	"psem_thread_rls_vld_error",
+	"psem_ext_thread_oor_error",
+	"psem_ord_id_fifo_error",
+	"psem_invld_foc_error",
+	"psem_ext_ld_len_error",
+	"psem_thrd_ord_fifo_error",
+	"psem_invld_thrd_ord_error",
+	"psem_fast_memory_address_error",
+};
+#else
+#define psem_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 psem_int0_bb_a0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg psem_int0_bb_a0 = {
+	0, 32, psem_int0_bb_a0_attn_idx, 0x1600040, 0x160004c, 0x1600048,
+	0x1600044
+};
+
+static const u16 psem_int1_bb_a0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg psem_int1_bb_a0 = {
+	1, 13, psem_int1_bb_a0_attn_idx, 0x1600050, 0x160005c, 0x1600058,
+	0x1600054
+};
+
+static const u16 psem_fast_memory_int0_bb_a0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg psem_fast_memory_int0_bb_a0 = {
+	2, 1, psem_fast_memory_int0_bb_a0_attn_idx, 0x1640040, 0x164004c,
+	0x1640048, 0x1640044
+};
+
+static struct attn_hw_reg *psem_int_bb_a0_regs[3] = {
+	&psem_int0_bb_a0, &psem_int1_bb_a0, &psem_fast_memory_int0_bb_a0,
+};
+
+static const u16 psem_int0_bb_b0_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg psem_int0_bb_b0 = {
+	0, 32, psem_int0_bb_b0_attn_idx, 0x1600040, 0x160004c, 0x1600048,
+	0x1600044
+};
+
+static const u16 psem_int1_bb_b0_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg psem_int1_bb_b0 = {
+	1, 13, psem_int1_bb_b0_attn_idx, 0x1600050, 0x160005c, 0x1600058,
+	0x1600054
+};
+
+static const u16 psem_fast_memory_int0_bb_b0_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg psem_fast_memory_int0_bb_b0 = {
+	2, 1, psem_fast_memory_int0_bb_b0_attn_idx, 0x1640040, 0x164004c,
+	0x1640048, 0x1640044
+};
+
+static struct attn_hw_reg *psem_int_bb_b0_regs[3] = {
+	&psem_int0_bb_b0, &psem_int1_bb_b0, &psem_fast_memory_int0_bb_b0,
+};
+
+static const u16 psem_int0_k2_attn_idx[32] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg psem_int0_k2 = {
+	0, 32, psem_int0_k2_attn_idx, 0x1600040, 0x160004c, 0x1600048,
+	0x1600044
+};
+
+static const u16 psem_int1_k2_attn_idx[13] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+};
+
+static struct attn_hw_reg psem_int1_k2 = {
+	1, 13, psem_int1_k2_attn_idx, 0x1600050, 0x160005c, 0x1600058,
+	0x1600054
+};
+
+static const u16 psem_fast_memory_int0_k2_attn_idx[1] = {
+	45,
+};
+
+static struct attn_hw_reg psem_fast_memory_int0_k2 = {
+	2, 1, psem_fast_memory_int0_k2_attn_idx, 0x1640040, 0x164004c,
+	0x1640048, 0x1640044
+};
+
+static struct attn_hw_reg *psem_int_k2_regs[3] = {
+	&psem_int0_k2, &psem_int1_k2, &psem_fast_memory_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *psem_prty_attn_desc[23] = {
+	"psem_vfc_rbc_parity_error",
+	"psem_storm_rf_parity_error",
+	"psem_reg_gen_parity_error",
+	"psem_mem005_i_ecc_0_rf_int",
+	"psem_mem005_i_ecc_1_rf_int",
+	"psem_mem004_i_mem_prty",
+	"psem_mem002_i_mem_prty",
+	"psem_mem003_i_mem_prty",
+	"psem_mem001_i_mem_prty",
+	"psem_fast_memory_mem024_i_mem_prty",
+	"psem_fast_memory_mem023_i_mem_prty",
+	"psem_fast_memory_mem022_i_mem_prty",
+	"psem_fast_memory_mem021_i_mem_prty",
+	"psem_fast_memory_mem020_i_mem_prty",
+	"psem_fast_memory_mem019_i_mem_prty",
+	"psem_fast_memory_mem018_i_mem_prty",
+	"psem_fast_memory_vfc_config_mem005_i_ecc_rf_int",
+	"psem_fast_memory_vfc_config_mem002_i_ecc_rf_int",
+	"psem_fast_memory_vfc_config_mem006_i_mem_prty",
+	"psem_fast_memory_vfc_config_mem001_i_mem_prty",
+	"psem_fast_memory_vfc_config_mem004_i_mem_prty",
+	"psem_fast_memory_vfc_config_mem003_i_mem_prty",
+	"psem_fast_memory_vfc_config_mem007_i_mem_prty",
+};
+#else
+#define psem_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 psem_prty0_bb_a0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg psem_prty0_bb_a0 = {
+	0, 3, psem_prty0_bb_a0_attn_idx, 0x16000c8, 0x16000d4, 0x16000d0,
+	0x16000cc
+};
+
+static const u16 psem_prty1_bb_a0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psem_prty1_bb_a0 = {
+	1, 6, psem_prty1_bb_a0_attn_idx, 0x1600200, 0x160020c, 0x1600208,
+	0x1600204
+};
+
+static const u16 psem_fast_memory_vfc_config_prty1_bb_a0_attn_idx[6] = {
+	16, 17, 19, 20, 21, 22,
+};
+
+static struct attn_hw_reg psem_fast_memory_vfc_config_prty1_bb_a0 = {
+	2, 6, psem_fast_memory_vfc_config_prty1_bb_a0_attn_idx, 0x164a200,
+	0x164a20c, 0x164a208, 0x164a204
+};
+
+static struct attn_hw_reg *psem_prty_bb_a0_regs[3] = {
+	&psem_prty0_bb_a0, &psem_prty1_bb_a0,
+	&psem_fast_memory_vfc_config_prty1_bb_a0,
+};
+
+static const u16 psem_prty0_bb_b0_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg psem_prty0_bb_b0 = {
+	0, 3, psem_prty0_bb_b0_attn_idx, 0x16000c8, 0x16000d4, 0x16000d0,
+	0x16000cc
+};
+
+static const u16 psem_prty1_bb_b0_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psem_prty1_bb_b0 = {
+	1, 6, psem_prty1_bb_b0_attn_idx, 0x1600200, 0x160020c, 0x1600208,
+	0x1600204
+};
+
+static const u16 psem_fast_memory_vfc_config_prty1_bb_b0_attn_idx[6] = {
+	16, 17, 19, 20, 21, 22,
+};
+
+static struct attn_hw_reg psem_fast_memory_vfc_config_prty1_bb_b0 = {
+	2, 6, psem_fast_memory_vfc_config_prty1_bb_b0_attn_idx, 0x164a200,
+	0x164a20c, 0x164a208, 0x164a204
+};
+
+static struct attn_hw_reg *psem_prty_bb_b0_regs[3] = {
+	&psem_prty0_bb_b0, &psem_prty1_bb_b0,
+	&psem_fast_memory_vfc_config_prty1_bb_b0,
+};
+
+static const u16 psem_prty0_k2_attn_idx[3] = {
+	0, 1, 2,
+};
+
+static struct attn_hw_reg psem_prty0_k2 = {
+	0, 3, psem_prty0_k2_attn_idx, 0x16000c8, 0x16000d4, 0x16000d0,
+	0x16000cc
+};
+
+static const u16 psem_prty1_k2_attn_idx[6] = {
+	3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg psem_prty1_k2 = {
+	1, 6, psem_prty1_k2_attn_idx, 0x1600200, 0x160020c, 0x1600208,
+	0x1600204
+};
+
+static const u16 psem_fast_memory_prty1_k2_attn_idx[7] = {
+	9, 10, 11, 12, 13, 14, 15,
+};
+
+static struct attn_hw_reg psem_fast_memory_prty1_k2 = {
+	2, 7, psem_fast_memory_prty1_k2_attn_idx, 0x1640200, 0x164020c,
+	0x1640208, 0x1640204
+};
+
+static const u16 psem_fast_memory_vfc_config_prty1_k2_attn_idx[6] = {
+	16, 17, 18, 19, 20, 21,
+};
+
+static struct attn_hw_reg psem_fast_memory_vfc_config_prty1_k2 = {
+	3, 6, psem_fast_memory_vfc_config_prty1_k2_attn_idx, 0x164a200,
+	0x164a20c, 0x164a208, 0x164a204
+};
+
+static struct attn_hw_reg *psem_prty_k2_regs[4] = {
+	&psem_prty0_k2, &psem_prty1_k2, &psem_fast_memory_prty1_k2,
+	&psem_fast_memory_vfc_config_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rss_int_attn_desc[12] = {
+	"rss_address_error",
+	"rss_msg_inp_cnt_error",
+	"rss_msg_out_cnt_error",
+	"rss_inp_state_error",
+	"rss_out_state_error",
+	"rss_main_state_error",
+	"rss_calc_state_error",
+	"rss_inp_fifo_error",
+	"rss_cmd_fifo_error",
+	"rss_msg_fifo_error",
+	"rss_rsp_fifo_error",
+	"rss_hdr_fifo_error",
+};
+#else
+#define rss_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 rss_int0_bb_a0_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg rss_int0_bb_a0 = {
+	0, 12, rss_int0_bb_a0_attn_idx, 0x238980, 0x23898c, 0x238988, 0x238984
+};
+
+static struct attn_hw_reg *rss_int_bb_a0_regs[1] = {
+	&rss_int0_bb_a0,
+};
+
+static const u16 rss_int0_bb_b0_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg rss_int0_bb_b0 = {
+	0, 12, rss_int0_bb_b0_attn_idx, 0x238980, 0x23898c, 0x238988, 0x238984
+};
+
+static struct attn_hw_reg *rss_int_bb_b0_regs[1] = {
+	&rss_int0_bb_b0,
+};
+
+static const u16 rss_int0_k2_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg rss_int0_k2 = {
+	0, 12, rss_int0_k2_attn_idx, 0x238980, 0x23898c, 0x238988, 0x238984
+};
+
+static struct attn_hw_reg *rss_int_k2_regs[1] = {
+	&rss_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rss_prty_attn_desc[4] = {
+	"rss_mem002_i_ecc_rf_int",
+	"rss_mem001_i_ecc_rf_int",
+	"rss_mem003_i_mem_prty",
+	"rss_mem004_i_mem_prty",
+};
+#else
+#define rss_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 rss_prty1_bb_a0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg rss_prty1_bb_a0 = {
+	0, 4, rss_prty1_bb_a0_attn_idx, 0x238a00, 0x238a0c, 0x238a08, 0x238a04
+};
+
+static struct attn_hw_reg *rss_prty_bb_a0_regs[1] = {
+	&rss_prty1_bb_a0,
+};
+
+static const u16 rss_prty1_bb_b0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg rss_prty1_bb_b0 = {
+	0, 4, rss_prty1_bb_b0_attn_idx, 0x238a00, 0x238a0c, 0x238a08, 0x238a04
+};
+
+static struct attn_hw_reg *rss_prty_bb_b0_regs[1] = {
+	&rss_prty1_bb_b0,
+};
+
+static const u16 rss_prty1_k2_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg rss_prty1_k2 = {
+	0, 4, rss_prty1_k2_attn_idx, 0x238a00, 0x238a0c, 0x238a08, 0x238a04
+};
+
+static struct attn_hw_reg *rss_prty_k2_regs[1] = {
+	&rss_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tmld_int_attn_desc[6] = {
+	"tmld_address_error",
+	"tmld_ld_hdr_err",
+	"tmld_ld_seg_msg_err",
+	"tmld_ld_tid_mini_cache_err",
+	"tmld_ld_cid_mini_cache_err",
+	"tmld_ld_long_message",
+};
+#else
+#define tmld_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tmld_int0_bb_a0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg tmld_int0_bb_a0 = {
+	0, 6, tmld_int0_bb_a0_attn_idx, 0x4d0180, 0x4d018c, 0x4d0188, 0x4d0184
+};
+
+static struct attn_hw_reg *tmld_int_bb_a0_regs[1] = {
+	&tmld_int0_bb_a0,
+};
+
+static const u16 tmld_int0_bb_b0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg tmld_int0_bb_b0 = {
+	0, 6, tmld_int0_bb_b0_attn_idx, 0x4d0180, 0x4d018c, 0x4d0188, 0x4d0184
+};
+
+static struct attn_hw_reg *tmld_int_bb_b0_regs[1] = {
+	&tmld_int0_bb_b0,
+};
+
+static const u16 tmld_int0_k2_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg tmld_int0_k2 = {
+	0, 6, tmld_int0_k2_attn_idx, 0x4d0180, 0x4d018c, 0x4d0188, 0x4d0184
+};
+
+static struct attn_hw_reg *tmld_int_k2_regs[1] = {
+	&tmld_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tmld_prty_attn_desc[8] = {
+	"tmld_mem006_i_ecc_rf_int",
+	"tmld_mem002_i_ecc_rf_int",
+	"tmld_mem003_i_mem_prty",
+	"tmld_mem004_i_mem_prty",
+	"tmld_mem007_i_mem_prty",
+	"tmld_mem008_i_mem_prty",
+	"tmld_mem005_i_mem_prty",
+	"tmld_mem001_i_mem_prty",
+};
+#else
+#define tmld_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tmld_prty1_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tmld_prty1_bb_a0 = {
+	0, 8, tmld_prty1_bb_a0_attn_idx, 0x4d0200, 0x4d020c, 0x4d0208, 0x4d0204
+};
+
+static struct attn_hw_reg *tmld_prty_bb_a0_regs[1] = {
+	&tmld_prty1_bb_a0,
+};
+
+static const u16 tmld_prty1_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tmld_prty1_bb_b0 = {
+	0, 8, tmld_prty1_bb_b0_attn_idx, 0x4d0200, 0x4d020c, 0x4d0208, 0x4d0204
+};
+
+static struct attn_hw_reg *tmld_prty_bb_b0_regs[1] = {
+	&tmld_prty1_bb_b0,
+};
+
+static const u16 tmld_prty1_k2_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tmld_prty1_k2 = {
+	0, 8, tmld_prty1_k2_attn_idx, 0x4d0200, 0x4d020c, 0x4d0208, 0x4d0204
+};
+
+static struct attn_hw_reg *tmld_prty_k2_regs[1] = {
+	&tmld_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *muld_int_attn_desc[6] = {
+	"muld_address_error",
+	"muld_ld_hdr_err",
+	"muld_ld_seg_msg_err",
+	"muld_ld_tid_mini_cache_err",
+	"muld_ld_cid_mini_cache_err",
+	"muld_ld_long_message",
+};
+#else
+#define muld_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 muld_int0_bb_a0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg muld_int0_bb_a0 = {
+	0, 6, muld_int0_bb_a0_attn_idx, 0x4e0180, 0x4e018c, 0x4e0188, 0x4e0184
+};
+
+static struct attn_hw_reg *muld_int_bb_a0_regs[1] = {
+	&muld_int0_bb_a0,
+};
+
+static const u16 muld_int0_bb_b0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg muld_int0_bb_b0 = {
+	0, 6, muld_int0_bb_b0_attn_idx, 0x4e0180, 0x4e018c, 0x4e0188, 0x4e0184
+};
+
+static struct attn_hw_reg *muld_int_bb_b0_regs[1] = {
+	&muld_int0_bb_b0,
+};
+
+static const u16 muld_int0_k2_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg muld_int0_k2 = {
+	0, 6, muld_int0_k2_attn_idx, 0x4e0180, 0x4e018c, 0x4e0188, 0x4e0184
+};
+
+static struct attn_hw_reg *muld_int_k2_regs[1] = {
+	&muld_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *muld_prty_attn_desc[10] = {
+	"muld_mem005_i_ecc_rf_int",
+	"muld_mem001_i_ecc_rf_int",
+	"muld_mem008_i_ecc_rf_int",
+	"muld_mem007_i_ecc_rf_int",
+	"muld_mem002_i_mem_prty",
+	"muld_mem003_i_mem_prty",
+	"muld_mem009_i_mem_prty",
+	"muld_mem010_i_mem_prty",
+	"muld_mem004_i_mem_prty",
+	"muld_mem006_i_mem_prty",
+};
+#else
+#define muld_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 muld_prty1_bb_a0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg muld_prty1_bb_a0 = {
+	0, 10, muld_prty1_bb_a0_attn_idx, 0x4e0200, 0x4e020c, 0x4e0208,
+	0x4e0204
+};
+
+static struct attn_hw_reg *muld_prty_bb_a0_regs[1] = {
+	&muld_prty1_bb_a0,
+};
+
+static const u16 muld_prty1_bb_b0_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg muld_prty1_bb_b0 = {
+	0, 10, muld_prty1_bb_b0_attn_idx, 0x4e0200, 0x4e020c, 0x4e0208,
+	0x4e0204
+};
+
+static struct attn_hw_reg *muld_prty_bb_b0_regs[1] = {
+	&muld_prty1_bb_b0,
+};
+
+static const u16 muld_prty1_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg muld_prty1_k2 = {
+	0, 10, muld_prty1_k2_attn_idx, 0x4e0200, 0x4e020c, 0x4e0208, 0x4e0204
+};
+
+static struct attn_hw_reg *muld_prty_k2_regs[1] = {
+	&muld_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *yuld_int_attn_desc[6] = {
+	"yuld_address_error",
+	"yuld_ld_hdr_err",
+	"yuld_ld_seg_msg_err",
+	"yuld_ld_tid_mini_cache_err",
+	"yuld_ld_cid_mini_cache_err",
+	"yuld_ld_long_message",
+};
+#else
+#define yuld_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 yuld_int0_bb_a0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_int0_bb_a0 = {
+	0, 6, yuld_int0_bb_a0_attn_idx, 0x4c8180, 0x4c818c, 0x4c8188, 0x4c8184
+};
+
+static struct attn_hw_reg *yuld_int_bb_a0_regs[1] = {
+	&yuld_int0_bb_a0,
+};
+
+static const u16 yuld_int0_bb_b0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_int0_bb_b0 = {
+	0, 6, yuld_int0_bb_b0_attn_idx, 0x4c8180, 0x4c818c, 0x4c8188, 0x4c8184
+};
+
+static struct attn_hw_reg *yuld_int_bb_b0_regs[1] = {
+	&yuld_int0_bb_b0,
+};
+
+static const u16 yuld_int0_k2_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_int0_k2 = {
+	0, 6, yuld_int0_k2_attn_idx, 0x4c8180, 0x4c818c, 0x4c8188, 0x4c8184
+};
+
+static struct attn_hw_reg *yuld_int_k2_regs[1] = {
+	&yuld_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *yuld_prty_attn_desc[6] = {
+	"yuld_mem001_i_mem_prty",
+	"yuld_mem002_i_mem_prty",
+	"yuld_mem005_i_mem_prty",
+	"yuld_mem006_i_mem_prty",
+	"yuld_mem004_i_mem_prty",
+	"yuld_mem003_i_mem_prty",
+};
+#else
+#define yuld_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 yuld_prty1_bb_a0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_prty1_bb_a0 = {
+	0, 6, yuld_prty1_bb_a0_attn_idx, 0x4c8200, 0x4c820c, 0x4c8208, 0x4c8204
+};
+
+static struct attn_hw_reg *yuld_prty_bb_a0_regs[1] = {
+	&yuld_prty1_bb_a0,
+};
+
+static const u16 yuld_prty1_bb_b0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_prty1_bb_b0 = {
+	0, 6, yuld_prty1_bb_b0_attn_idx, 0x4c8200, 0x4c820c, 0x4c8208, 0x4c8204
+};
+
+static struct attn_hw_reg *yuld_prty_bb_b0_regs[1] = {
+	&yuld_prty1_bb_b0,
+};
+
+static const u16 yuld_prty1_k2_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg yuld_prty1_k2 = {
+	0, 6, yuld_prty1_k2_attn_idx, 0x4c8200, 0x4c820c, 0x4c8208, 0x4c8204
+};
+
+static struct attn_hw_reg *yuld_prty_k2_regs[1] = {
+	&yuld_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xyld_int_attn_desc[6] = {
+	"xyld_address_error",
+	"xyld_ld_hdr_err",
+	"xyld_ld_seg_msg_err",
+	"xyld_ld_tid_mini_cache_err",
+	"xyld_ld_cid_mini_cache_err",
+	"xyld_ld_long_message",
+};
+#else
+#define xyld_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 xyld_int0_bb_a0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg xyld_int0_bb_a0 = {
+	0, 6, xyld_int0_bb_a0_attn_idx, 0x4c0180, 0x4c018c, 0x4c0188, 0x4c0184
+};
+
+static struct attn_hw_reg *xyld_int_bb_a0_regs[1] = {
+	&xyld_int0_bb_a0,
+};
+
+static const u16 xyld_int0_bb_b0_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg xyld_int0_bb_b0 = {
+	0, 6, xyld_int0_bb_b0_attn_idx, 0x4c0180, 0x4c018c, 0x4c0188, 0x4c0184
+};
+
+static struct attn_hw_reg *xyld_int_bb_b0_regs[1] = {
+	&xyld_int0_bb_b0,
+};
+
+static const u16 xyld_int0_k2_attn_idx[6] = {
+	0, 1, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg xyld_int0_k2 = {
+	0, 6, xyld_int0_k2_attn_idx, 0x4c0180, 0x4c018c, 0x4c0188, 0x4c0184
+};
+
+static struct attn_hw_reg *xyld_int_k2_regs[1] = {
+	&xyld_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *xyld_prty_attn_desc[9] = {
+	"xyld_mem004_i_ecc_rf_int",
+	"xyld_mem006_i_ecc_rf_int",
+	"xyld_mem001_i_mem_prty",
+	"xyld_mem002_i_mem_prty",
+	"xyld_mem008_i_mem_prty",
+	"xyld_mem009_i_mem_prty",
+	"xyld_mem003_i_mem_prty",
+	"xyld_mem005_i_mem_prty",
+	"xyld_mem007_i_mem_prty",
+};
+#else
+#define xyld_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 xyld_prty1_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg xyld_prty1_bb_a0 = {
+	0, 9, xyld_prty1_bb_a0_attn_idx, 0x4c0200, 0x4c020c, 0x4c0208, 0x4c0204
+};
+
+static struct attn_hw_reg *xyld_prty_bb_a0_regs[1] = {
+	&xyld_prty1_bb_a0,
+};
+
+static const u16 xyld_prty1_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg xyld_prty1_bb_b0 = {
+	0, 9, xyld_prty1_bb_b0_attn_idx, 0x4c0200, 0x4c020c, 0x4c0208, 0x4c0204
+};
+
+static struct attn_hw_reg *xyld_prty_bb_b0_regs[1] = {
+	&xyld_prty1_bb_b0,
+};
+
+static const u16 xyld_prty1_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg xyld_prty1_k2 = {
+	0, 9, xyld_prty1_k2_attn_idx, 0x4c0200, 0x4c020c, 0x4c0208, 0x4c0204
+};
+
+static struct attn_hw_reg *xyld_prty_k2_regs[1] = {
+	&xyld_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *prm_int_attn_desc[11] = {
+	"prm_address_error",
+	"prm_ififo_error",
+	"prm_immed_fifo_error",
+	"prm_ofst_pend_error",
+	"prm_pad_pend_error",
+	"prm_pbinp_pend_error",
+	"prm_tag_pend_error",
+	"prm_mstorm_eop_err",
+	"prm_ustorm_eop_err",
+	"prm_mstorm_que_err",
+	"prm_ustorm_que_err",
+};
+#else
+#define prm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 prm_int0_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg prm_int0_bb_a0 = {
+	0, 11, prm_int0_bb_a0_attn_idx, 0x230040, 0x23004c, 0x230048, 0x230044
+};
+
+static struct attn_hw_reg *prm_int_bb_a0_regs[1] = {
+	&prm_int0_bb_a0,
+};
+
+static const u16 prm_int0_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg prm_int0_bb_b0 = {
+	0, 11, prm_int0_bb_b0_attn_idx, 0x230040, 0x23004c, 0x230048, 0x230044
+};
+
+static struct attn_hw_reg *prm_int_bb_b0_regs[1] = {
+	&prm_int0_bb_b0,
+};
+
+static const u16 prm_int0_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg prm_int0_k2 = {
+	0, 11, prm_int0_k2_attn_idx, 0x230040, 0x23004c, 0x230048, 0x230044
+};
+
+static struct attn_hw_reg *prm_int_k2_regs[1] = {
+	&prm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *prm_prty_attn_desc[30] = {
+	"prm_datapath_registers",
+	"prm_mem012_i_ecc_rf_int",
+	"prm_mem013_i_ecc_rf_int",
+	"prm_mem014_i_ecc_rf_int",
+	"prm_mem020_i_ecc_rf_int",
+	"prm_mem004_i_mem_prty",
+	"prm_mem024_i_mem_prty",
+	"prm_mem016_i_mem_prty",
+	"prm_mem017_i_mem_prty",
+	"prm_mem008_i_mem_prty",
+	"prm_mem009_i_mem_prty",
+	"prm_mem010_i_mem_prty",
+	"prm_mem015_i_mem_prty",
+	"prm_mem011_i_mem_prty",
+	"prm_mem003_i_mem_prty",
+	"prm_mem002_i_mem_prty",
+	"prm_mem005_i_mem_prty",
+	"prm_mem023_i_mem_prty",
+	"prm_mem006_i_mem_prty",
+	"prm_mem007_i_mem_prty",
+	"prm_mem001_i_mem_prty",
+	"prm_mem022_i_mem_prty",
+	"prm_mem021_i_mem_prty",
+	"prm_mem019_i_mem_prty",
+	"prm_mem015_i_ecc_rf_int",
+	"prm_mem021_i_ecc_rf_int",
+	"prm_mem025_i_mem_prty",
+	"prm_mem018_i_mem_prty",
+	"prm_mem012_i_mem_prty",
+	"prm_mem020_i_mem_prty",
+};
+#else
+#define prm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 prm_prty1_bb_a0_attn_idx[25] = {
+	2, 3, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24,
+	25, 26, 27, 28, 29,
+};
+
+static struct attn_hw_reg prm_prty1_bb_a0 = {
+	0, 25, prm_prty1_bb_a0_attn_idx, 0x230200, 0x23020c, 0x230208, 0x230204
+};
+
+static struct attn_hw_reg *prm_prty_bb_a0_regs[1] = {
+	&prm_prty1_bb_a0,
+};
+
+static const u16 prm_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg prm_prty0_bb_b0 = {
+	0, 1, prm_prty0_bb_b0_attn_idx, 0x230050, 0x23005c, 0x230058, 0x230054
+};
+
+static const u16 prm_prty1_bb_b0_attn_idx[24] = {
+	2, 3, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 24, 25,
+	26, 27, 28, 29,
+};
+
+static struct attn_hw_reg prm_prty1_bb_b0 = {
+	1, 24, prm_prty1_bb_b0_attn_idx, 0x230200, 0x23020c, 0x230208, 0x230204
+};
+
+static struct attn_hw_reg *prm_prty_bb_b0_regs[2] = {
+	&prm_prty0_bb_b0, &prm_prty1_bb_b0,
+};
+
+static const u16 prm_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg prm_prty0_k2 = {
+	0, 1, prm_prty0_k2_attn_idx, 0x230050, 0x23005c, 0x230058, 0x230054
+};
+
+static const u16 prm_prty1_k2_attn_idx[23] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23,
+};
+
+static struct attn_hw_reg prm_prty1_k2 = {
+	1, 23, prm_prty1_k2_attn_idx, 0x230200, 0x23020c, 0x230208, 0x230204
+};
+
+static struct attn_hw_reg *prm_prty_k2_regs[2] = {
+	&prm_prty0_k2, &prm_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_pb1_int_attn_desc[9] = {
+	"pbf_pb1_address_error",
+	"pbf_pb1_eop_error",
+	"pbf_pb1_ififo_error",
+	"pbf_pb1_pfifo_error",
+	"pbf_pb1_db_buf_error",
+	"pbf_pb1_th_exec_error",
+	"pbf_pb1_tq_error_wr",
+	"pbf_pb1_tq_error_rd_th",
+	"pbf_pb1_tq_error_rd_ih",
+};
+#else
+#define pbf_pb1_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_pb1_int0_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb1_int0_bb_a0 = {
+	0, 9, pbf_pb1_int0_bb_a0_attn_idx, 0xda0040, 0xda004c, 0xda0048,
+	0xda0044
+};
+
+static struct attn_hw_reg *pbf_pb1_int_bb_a0_regs[1] = {
+	&pbf_pb1_int0_bb_a0,
+};
+
+static const u16 pbf_pb1_int0_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb1_int0_bb_b0 = {
+	0, 9, pbf_pb1_int0_bb_b0_attn_idx, 0xda0040, 0xda004c, 0xda0048,
+	0xda0044
+};
+
+static struct attn_hw_reg *pbf_pb1_int_bb_b0_regs[1] = {
+	&pbf_pb1_int0_bb_b0,
+};
+
+static const u16 pbf_pb1_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb1_int0_k2 = {
+	0, 9, pbf_pb1_int0_k2_attn_idx, 0xda0040, 0xda004c, 0xda0048, 0xda0044
+};
+
+static struct attn_hw_reg *pbf_pb1_int_k2_regs[1] = {
+	&pbf_pb1_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_pb1_prty_attn_desc[1] = {
+	"pbf_pb1_datapath_registers",
+};
+#else
+#define pbf_pb1_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_pb1_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_pb1_prty0_bb_b0 = {
+	0, 1, pbf_pb1_prty0_bb_b0_attn_idx, 0xda0050, 0xda005c, 0xda0058,
+	0xda0054
+};
+
+static struct attn_hw_reg *pbf_pb1_prty_bb_b0_regs[1] = {
+	&pbf_pb1_prty0_bb_b0,
+};
+
+static const u16 pbf_pb1_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_pb1_prty0_k2 = {
+	0, 1, pbf_pb1_prty0_k2_attn_idx, 0xda0050, 0xda005c, 0xda0058, 0xda0054
+};
+
+static struct attn_hw_reg *pbf_pb1_prty_k2_regs[1] = {
+	&pbf_pb1_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_pb2_int_attn_desc[9] = {
+	"pbf_pb2_address_error",
+	"pbf_pb2_eop_error",
+	"pbf_pb2_ififo_error",
+	"pbf_pb2_pfifo_error",
+	"pbf_pb2_db_buf_error",
+	"pbf_pb2_th_exec_error",
+	"pbf_pb2_tq_error_wr",
+	"pbf_pb2_tq_error_rd_th",
+	"pbf_pb2_tq_error_rd_ih",
+};
+#else
+#define pbf_pb2_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_pb2_int0_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb2_int0_bb_a0 = {
+	0, 9, pbf_pb2_int0_bb_a0_attn_idx, 0xda4040, 0xda404c, 0xda4048,
+	0xda4044
+};
+
+static struct attn_hw_reg *pbf_pb2_int_bb_a0_regs[1] = {
+	&pbf_pb2_int0_bb_a0,
+};
+
+static const u16 pbf_pb2_int0_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb2_int0_bb_b0 = {
+	0, 9, pbf_pb2_int0_bb_b0_attn_idx, 0xda4040, 0xda404c, 0xda4048,
+	0xda4044
+};
+
+static struct attn_hw_reg *pbf_pb2_int_bb_b0_regs[1] = {
+	&pbf_pb2_int0_bb_b0,
+};
+
+static const u16 pbf_pb2_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg pbf_pb2_int0_k2 = {
+	0, 9, pbf_pb2_int0_k2_attn_idx, 0xda4040, 0xda404c, 0xda4048, 0xda4044
+};
+
+static struct attn_hw_reg *pbf_pb2_int_k2_regs[1] = {
+	&pbf_pb2_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_pb2_prty_attn_desc[1] = {
+	"pbf_pb2_datapath_registers",
+};
+#else
+#define pbf_pb2_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_pb2_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_pb2_prty0_bb_b0 = {
+	0, 1, pbf_pb2_prty0_bb_b0_attn_idx, 0xda4050, 0xda405c, 0xda4058,
+	0xda4054
+};
+
+static struct attn_hw_reg *pbf_pb2_prty_bb_b0_regs[1] = {
+	&pbf_pb2_prty0_bb_b0,
+};
+
+static const u16 pbf_pb2_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_pb2_prty0_k2 = {
+	0, 1, pbf_pb2_prty0_k2_attn_idx, 0xda4050, 0xda405c, 0xda4058, 0xda4054
+};
+
+static struct attn_hw_reg *pbf_pb2_prty_k2_regs[1] = {
+	&pbf_pb2_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rpb_int_attn_desc[9] = {
+	"rpb_address_error",
+	"rpb_eop_error",
+	"rpb_ififo_error",
+	"rpb_pfifo_error",
+	"rpb_db_buf_error",
+	"rpb_th_exec_error",
+	"rpb_tq_error_wr",
+	"rpb_tq_error_rd_th",
+	"rpb_tq_error_rd_ih",
+};
+#else
+#define rpb_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 rpb_int0_bb_a0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg rpb_int0_bb_a0 = {
+	0, 9, rpb_int0_bb_a0_attn_idx, 0x23c040, 0x23c04c, 0x23c048, 0x23c044
+};
+
+static struct attn_hw_reg *rpb_int_bb_a0_regs[1] = {
+	&rpb_int0_bb_a0,
+};
+
+static const u16 rpb_int0_bb_b0_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg rpb_int0_bb_b0 = {
+	0, 9, rpb_int0_bb_b0_attn_idx, 0x23c040, 0x23c04c, 0x23c048, 0x23c044
+};
+
+static struct attn_hw_reg *rpb_int_bb_b0_regs[1] = {
+	&rpb_int0_bb_b0,
+};
+
+static const u16 rpb_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg rpb_int0_k2 = {
+	0, 9, rpb_int0_k2_attn_idx, 0x23c040, 0x23c04c, 0x23c048, 0x23c044
+};
+
+static struct attn_hw_reg *rpb_int_k2_regs[1] = {
+	&rpb_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rpb_prty_attn_desc[1] = {
+	"rpb_datapath_registers",
+};
+#else
+#define rpb_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 rpb_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg rpb_prty0_bb_b0 = {
+	0, 1, rpb_prty0_bb_b0_attn_idx, 0x23c050, 0x23c05c, 0x23c058, 0x23c054
+};
+
+static struct attn_hw_reg *rpb_prty_bb_b0_regs[1] = {
+	&rpb_prty0_bb_b0,
+};
+
+static const u16 rpb_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg rpb_prty0_k2 = {
+	0, 1, rpb_prty0_k2_attn_idx, 0x23c050, 0x23c05c, 0x23c058, 0x23c054
+};
+
+static struct attn_hw_reg *rpb_prty_k2_regs[1] = {
+	&rpb_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *btb_int_attn_desc[139] = {
+	"btb_address_error",
+	"btb_rc_pkt0_rls_error",
+	"btb_unused_0",
+	"btb_rc_pkt0_len_error",
+	"btb_unused_1",
+	"btb_rc_pkt0_protocol_error",
+	"btb_rc_pkt1_rls_error",
+	"btb_unused_2",
+	"btb_rc_pkt1_len_error",
+	"btb_unused_3",
+	"btb_rc_pkt1_protocol_error",
+	"btb_rc_pkt2_rls_error",
+	"btb_unused_4",
+	"btb_rc_pkt2_len_error",
+	"btb_unused_5",
+	"btb_rc_pkt2_protocol_error",
+	"btb_rc_pkt3_rls_error",
+	"btb_unused_6",
+	"btb_rc_pkt3_len_error",
+	"btb_unused_7",
+	"btb_rc_pkt3_protocol_error",
+	"btb_rc_sop_req_tc_port_error",
+	"btb_unused_8",
+	"btb_wc0_protocol_error",
+	"btb_unused_9",
+	"btb_ll_blk_error",
+	"btb_ll_arb_calc_error",
+	"btb_fc_alm_calc_error",
+	"btb_wc0_inp_fifo_error",
+	"btb_wc0_sop_fifo_error",
+	"btb_wc0_len_fifo_error",
+	"btb_wc0_eop_fifo_error",
+	"btb_wc0_queue_fifo_error",
+	"btb_wc0_free_point_fifo_error",
+	"btb_wc0_next_point_fifo_error",
+	"btb_wc0_strt_fifo_error",
+	"btb_wc0_second_dscr_fifo_error",
+	"btb_wc0_pkt_avail_fifo_error",
+	"btb_wc0_notify_fifo_error",
+	"btb_wc0_ll_req_fifo_error",
+	"btb_wc0_ll_pa_cnt_error",
+	"btb_wc0_bb_pa_cnt_error",
+	"btb_wc_dup_upd_data_fifo_error",
+	"btb_wc_dup_rsp_dscr_fifo_error",
+	"btb_wc_dup_upd_point_fifo_error",
+	"btb_wc_dup_pkt_avail_fifo_error",
+	"btb_wc_dup_pkt_avail_cnt_error",
+	"btb_rc_pkt0_side_fifo_error",
+	"btb_rc_pkt0_req_fifo_error",
+	"btb_rc_pkt0_blk_fifo_error",
+	"btb_rc_pkt0_rls_left_fifo_error",
+	"btb_rc_pkt0_strt_ptr_fifo_error",
+	"btb_rc_pkt0_second_ptr_fifo_error",
+	"btb_rc_pkt0_rsp_fifo_error",
+	"btb_rc_pkt0_dscr_fifo_error",
+	"btb_rc_pkt1_side_fifo_error",
+	"btb_rc_pkt1_req_fifo_error",
+	"btb_rc_pkt1_blk_fifo_error",
+	"btb_rc_pkt1_rls_left_fifo_error",
+	"btb_rc_pkt1_strt_ptr_fifo_error",
+	"btb_rc_pkt1_second_ptr_fifo_error",
+	"btb_rc_pkt1_rsp_fifo_error",
+	"btb_rc_pkt1_dscr_fifo_error",
+	"btb_rc_pkt2_side_fifo_error",
+	"btb_rc_pkt2_req_fifo_error",
+	"btb_rc_pkt2_blk_fifo_error",
+	"btb_rc_pkt2_rls_left_fifo_error",
+	"btb_rc_pkt2_strt_ptr_fifo_error",
+	"btb_rc_pkt2_second_ptr_fifo_error",
+	"btb_rc_pkt2_rsp_fifo_error",
+	"btb_rc_pkt2_dscr_fifo_error",
+	"btb_rc_pkt3_side_fifo_error",
+	"btb_rc_pkt3_req_fifo_error",
+	"btb_rc_pkt3_blk_fifo_error",
+	"btb_rc_pkt3_rls_left_fifo_error",
+	"btb_rc_pkt3_strt_ptr_fifo_error",
+	"btb_rc_pkt3_second_ptr_fifo_error",
+	"btb_rc_pkt3_rsp_fifo_error",
+	"btb_rc_pkt3_dscr_fifo_error",
+	"btb_rc_sop_queue_fifo_error",
+	"btb_ll_arb_rls_fifo_error",
+	"btb_ll_arb_prefetch_fifo_error",
+	"btb_rc_pkt0_rls_fifo_error",
+	"btb_rc_pkt1_rls_fifo_error",
+	"btb_rc_pkt2_rls_fifo_error",
+	"btb_rc_pkt3_rls_fifo_error",
+	"btb_rc_pkt4_rls_fifo_error",
+	"btb_rc_pkt5_rls_fifo_error",
+	"btb_rc_pkt6_rls_fifo_error",
+	"btb_rc_pkt7_rls_fifo_error",
+	"btb_rc_pkt4_rls_error",
+	"btb_rc_pkt4_len_error",
+	"btb_rc_pkt4_protocol_error",
+	"btb_rc_pkt4_side_fifo_error",
+	"btb_rc_pkt4_req_fifo_error",
+	"btb_rc_pkt4_blk_fifo_error",
+	"btb_rc_pkt4_rls_left_fifo_error",
+	"btb_rc_pkt4_strt_ptr_fifo_error",
+	"btb_rc_pkt4_second_ptr_fifo_error",
+	"btb_rc_pkt4_rsp_fifo_error",
+	"btb_rc_pkt4_dscr_fifo_error",
+	"btb_rc_pkt5_rls_error",
+	"btb_rc_pkt5_len_error",
+	"btb_rc_pkt5_protocol_error",
+	"btb_rc_pkt5_side_fifo_error",
+	"btb_rc_pkt5_req_fifo_error",
+	"btb_rc_pkt5_blk_fifo_error",
+	"btb_rc_pkt5_rls_left_fifo_error",
+	"btb_rc_pkt5_strt_ptr_fifo_error",
+	"btb_rc_pkt5_second_ptr_fifo_error",
+	"btb_rc_pkt5_rsp_fifo_error",
+	"btb_rc_pkt5_dscr_fifo_error",
+	"btb_rc_pkt6_rls_error",
+	"btb_rc_pkt6_len_error",
+	"btb_rc_pkt6_protocol_error",
+	"btb_rc_pkt6_side_fifo_error",
+	"btb_rc_pkt6_req_fifo_error",
+	"btb_rc_pkt6_blk_fifo_error",
+	"btb_rc_pkt6_rls_left_fifo_error",
+	"btb_rc_pkt6_strt_ptr_fifo_error",
+	"btb_rc_pkt6_second_ptr_fifo_error",
+	"btb_rc_pkt6_rsp_fifo_error",
+	"btb_rc_pkt6_dscr_fifo_error",
+	"btb_rc_pkt7_rls_error",
+	"btb_rc_pkt7_len_error",
+	"btb_rc_pkt7_protocol_error",
+	"btb_rc_pkt7_side_fifo_error",
+	"btb_rc_pkt7_req_fifo_error",
+	"btb_rc_pkt7_blk_fifo_error",
+	"btb_rc_pkt7_rls_left_fifo_error",
+	"btb_rc_pkt7_strt_ptr_fifo_error",
+	"btb_rc_pkt7_second_ptr_fifo_error",
+	"btb_rc_pkt7_rsp_fifo_error",
+	"btb_packet_available_sync_fifo_push_error",
+	"btb_wc6_notify_fifo_error",
+	"btb_wc9_queue_fifo_error",
+	"btb_wc0_sync_fifo_push_error",
+	"btb_rls_sync_fifo_push_error",
+	"btb_rc_pkt7_dscr_fifo_error",
+};
+#else
+#define btb_int_attn_desc OSAL_NULL
+#endif
+
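+/*
+ * The BTB attention-index maps below are sparse: register bit positions
+ * that land on a btb_unused_* slot in btb_int_attn_desc are skipped, which
+ * is why btb_int0 maps only 16 bits onto the first 26 description indices.
+ */
+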
+static const u16 btb_int0_bb_a0_attn_idx[16] = {
+	0, 1, 3, 5, 6, 8, 10, 11, 13, 15, 16, 18, 20, 21, 23, 25,
+};
+
+static struct attn_hw_reg btb_int0_bb_a0 = {
+	0, 16, btb_int0_bb_a0_attn_idx, 0xdb00c0, 0xdb00cc, 0xdb00c8, 0xdb00c4
+};
+
+static const u16 btb_int1_bb_a0_attn_idx[16] = {
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg btb_int1_bb_a0 = {
+	1, 16, btb_int1_bb_a0_attn_idx, 0xdb00d8, 0xdb00e4, 0xdb00e0, 0xdb00dc
+};
+
+static const u16 btb_int2_bb_a0_attn_idx[4] = {
+	42, 43, 44, 45,
+};
+
+static struct attn_hw_reg btb_int2_bb_a0 = {
+	2, 4, btb_int2_bb_a0_attn_idx, 0xdb00f0, 0xdb00fc, 0xdb00f8, 0xdb00f4
+};
+
+static const u16 btb_int3_bb_a0_attn_idx[32] = {
+	46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
+};
+
+static struct attn_hw_reg btb_int3_bb_a0 = {
+	3, 32, btb_int3_bb_a0_attn_idx, 0xdb0108, 0xdb0114, 0xdb0110, 0xdb010c
+};
+
+static const u16 btb_int4_bb_a0_attn_idx[23] = {
+	78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
+	96, 97, 98, 99, 100,
+};
+
+static struct attn_hw_reg btb_int4_bb_a0 = {
+	4, 23, btb_int4_bb_a0_attn_idx, 0xdb0120, 0xdb012c, 0xdb0128, 0xdb0124
+};
+
+static const u16 btb_int5_bb_a0_attn_idx[32] = {
+	101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
+	115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132,
+};
+
+static struct attn_hw_reg btb_int5_bb_a0 = {
+	5, 32, btb_int5_bb_a0_attn_idx, 0xdb0138, 0xdb0144, 0xdb0140, 0xdb013c
+};
+
+static const u16 btb_int6_bb_a0_attn_idx[1] = {
+	133,
+};
+
+static struct attn_hw_reg btb_int6_bb_a0 = {
+	6, 1, btb_int6_bb_a0_attn_idx, 0xdb0150, 0xdb015c, 0xdb0158, 0xdb0154
+};
+
+static const u16 btb_int8_bb_a0_attn_idx[1] = {
+	134,
+};
+
+static struct attn_hw_reg btb_int8_bb_a0 = {
+	7, 1, btb_int8_bb_a0_attn_idx, 0xdb0184, 0xdb0190, 0xdb018c, 0xdb0188
+};
+
+static const u16 btb_int9_bb_a0_attn_idx[1] = {
+	135,
+};
+
+static struct attn_hw_reg btb_int9_bb_a0 = {
+	8, 1, btb_int9_bb_a0_attn_idx, 0xdb019c, 0xdb01a8, 0xdb01a4, 0xdb01a0
+};
+
+static const u16 btb_int10_bb_a0_attn_idx[1] = {
+	136,
+};
+
+static struct attn_hw_reg btb_int10_bb_a0 = {
+	9, 1, btb_int10_bb_a0_attn_idx, 0xdb01b4, 0xdb01c0, 0xdb01bc, 0xdb01b8
+};
+
+static const u16 btb_int11_bb_a0_attn_idx[2] = {
+	137, 138,
+};
+
+static struct attn_hw_reg btb_int11_bb_a0 = {
+	10, 2, btb_int11_bb_a0_attn_idx, 0xdb01cc, 0xdb01d8, 0xdb01d4, 0xdb01d0
+};
+
+static struct attn_hw_reg *btb_int_bb_a0_regs[11] = {
+	&btb_int0_bb_a0, &btb_int1_bb_a0, &btb_int2_bb_a0, &btb_int3_bb_a0,
+	&btb_int4_bb_a0, &btb_int5_bb_a0, &btb_int6_bb_a0, &btb_int8_bb_a0,
+	&btb_int9_bb_a0, &btb_int10_bb_a0, &btb_int11_bb_a0,
+};
+
+static const u16 btb_int0_bb_b0_attn_idx[16] = {
+	0, 1, 3, 5, 6, 8, 10, 11, 13, 15, 16, 18, 20, 21, 23, 25,
+};
+
+static struct attn_hw_reg btb_int0_bb_b0 = {
+	0, 16, btb_int0_bb_b0_attn_idx, 0xdb00c0, 0xdb00cc, 0xdb00c8, 0xdb00c4
+};
+
+static const u16 btb_int1_bb_b0_attn_idx[16] = {
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg btb_int1_bb_b0 = {
+	1, 16, btb_int1_bb_b0_attn_idx, 0xdb00d8, 0xdb00e4, 0xdb00e0, 0xdb00dc
+};
+
+static const u16 btb_int2_bb_b0_attn_idx[4] = {
+	42, 43, 44, 45,
+};
+
+static struct attn_hw_reg btb_int2_bb_b0 = {
+	2, 4, btb_int2_bb_b0_attn_idx, 0xdb00f0, 0xdb00fc, 0xdb00f8, 0xdb00f4
+};
+
+static const u16 btb_int3_bb_b0_attn_idx[32] = {
+	46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
+};
+
+static struct attn_hw_reg btb_int3_bb_b0 = {
+	3, 32, btb_int3_bb_b0_attn_idx, 0xdb0108, 0xdb0114, 0xdb0110, 0xdb010c
+};
+
+static const u16 btb_int4_bb_b0_attn_idx[23] = {
+	78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
+	96, 97, 98, 99, 100,
+};
+
+static struct attn_hw_reg btb_int4_bb_b0 = {
+	4, 23, btb_int4_bb_b0_attn_idx, 0xdb0120, 0xdb012c, 0xdb0128, 0xdb0124
+};
+
+static const u16 btb_int5_bb_b0_attn_idx[32] = {
+	101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
+	115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132,
+};
+
+static struct attn_hw_reg btb_int5_bb_b0 = {
+	5, 32, btb_int5_bb_b0_attn_idx, 0xdb0138, 0xdb0144, 0xdb0140, 0xdb013c
+};
+
+static const u16 btb_int6_bb_b0_attn_idx[1] = {
+	133,
+};
+
+static struct attn_hw_reg btb_int6_bb_b0 = {
+	6, 1, btb_int6_bb_b0_attn_idx, 0xdb0150, 0xdb015c, 0xdb0158, 0xdb0154
+};
+
+static const u16 btb_int8_bb_b0_attn_idx[1] = {
+	134,
+};
+
+static struct attn_hw_reg btb_int8_bb_b0 = {
+	7, 1, btb_int8_bb_b0_attn_idx, 0xdb0184, 0xdb0190, 0xdb018c, 0xdb0188
+};
+
+static const u16 btb_int9_bb_b0_attn_idx[1] = {
+	135,
+};
+
+static struct attn_hw_reg btb_int9_bb_b0 = {
+	8, 1, btb_int9_bb_b0_attn_idx, 0xdb019c, 0xdb01a8, 0xdb01a4, 0xdb01a0
+};
+
+static const u16 btb_int10_bb_b0_attn_idx[1] = {
+	136,
+};
+
+static struct attn_hw_reg btb_int10_bb_b0 = {
+	9, 1, btb_int10_bb_b0_attn_idx, 0xdb01b4, 0xdb01c0, 0xdb01bc, 0xdb01b8
+};
+
+static const u16 btb_int11_bb_b0_attn_idx[2] = {
+	137, 138,
+};
+
+static struct attn_hw_reg btb_int11_bb_b0 = {
+	10, 2, btb_int11_bb_b0_attn_idx, 0xdb01cc, 0xdb01d8, 0xdb01d4, 0xdb01d0
+};
+
+static struct attn_hw_reg *btb_int_bb_b0_regs[11] = {
+	&btb_int0_bb_b0, &btb_int1_bb_b0, &btb_int2_bb_b0, &btb_int3_bb_b0,
+	&btb_int4_bb_b0, &btb_int5_bb_b0, &btb_int6_bb_b0, &btb_int8_bb_b0,
+	&btb_int9_bb_b0, &btb_int10_bb_b0, &btb_int11_bb_b0,
+};
+
+static const u16 btb_int0_k2_attn_idx[16] = {
+	0, 1, 3, 5, 6, 8, 10, 11, 13, 15, 16, 18, 20, 21, 23, 25,
+};
+
+static struct attn_hw_reg btb_int0_k2 = {
+	0, 16, btb_int0_k2_attn_idx, 0xdb00c0, 0xdb00cc, 0xdb00c8, 0xdb00c4
+};
+
+static const u16 btb_int1_k2_attn_idx[16] = {
+	26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg btb_int1_k2 = {
+	1, 16, btb_int1_k2_attn_idx, 0xdb00d8, 0xdb00e4, 0xdb00e0, 0xdb00dc
+};
+
+static const u16 btb_int2_k2_attn_idx[4] = {
+	42, 43, 44, 45,
+};
+
+static struct attn_hw_reg btb_int2_k2 = {
+	2, 4, btb_int2_k2_attn_idx, 0xdb00f0, 0xdb00fc, 0xdb00f8, 0xdb00f4
+};
+
+static const u16 btb_int3_k2_attn_idx[32] = {
+	46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
+};
+
+static struct attn_hw_reg btb_int3_k2 = {
+	3, 32, btb_int3_k2_attn_idx, 0xdb0108, 0xdb0114, 0xdb0110, 0xdb010c
+};
+
+static const u16 btb_int4_k2_attn_idx[23] = {
+	78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
+	96, 97, 98, 99, 100,
+};
+
+static struct attn_hw_reg btb_int4_k2 = {
+	4, 23, btb_int4_k2_attn_idx, 0xdb0120, 0xdb012c, 0xdb0128, 0xdb0124
+};
+
+static const u16 btb_int5_k2_attn_idx[32] = {
+	101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
+	115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132,
+};
+
+static struct attn_hw_reg btb_int5_k2 = {
+	5, 32, btb_int5_k2_attn_idx, 0xdb0138, 0xdb0144, 0xdb0140, 0xdb013c
+};
+
+static const u16 btb_int6_k2_attn_idx[1] = {
+	133,
+};
+
+static struct attn_hw_reg btb_int6_k2 = {
+	6, 1, btb_int6_k2_attn_idx, 0xdb0150, 0xdb015c, 0xdb0158, 0xdb0154
+};
+
+static const u16 btb_int8_k2_attn_idx[1] = {
+	134,
+};
+
+static struct attn_hw_reg btb_int8_k2 = {
+	7, 1, btb_int8_k2_attn_idx, 0xdb0184, 0xdb0190, 0xdb018c, 0xdb0188
+};
+
+static const u16 btb_int9_k2_attn_idx[1] = {
+	135,
+};
+
+static struct attn_hw_reg btb_int9_k2 = {
+	8, 1, btb_int9_k2_attn_idx, 0xdb019c, 0xdb01a8, 0xdb01a4, 0xdb01a0
+};
+
+static const u16 btb_int10_k2_attn_idx[1] = {
+	136,
+};
+
+static struct attn_hw_reg btb_int10_k2 = {
+	9, 1, btb_int10_k2_attn_idx, 0xdb01b4, 0xdb01c0, 0xdb01bc, 0xdb01b8
+};
+
+static const u16 btb_int11_k2_attn_idx[2] = {
+	137, 138,
+};
+
+static struct attn_hw_reg btb_int11_k2 = {
+	10, 2, btb_int11_k2_attn_idx, 0xdb01cc, 0xdb01d8, 0xdb01d4, 0xdb01d0
+};
+
+static struct attn_hw_reg *btb_int_k2_regs[11] = {
+	&btb_int0_k2, &btb_int1_k2, &btb_int2_k2, &btb_int3_k2, &btb_int4_k2,
+	&btb_int5_k2, &btb_int6_k2, &btb_int8_k2, &btb_int9_k2, &btb_int10_k2,
+	&btb_int11_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *btb_prty_attn_desc[36] = {
+	"btb_ll_bank0_mem_prty",
+	"btb_ll_bank1_mem_prty",
+	"btb_ll_bank2_mem_prty",
+	"btb_ll_bank3_mem_prty",
+	"btb_datapath_registers",
+	"btb_mem001_i_ecc_rf_int",
+	"btb_mem008_i_ecc_rf_int",
+	"btb_mem009_i_ecc_rf_int",
+	"btb_mem010_i_ecc_rf_int",
+	"btb_mem011_i_ecc_rf_int",
+	"btb_mem012_i_ecc_rf_int",
+	"btb_mem013_i_ecc_rf_int",
+	"btb_mem014_i_ecc_rf_int",
+	"btb_mem015_i_ecc_rf_int",
+	"btb_mem016_i_ecc_rf_int",
+	"btb_mem002_i_ecc_rf_int",
+	"btb_mem003_i_ecc_rf_int",
+	"btb_mem004_i_ecc_rf_int",
+	"btb_mem005_i_ecc_rf_int",
+	"btb_mem006_i_ecc_rf_int",
+	"btb_mem007_i_ecc_rf_int",
+	"btb_mem033_i_mem_prty",
+	"btb_mem035_i_mem_prty",
+	"btb_mem034_i_mem_prty",
+	"btb_mem032_i_mem_prty",
+	"btb_mem031_i_mem_prty",
+	"btb_mem021_i_mem_prty",
+	"btb_mem022_i_mem_prty",
+	"btb_mem023_i_mem_prty",
+	"btb_mem024_i_mem_prty",
+	"btb_mem025_i_mem_prty",
+	"btb_mem026_i_mem_prty",
+	"btb_mem027_i_mem_prty",
+	"btb_mem028_i_mem_prty",
+	"btb_mem030_i_mem_prty",
+	"btb_mem029_i_mem_prty",
+};
+#else
+#define btb_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 btb_prty1_bb_a0_attn_idx[27] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 25, 26, 27,
+	28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg btb_prty1_bb_a0 = {
+	0, 27, btb_prty1_bb_a0_attn_idx, 0xdb0400, 0xdb040c, 0xdb0408, 0xdb0404
+};
+
+static struct attn_hw_reg *btb_prty_bb_a0_regs[1] = {
+	&btb_prty1_bb_a0,
+};
+
+static const u16 btb_prty0_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg btb_prty0_bb_b0 = {
+	0, 5, btb_prty0_bb_b0_attn_idx, 0xdb01dc, 0xdb01e8, 0xdb01e4, 0xdb01e0
+};
+
+static const u16 btb_prty1_bb_b0_attn_idx[23] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 25, 30, 31,
+	32, 33, 34, 35,
+};
+
+static struct attn_hw_reg btb_prty1_bb_b0 = {
+	1, 23, btb_prty1_bb_b0_attn_idx, 0xdb0400, 0xdb040c, 0xdb0408, 0xdb0404
+};
+
+static struct attn_hw_reg *btb_prty_bb_b0_regs[2] = {
+	&btb_prty0_bb_b0, &btb_prty1_bb_b0,
+};
+
+static const u16 btb_prty0_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg btb_prty0_k2 = {
+	0, 5, btb_prty0_k2_attn_idx, 0xdb01dc, 0xdb01e8, 0xdb01e4, 0xdb01e0
+};
+
+static const u16 btb_prty1_k2_attn_idx[31] = {
+	5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
+	24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+};
+
+static struct attn_hw_reg btb_prty1_k2 = {
+	1, 31, btb_prty1_k2_attn_idx, 0xdb0400, 0xdb040c, 0xdb0408, 0xdb0404
+};
+
+static struct attn_hw_reg *btb_prty_k2_regs[2] = {
+	&btb_prty0_k2, &btb_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_int_attn_desc[1] = {
+	"pbf_address_error",
+};
+#else
+#define pbf_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_int0_bb_a0 = {
+	0, 1, pbf_int0_bb_a0_attn_idx, 0xd80180, 0xd8018c, 0xd80188, 0xd80184
+};
+
+static struct attn_hw_reg *pbf_int_bb_a0_regs[1] = {
+	&pbf_int0_bb_a0,
+};
+
+static const u16 pbf_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_int0_bb_b0 = {
+	0, 1, pbf_int0_bb_b0_attn_idx, 0xd80180, 0xd8018c, 0xd80188, 0xd80184
+};
+
+static struct attn_hw_reg *pbf_int_bb_b0_regs[1] = {
+	&pbf_int0_bb_b0,
+};
+
+static const u16 pbf_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_int0_k2 = {
+	0, 1, pbf_int0_k2_attn_idx, 0xd80180, 0xd8018c, 0xd80188, 0xd80184
+};
+
+static struct attn_hw_reg *pbf_int_k2_regs[1] = {
+	&pbf_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *pbf_prty_attn_desc[59] = {
+	"pbf_datapath_registers",
+	"pbf_mem041_i_ecc_rf_int",
+	"pbf_mem042_i_ecc_rf_int",
+	"pbf_mem033_i_ecc_rf_int",
+	"pbf_mem003_i_ecc_rf_int",
+	"pbf_mem018_i_ecc_rf_int",
+	"pbf_mem009_i_ecc_0_rf_int",
+	"pbf_mem009_i_ecc_1_rf_int",
+	"pbf_mem012_i_ecc_0_rf_int",
+	"pbf_mem012_i_ecc_1_rf_int",
+	"pbf_mem012_i_ecc_2_rf_int",
+	"pbf_mem012_i_ecc_3_rf_int",
+	"pbf_mem012_i_ecc_4_rf_int",
+	"pbf_mem012_i_ecc_5_rf_int",
+	"pbf_mem012_i_ecc_6_rf_int",
+	"pbf_mem012_i_ecc_7_rf_int",
+	"pbf_mem012_i_ecc_8_rf_int",
+	"pbf_mem012_i_ecc_9_rf_int",
+	"pbf_mem012_i_ecc_10_rf_int",
+	"pbf_mem012_i_ecc_11_rf_int",
+	"pbf_mem012_i_ecc_12_rf_int",
+	"pbf_mem012_i_ecc_13_rf_int",
+	"pbf_mem012_i_ecc_14_rf_int",
+	"pbf_mem012_i_ecc_15_rf_int",
+	"pbf_mem040_i_mem_prty",
+	"pbf_mem039_i_mem_prty",
+	"pbf_mem038_i_mem_prty",
+	"pbf_mem034_i_mem_prty",
+	"pbf_mem032_i_mem_prty",
+	"pbf_mem031_i_mem_prty",
+	"pbf_mem030_i_mem_prty",
+	"pbf_mem029_i_mem_prty",
+	"pbf_mem022_i_mem_prty",
+	"pbf_mem023_i_mem_prty",
+	"pbf_mem021_i_mem_prty",
+	"pbf_mem020_i_mem_prty",
+	"pbf_mem001_i_mem_prty",
+	"pbf_mem002_i_mem_prty",
+	"pbf_mem006_i_mem_prty",
+	"pbf_mem007_i_mem_prty",
+	"pbf_mem005_i_mem_prty",
+	"pbf_mem004_i_mem_prty",
+	"pbf_mem028_i_mem_prty",
+	"pbf_mem026_i_mem_prty",
+	"pbf_mem027_i_mem_prty",
+	"pbf_mem019_i_mem_prty",
+	"pbf_mem016_i_mem_prty",
+	"pbf_mem017_i_mem_prty",
+	"pbf_mem008_i_mem_prty",
+	"pbf_mem011_i_mem_prty",
+	"pbf_mem010_i_mem_prty",
+	"pbf_mem024_i_mem_prty",
+	"pbf_mem025_i_mem_prty",
+	"pbf_mem037_i_mem_prty",
+	"pbf_mem036_i_mem_prty",
+	"pbf_mem035_i_mem_prty",
+	"pbf_mem014_i_mem_prty",
+	"pbf_mem015_i_mem_prty",
+	"pbf_mem013_i_mem_prty",
+};
+#else
+#define pbf_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 pbf_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pbf_prty1_bb_a0 = {
+	0, 31, pbf_prty1_bb_a0_attn_idx, 0xd80200, 0xd8020c, 0xd80208, 0xd80204
+};
+
+static const u16 pbf_prty2_bb_a0_attn_idx[27] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58,
+};
+
+static struct attn_hw_reg pbf_prty2_bb_a0 = {
+	1, 27, pbf_prty2_bb_a0_attn_idx, 0xd80210, 0xd8021c, 0xd80218, 0xd80214
+};
+
+static struct attn_hw_reg *pbf_prty_bb_a0_regs[2] = {
+	&pbf_prty1_bb_a0, &pbf_prty2_bb_a0,
+};
+
+static const u16 pbf_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_prty0_bb_b0 = {
+	0, 1, pbf_prty0_bb_b0_attn_idx, 0xd80190, 0xd8019c, 0xd80198, 0xd80194
+};
+
+static const u16 pbf_prty1_bb_b0_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pbf_prty1_bb_b0 = {
+	1, 31, pbf_prty1_bb_b0_attn_idx, 0xd80200, 0xd8020c, 0xd80208, 0xd80204
+};
+
+static const u16 pbf_prty2_bb_b0_attn_idx[27] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58,
+};
+
+static struct attn_hw_reg pbf_prty2_bb_b0 = {
+	2, 27, pbf_prty2_bb_b0_attn_idx, 0xd80210, 0xd8021c, 0xd80218, 0xd80214
+};
+
+static struct attn_hw_reg *pbf_prty_bb_b0_regs[3] = {
+	&pbf_prty0_bb_b0, &pbf_prty1_bb_b0, &pbf_prty2_bb_b0,
+};
+
+static const u16 pbf_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg pbf_prty0_k2 = {
+	0, 1, pbf_prty0_k2_attn_idx, 0xd80190, 0xd8019c, 0xd80198, 0xd80194
+};
+
+static const u16 pbf_prty1_k2_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg pbf_prty1_k2 = {
+	1, 31, pbf_prty1_k2_attn_idx, 0xd80200, 0xd8020c, 0xd80208, 0xd80204
+};
+
+static const u16 pbf_prty2_k2_attn_idx[27] = {
+	32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
+	50, 51, 52, 53, 54, 55, 56, 57, 58,
+};
+
+static struct attn_hw_reg pbf_prty2_k2 = {
+	2, 27, pbf_prty2_k2_attn_idx, 0xd80210, 0xd8021c, 0xd80218, 0xd80214
+};
+
+static struct attn_hw_reg *pbf_prty_k2_regs[3] = {
+	&pbf_prty0_k2, &pbf_prty1_k2, &pbf_prty2_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rdif_int_attn_desc[9] = {
+	"rdif_address_error",
+	"rdif_fatal_dix_err",
+	"rdif_fatal_config_err",
+	"rdif_cmd_fifo_err",
+	"rdif_order_fifo_err",
+	"rdif_rdata_fifo_err",
+	"rdif_dif_stop_err",
+	"rdif_partial_dif_w_eob",
+	"rdif_l1_dirty_bit",
+};
+#else
+#define rdif_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 rdif_int0_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg rdif_int0_bb_a0 = {
+	0, 8, rdif_int0_bb_a0_attn_idx, 0x300180, 0x30018c, 0x300188, 0x300184
+};
+
+static struct attn_hw_reg *rdif_int_bb_a0_regs[1] = {
+	&rdif_int0_bb_a0,
+};
+
+static const u16 rdif_int0_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg rdif_int0_bb_b0 = {
+	0, 8, rdif_int0_bb_b0_attn_idx, 0x300180, 0x30018c, 0x300188, 0x300184
+};
+
+static struct attn_hw_reg *rdif_int_bb_b0_regs[1] = {
+	&rdif_int0_bb_b0,
+};
+
+static const u16 rdif_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg rdif_int0_k2 = {
+	0, 9, rdif_int0_k2_attn_idx, 0x300180, 0x30018c, 0x300188, 0x300184
+};
+
+static struct attn_hw_reg *rdif_int_k2_regs[1] = {
+	&rdif_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *rdif_prty_attn_desc[2] = {
+	"rdif_unused_0",
+	"rdif_datapath_registers",
+};
+#else
+#define rdif_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 rdif_prty0_bb_b0_attn_idx[1] = {
+	1,
+};
+
+static struct attn_hw_reg rdif_prty0_bb_b0 = {
+	0, 1, rdif_prty0_bb_b0_attn_idx, 0x300190, 0x30019c, 0x300198, 0x300194
+};
+
+static struct attn_hw_reg *rdif_prty_bb_b0_regs[1] = {
+	&rdif_prty0_bb_b0,
+};
+
+static const u16 rdif_prty0_k2_attn_idx[1] = {
+	1,
+};
+
+static struct attn_hw_reg rdif_prty0_k2 = {
+	0, 1, rdif_prty0_k2_attn_idx, 0x300190, 0x30019c, 0x300198, 0x300194
+};
+
+static struct attn_hw_reg *rdif_prty_k2_regs[1] = {
+	&rdif_prty0_k2,
+};
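+/* No BB A0 parity register set is generated for RDIF; only the BB B0
+ * and K2 variants appear above.
+ */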
+
+#ifdef ATTN_DESC
+static const char *tdif_int_attn_desc[9] = {
+	"tdif_address_error",
+	"tdif_fatal_dix_err",
+	"tdif_fatal_config_err",
+	"tdif_cmd_fifo_err",
+	"tdif_order_fifo_err",
+	"tdif_rdata_fifo_err",
+	"tdif_dif_stop_err",
+	"tdif_partial_dif_w_eob",
+	"tdif_l1_dirty_bit",
+};
+#else
+#define tdif_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tdif_int0_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tdif_int0_bb_a0 = {
+	0, 8, tdif_int0_bb_a0_attn_idx, 0x310180, 0x31018c, 0x310188, 0x310184
+};
+
+static struct attn_hw_reg *tdif_int_bb_a0_regs[1] = {
+	&tdif_int0_bb_a0,
+};
+
+static const u16 tdif_int0_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg tdif_int0_bb_b0 = {
+	0, 8, tdif_int0_bb_b0_attn_idx, 0x310180, 0x31018c, 0x310188, 0x310184
+};
+
+static struct attn_hw_reg *tdif_int_bb_b0_regs[1] = {
+	&tdif_int0_bb_b0,
+};
+
+static const u16 tdif_int0_k2_attn_idx[9] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8,
+};
+
+static struct attn_hw_reg tdif_int0_k2 = {
+	0, 9, tdif_int0_k2_attn_idx, 0x310180, 0x31018c, 0x310188, 0x310184
+};
+
+static struct attn_hw_reg *tdif_int_k2_regs[1] = {
+	&tdif_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tdif_prty_attn_desc[13] = {
+	"tdif_unused_0",
+	"tdif_datapath_registers",
+	"tdif_mem005_i_ecc_rf_int",
+	"tdif_mem009_i_ecc_rf_int",
+	"tdif_mem010_i_ecc_rf_int",
+	"tdif_mem011_i_ecc_rf_int",
+	"tdif_mem001_i_mem_prty",
+	"tdif_mem003_i_mem_prty",
+	"tdif_mem002_i_mem_prty",
+	"tdif_mem006_i_mem_prty",
+	"tdif_mem007_i_mem_prty",
+	"tdif_mem008_i_mem_prty",
+	"tdif_mem004_i_mem_prty",
+};
+#else
+#define tdif_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tdif_prty1_bb_a0_attn_idx[11] = {
+	2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg tdif_prty1_bb_a0 = {
+	0, 11, tdif_prty1_bb_a0_attn_idx, 0x310200, 0x31020c, 0x310208,
+	0x310204
+};
+
+static struct attn_hw_reg *tdif_prty_bb_a0_regs[1] = {
+	&tdif_prty1_bb_a0,
+};
+
+static const u16 tdif_prty0_bb_b0_attn_idx[1] = {
+	1,
+};
+
+static struct attn_hw_reg tdif_prty0_bb_b0 = {
+	0, 1, tdif_prty0_bb_b0_attn_idx, 0x310190, 0x31019c, 0x310198, 0x310194
+};
+
+static const u16 tdif_prty1_bb_b0_attn_idx[11] = {
+	2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg tdif_prty1_bb_b0 = {
+	1, 11, tdif_prty1_bb_b0_attn_idx, 0x310200, 0x31020c, 0x310208,
+	0x310204
+};
+
+static struct attn_hw_reg *tdif_prty_bb_b0_regs[2] = {
+	&tdif_prty0_bb_b0, &tdif_prty1_bb_b0,
+};
+
+static const u16 tdif_prty0_k2_attn_idx[1] = {
+	1,
+};
+
+static struct attn_hw_reg tdif_prty0_k2 = {
+	0, 1, tdif_prty0_k2_attn_idx, 0x310190, 0x31019c, 0x310198, 0x310194
+};
+
+static const u16 tdif_prty1_k2_attn_idx[11] = {
+	2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg tdif_prty1_k2 = {
+	1, 11, tdif_prty1_k2_attn_idx, 0x310200, 0x31020c, 0x310208, 0x310204
+};
+
+static struct attn_hw_reg *tdif_prty_k2_regs[2] = {
+	&tdif_prty0_k2, &tdif_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cdu_int_attn_desc[8] = {
+	"cdu_address_error",
+	"cdu_ccfc_ld_l1_num_error",
+	"cdu_tcfc_ld_l1_num_error",
+	"cdu_ccfc_wb_l1_num_error",
+	"cdu_tcfc_wb_l1_num_error",
+	"cdu_ccfc_cvld_error",
+	"cdu_tcfc_cvld_error",
+	"cdu_bvalid_error",
+};
+#else
+#define cdu_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 cdu_int0_bb_a0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg cdu_int0_bb_a0 = {
+	0, 8, cdu_int0_bb_a0_attn_idx, 0x5801c0, 0x5801c4, 0x5801c8, 0x5801cc
+};
+
+static struct attn_hw_reg *cdu_int_bb_a0_regs[1] = {
+	&cdu_int0_bb_a0,
+};
+
+static const u16 cdu_int0_bb_b0_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg cdu_int0_bb_b0 = {
+	0, 8, cdu_int0_bb_b0_attn_idx, 0x5801c0, 0x5801c4, 0x5801c8, 0x5801cc
+};
+
+static struct attn_hw_reg *cdu_int_bb_b0_regs[1] = {
+	&cdu_int0_bb_b0,
+};
+
+static const u16 cdu_int0_k2_attn_idx[8] = {
+	0, 1, 2, 3, 4, 5, 6, 7,
+};
+
+static struct attn_hw_reg cdu_int0_k2 = {
+	0, 8, cdu_int0_k2_attn_idx, 0x5801c0, 0x5801c4, 0x5801c8, 0x5801cc
+};
+
+static struct attn_hw_reg *cdu_int_k2_regs[1] = {
+	&cdu_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cdu_prty_attn_desc[5] = {
+	"cdu_mem001_i_mem_prty",
+	"cdu_mem004_i_mem_prty",
+	"cdu_mem002_i_mem_prty",
+	"cdu_mem005_i_mem_prty",
+	"cdu_mem003_i_mem_prty",
+};
+#else
+#define cdu_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 cdu_prty1_bb_a0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg cdu_prty1_bb_a0 = {
+	0, 5, cdu_prty1_bb_a0_attn_idx, 0x580200, 0x58020c, 0x580208, 0x580204
+};
+
+static struct attn_hw_reg *cdu_prty_bb_a0_regs[1] = {
+	&cdu_prty1_bb_a0,
+};
+
+static const u16 cdu_prty1_bb_b0_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg cdu_prty1_bb_b0 = {
+	0, 5, cdu_prty1_bb_b0_attn_idx, 0x580200, 0x58020c, 0x580208, 0x580204
+};
+
+static struct attn_hw_reg *cdu_prty_bb_b0_regs[1] = {
+	&cdu_prty1_bb_b0,
+};
+
+static const u16 cdu_prty1_k2_attn_idx[5] = {
+	0, 1, 2, 3, 4,
+};
+
+static struct attn_hw_reg cdu_prty1_k2 = {
+	0, 5, cdu_prty1_k2_attn_idx, 0x580200, 0x58020c, 0x580208, 0x580204
+};
+
+static struct attn_hw_reg *cdu_prty_k2_regs[1] = {
+	&cdu_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ccfc_int_attn_desc[2] = {
+	"ccfc_address_error",
+	"ccfc_exe_error",
+};
+#else
+#define ccfc_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ccfc_int0_bb_a0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg ccfc_int0_bb_a0 = {
+	0, 2, ccfc_int0_bb_a0_attn_idx, 0x2e0180, 0x2e018c, 0x2e0188, 0x2e0184
+};
+
+static struct attn_hw_reg *ccfc_int_bb_a0_regs[1] = {
+	&ccfc_int0_bb_a0,
+};
+
+static const u16 ccfc_int0_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg ccfc_int0_bb_b0 = {
+	0, 2, ccfc_int0_bb_b0_attn_idx, 0x2e0180, 0x2e018c, 0x2e0188, 0x2e0184
+};
+
+static struct attn_hw_reg *ccfc_int_bb_b0_regs[1] = {
+	&ccfc_int0_bb_b0,
+};
+
+static const u16 ccfc_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg ccfc_int0_k2 = {
+	0, 2, ccfc_int0_k2_attn_idx, 0x2e0180, 0x2e018c, 0x2e0188, 0x2e0184
+};
+
+static struct attn_hw_reg *ccfc_int_k2_regs[1] = {
+	&ccfc_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ccfc_prty_attn_desc[10] = {
+	"ccfc_mem001_i_ecc_rf_int",
+	"ccfc_mem003_i_mem_prty",
+	"ccfc_mem007_i_mem_prty",
+	"ccfc_mem006_i_mem_prty",
+	"ccfc_ccam_par_err",
+	"ccfc_scam_par_err",
+	"ccfc_lc_que_ram_porta_lsb_par_err",
+	"ccfc_lc_que_ram_porta_msb_par_err",
+	"ccfc_lc_que_ram_portb_lsb_par_err",
+	"ccfc_lc_que_ram_portb_msb_par_err",
+};
+#else
+#define ccfc_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ccfc_prty1_bb_a0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg ccfc_prty1_bb_a0 = {
+	0, 4, ccfc_prty1_bb_a0_attn_idx, 0x2e0200, 0x2e020c, 0x2e0208, 0x2e0204
+};
+
+static const u16 ccfc_prty0_bb_a0_attn_idx[2] = {
+	4, 5,
+};
+
+static struct attn_hw_reg ccfc_prty0_bb_a0 = {
+	1, 2, ccfc_prty0_bb_a0_attn_idx, 0x2e05e4, 0x2e05f0, 0x2e05ec, 0x2e05e8
+};
+
+static struct attn_hw_reg *ccfc_prty_bb_a0_regs[2] = {
+	&ccfc_prty1_bb_a0, &ccfc_prty0_bb_a0,
+};
+
+static const u16 ccfc_prty1_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg ccfc_prty1_bb_b0 = {
+	0, 2, ccfc_prty1_bb_b0_attn_idx, 0x2e0200, 0x2e020c, 0x2e0208, 0x2e0204
+};
+
+static const u16 ccfc_prty0_bb_b0_attn_idx[6] = {
+	4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg ccfc_prty0_bb_b0 = {
+	1, 6, ccfc_prty0_bb_b0_attn_idx, 0x2e05e4, 0x2e05f0, 0x2e05ec, 0x2e05e8
+};
+
+static struct attn_hw_reg *ccfc_prty_bb_b0_regs[2] = {
+	&ccfc_prty1_bb_b0, &ccfc_prty0_bb_b0,
+};
+
+static const u16 ccfc_prty1_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg ccfc_prty1_k2 = {
+	0, 2, ccfc_prty1_k2_attn_idx, 0x2e0200, 0x2e020c, 0x2e0208, 0x2e0204
+};
+
+static const u16 ccfc_prty0_k2_attn_idx[6] = {
+	4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg ccfc_prty0_k2 = {
+	1, 6, ccfc_prty0_k2_attn_idx, 0x2e05e4, 0x2e05f0, 0x2e05ec, 0x2e05e8
+};
+
+static struct attn_hw_reg *ccfc_prty_k2_regs[2] = {
+	&ccfc_prty1_k2, &ccfc_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tcfc_int_attn_desc[2] = {
+	"tcfc_address_error",
+	"tcfc_exe_error",
+};
+#else
+#define tcfc_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 tcfc_int0_bb_a0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg tcfc_int0_bb_a0 = {
+	0, 2, tcfc_int0_bb_a0_attn_idx, 0x2d0180, 0x2d018c, 0x2d0188, 0x2d0184
+};
+
+static struct attn_hw_reg *tcfc_int_bb_a0_regs[1] = {
+	&tcfc_int0_bb_a0,
+};
+
+static const u16 tcfc_int0_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg tcfc_int0_bb_b0 = {
+	0, 2, tcfc_int0_bb_b0_attn_idx, 0x2d0180, 0x2d018c, 0x2d0188, 0x2d0184
+};
+
+static struct attn_hw_reg *tcfc_int_bb_b0_regs[1] = {
+	&tcfc_int0_bb_b0,
+};
+
+static const u16 tcfc_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg tcfc_int0_k2 = {
+	0, 2, tcfc_int0_k2_attn_idx, 0x2d0180, 0x2d018c, 0x2d0188, 0x2d0184
+};
+
+static struct attn_hw_reg *tcfc_int_k2_regs[1] = {
+	&tcfc_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *tcfc_prty_attn_desc[10] = {
+	"tcfc_mem002_i_mem_prty",
+	"tcfc_mem001_i_mem_prty",
+	"tcfc_mem006_i_mem_prty",
+	"tcfc_mem005_i_mem_prty",
+	"tcfc_ccam_par_err",
+	"tcfc_scam_par_err",
+	"tcfc_lc_que_ram_porta_lsb_par_err",
+	"tcfc_lc_que_ram_porta_msb_par_err",
+	"tcfc_lc_que_ram_portb_lsb_par_err",
+	"tcfc_lc_que_ram_portb_msb_par_err",
+};
+#else
+#define tcfc_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 tcfc_prty1_bb_a0_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg tcfc_prty1_bb_a0 = {
+	0, 4, tcfc_prty1_bb_a0_attn_idx, 0x2d0200, 0x2d020c, 0x2d0208, 0x2d0204
+};
+
+static const u16 tcfc_prty0_bb_a0_attn_idx[2] = {
+	4, 5,
+};
+
+static struct attn_hw_reg tcfc_prty0_bb_a0 = {
+	1, 2, tcfc_prty0_bb_a0_attn_idx, 0x2d05e4, 0x2d05f0, 0x2d05ec, 0x2d05e8
+};
+
+static struct attn_hw_reg *tcfc_prty_bb_a0_regs[2] = {
+	&tcfc_prty1_bb_a0, &tcfc_prty0_bb_a0,
+};
+
+static const u16 tcfc_prty1_bb_b0_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg tcfc_prty1_bb_b0 = {
+	0, 2, tcfc_prty1_bb_b0_attn_idx, 0x2d0200, 0x2d020c, 0x2d0208, 0x2d0204
+};
+
+static const u16 tcfc_prty0_bb_b0_attn_idx[6] = {
+	4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg tcfc_prty0_bb_b0 = {
+	1, 6, tcfc_prty0_bb_b0_attn_idx, 0x2d05e4, 0x2d05f0, 0x2d05ec, 0x2d05e8
+};
+
+static struct attn_hw_reg *tcfc_prty_bb_b0_regs[2] = {
+	&tcfc_prty1_bb_b0, &tcfc_prty0_bb_b0,
+};
+
+static const u16 tcfc_prty1_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg tcfc_prty1_k2 = {
+	0, 2, tcfc_prty1_k2_attn_idx, 0x2d0200, 0x2d020c, 0x2d0208, 0x2d0204
+};
+
+static const u16 tcfc_prty0_k2_attn_idx[6] = {
+	4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg tcfc_prty0_k2 = {
+	1, 6, tcfc_prty0_k2_attn_idx, 0x2d05e4, 0x2d05f0, 0x2d05ec, 0x2d05e8
+};
+
+static struct attn_hw_reg *tcfc_prty_k2_regs[2] = {
+	&tcfc_prty1_k2, &tcfc_prty0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *igu_int_attn_desc[11] = {
+	"igu_address_error",
+	"igu_ctrl_fifo_error_err",
+	"igu_pxp_req_length_too_big",
+	"igu_host_tries2access_prod_upd",
+	"igu_vf_tries2acc_attn_cmd",
+	"igu_mme_bigger_then_5",
+	"igu_sb_index_is_not_valid",
+	"igu_durin_int_read_with_simd_dis",
+	"igu_cmd_fid_not_match",
+	"igu_segment_access_invalid",
+	"igu_attn_prod_acc",
+};
+#else
+#define igu_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 igu_int0_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg igu_int0_bb_a0 = {
+	0, 11, igu_int0_bb_a0_attn_idx, 0x180180, 0x18018c, 0x180188, 0x180184
+};
+
+static struct attn_hw_reg *igu_int_bb_a0_regs[1] = {
+	&igu_int0_bb_a0,
+};
+
+static const u16 igu_int0_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg igu_int0_bb_b0 = {
+	0, 11, igu_int0_bb_b0_attn_idx, 0x180180, 0x18018c, 0x180188, 0x180184
+};
+
+static struct attn_hw_reg *igu_int_bb_b0_regs[1] = {
+	&igu_int0_bb_b0,
+};
+
+static const u16 igu_int0_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg igu_int0_k2 = {
+	0, 11, igu_int0_k2_attn_idx, 0x180180, 0x18018c, 0x180188, 0x180184
+};
+
+static struct attn_hw_reg *igu_int_k2_regs[1] = {
+	&igu_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *igu_prty_attn_desc[42] = {
+	"igu_cam_parity",
+	"igu_mem009_i_ecc_rf_int",
+	"igu_mem015_i_mem_prty",
+	"igu_mem016_i_mem_prty",
+	"igu_mem017_i_mem_prty",
+	"igu_mem018_i_mem_prty",
+	"igu_mem019_i_mem_prty",
+	"igu_mem001_i_mem_prty",
+	"igu_mem002_i_mem_prty_0",
+	"igu_mem002_i_mem_prty_1",
+	"igu_mem004_i_mem_prty_0",
+	"igu_mem004_i_mem_prty_1",
+	"igu_mem004_i_mem_prty_2",
+	"igu_mem003_i_mem_prty",
+	"igu_mem005_i_mem_prty",
+	"igu_mem006_i_mem_prty_0",
+	"igu_mem006_i_mem_prty_1",
+	"igu_mem008_i_mem_prty_0",
+	"igu_mem008_i_mem_prty_1",
+	"igu_mem008_i_mem_prty_2",
+	"igu_mem007_i_mem_prty",
+	"igu_mem010_i_mem_prty_0",
+	"igu_mem010_i_mem_prty_1",
+	"igu_mem012_i_mem_prty_0",
+	"igu_mem012_i_mem_prty_1",
+	"igu_mem012_i_mem_prty_2",
+	"igu_mem011_i_mem_prty",
+	"igu_mem013_i_mem_prty",
+	"igu_mem014_i_mem_prty",
+	"igu_mem020_i_mem_prty",
+	"igu_mem003_i_mem_prty_0",
+	"igu_mem003_i_mem_prty_1",
+	"igu_mem003_i_mem_prty_2",
+	"igu_mem002_i_mem_prty",
+	"igu_mem007_i_mem_prty_0",
+	"igu_mem007_i_mem_prty_1",
+	"igu_mem007_i_mem_prty_2",
+	"igu_mem006_i_mem_prty",
+	"igu_mem010_i_mem_prty_2",
+	"igu_mem010_i_mem_prty_3",
+	"igu_mem013_i_mem_prty_0",
+	"igu_mem013_i_mem_prty_1",
+};
+#else
+#define igu_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 igu_prty0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg igu_prty0_bb_a0 = {
+	0, 1, igu_prty0_bb_a0_attn_idx, 0x180190, 0x18019c, 0x180198, 0x180194
+};
+
+static const u16 igu_prty1_bb_a0_attn_idx[31] = {
+	1, 3, 4, 5, 6, 7, 10, 11, 14, 17, 18, 21, 22, 23, 24, 25, 26, 28, 29,
+	30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg igu_prty1_bb_a0 = {
+	1, 31, igu_prty1_bb_a0_attn_idx, 0x180200, 0x18020c, 0x180208, 0x180204
+};
+
+static const u16 igu_prty2_bb_a0_attn_idx[1] = {
+	2,
+};
+
+static struct attn_hw_reg igu_prty2_bb_a0 = {
+	2, 1, igu_prty2_bb_a0_attn_idx, 0x180210, 0x18021c, 0x180218, 0x180214
+};
+
+static struct attn_hw_reg *igu_prty_bb_a0_regs[3] = {
+	&igu_prty0_bb_a0, &igu_prty1_bb_a0, &igu_prty2_bb_a0,
+};
+
+static const u16 igu_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg igu_prty0_bb_b0 = {
+	0, 1, igu_prty0_bb_b0_attn_idx, 0x180190, 0x18019c, 0x180198, 0x180194
+};
+
+static const u16 igu_prty1_bb_b0_attn_idx[31] = {
+	1, 3, 4, 5, 6, 7, 10, 11, 14, 17, 18, 21, 22, 23, 24, 25, 26, 28, 29,
+	30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
+};
+
+static struct attn_hw_reg igu_prty1_bb_b0 = {
+	1, 31, igu_prty1_bb_b0_attn_idx, 0x180200, 0x18020c, 0x180208, 0x180204
+};
+
+static const u16 igu_prty2_bb_b0_attn_idx[1] = {
+	2,
+};
+
+static struct attn_hw_reg igu_prty2_bb_b0 = {
+	2, 1, igu_prty2_bb_b0_attn_idx, 0x180210, 0x18021c, 0x180218, 0x180214
+};
+
+static struct attn_hw_reg *igu_prty_bb_b0_regs[3] = {
+	&igu_prty0_bb_b0, &igu_prty1_bb_b0, &igu_prty2_bb_b0,
+};
+
+static const u16 igu_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg igu_prty0_k2 = {
+	0, 1, igu_prty0_k2_attn_idx, 0x180190, 0x18019c, 0x180198, 0x180194
+};
+
+static const u16 igu_prty1_k2_attn_idx[28] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28,
+};
+
+static struct attn_hw_reg igu_prty1_k2 = {
+	1, 28, igu_prty1_k2_attn_idx, 0x180200, 0x18020c, 0x180208, 0x180204
+};
+
+static struct attn_hw_reg *igu_prty_k2_regs[2] = {
+	&igu_prty0_k2, &igu_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cau_int_attn_desc[11] = {
+	"cau_address_error",
+	"cau_unauthorized_pxp_rd_cmd",
+	"cau_unauthorized_pxp_length_cmd",
+	"cau_pxp_sb_address_error",
+	"cau_pxp_pi_number_error",
+	"cau_cleanup_reg_sb_idx_error",
+	"cau_fsm_invalid_line",
+	"cau_cqe_fifo_err",
+	"cau_igu_wdata_fifo_err",
+	"cau_igu_req_fifo_err",
+	"cau_igu_cmd_fifo_err",
+};
+#else
+#define cau_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 cau_int0_bb_a0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg cau_int0_bb_a0 = {
+	0, 11, cau_int0_bb_a0_attn_idx, 0x1c00d4, 0x1c00d8, 0x1c00dc, 0x1c00e0
+};
+
+static struct attn_hw_reg *cau_int_bb_a0_regs[1] = {
+	&cau_int0_bb_a0,
+};
+
+static const u16 cau_int0_bb_b0_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg cau_int0_bb_b0 = {
+	0, 11, cau_int0_bb_b0_attn_idx, 0x1c00d4, 0x1c00d8, 0x1c00dc, 0x1c00e0
+};
+
+static struct attn_hw_reg *cau_int_bb_b0_regs[1] = {
+	&cau_int0_bb_b0,
+};
+
+static const u16 cau_int0_k2_attn_idx[11] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+};
+
+static struct attn_hw_reg cau_int0_k2 = {
+	0, 11, cau_int0_k2_attn_idx, 0x1c00d4, 0x1c00d8, 0x1c00dc, 0x1c00e0
+};
+
+static struct attn_hw_reg *cau_int_k2_regs[1] = {
+	&cau_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *cau_prty_attn_desc[15] = {
+	"cau_mem006_i_ecc_rf_int",
+	"cau_mem001_i_ecc_0_rf_int",
+	"cau_mem001_i_ecc_1_rf_int",
+	"cau_mem002_i_ecc_rf_int",
+	"cau_mem004_i_ecc_rf_int",
+	"cau_mem005_i_mem_prty",
+	"cau_mem007_i_mem_prty",
+	"cau_mem008_i_mem_prty",
+	"cau_mem009_i_mem_prty",
+	"cau_mem010_i_mem_prty",
+	"cau_mem011_i_mem_prty",
+	"cau_mem003_i_mem_prty_0",
+	"cau_mem003_i_mem_prty_1",
+	"cau_mem002_i_mem_prty",
+	"cau_mem004_i_mem_prty",
+};
+#else
+#define cau_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 cau_prty1_bb_a0_attn_idx[13] = {
+	0, 1, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+};
+
+static struct attn_hw_reg cau_prty1_bb_a0 = {
+	0, 13, cau_prty1_bb_a0_attn_idx, 0x1c0200, 0x1c020c, 0x1c0208, 0x1c0204
+};
+
+static struct attn_hw_reg *cau_prty_bb_a0_regs[1] = {
+	&cau_prty1_bb_a0,
+};
+
+static const u16 cau_prty1_bb_b0_attn_idx[13] = {
+	0, 1, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+};
+
+static struct attn_hw_reg cau_prty1_bb_b0 = {
+	0, 13, cau_prty1_bb_b0_attn_idx, 0x1c0200, 0x1c020c, 0x1c0208, 0x1c0204
+};
+
+static struct attn_hw_reg *cau_prty_bb_b0_regs[1] = {
+	&cau_prty1_bb_b0,
+};
+
+static const u16 cau_prty1_k2_attn_idx[13] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
+};
+
+static struct attn_hw_reg cau_prty1_k2 = {
+	0, 13, cau_prty1_k2_attn_idx, 0x1c0200, 0x1c020c, 0x1c0208, 0x1c0204
+};
+
+static struct attn_hw_reg *cau_prty_k2_regs[1] = {
+	&cau_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *umac_int_attn_desc[2] = {
+	"umac_address_error",
+	"umac_tx_overflow",
+};
+#else
+#define umac_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 umac_int0_k2_attn_idx[2] = {
+	0, 1,
+};
+
+static struct attn_hw_reg umac_int0_k2 = {
+	0, 2, umac_int0_k2_attn_idx, 0x51180, 0x5118c, 0x51188, 0x51184
+};
+
+static struct attn_hw_reg *umac_int_k2_regs[1] = {
+	&umac_int0_k2,
+};
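+/* Only a K2 variant is generated for the UMAC block. */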
+
+#ifdef ATTN_DESC
+static const char *dbg_int_attn_desc[1] = {
+	"dbg_address_error",
+};
+#else
+#define dbg_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 dbg_int0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_int0_bb_a0 = {
+	0, 1, dbg_int0_bb_a0_attn_idx, 0x10180, 0x1018c, 0x10188, 0x10184
+};
+
+static struct attn_hw_reg *dbg_int_bb_a0_regs[1] = {
+	&dbg_int0_bb_a0,
+};
+
+static const u16 dbg_int0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_int0_bb_b0 = {
+	0, 1, dbg_int0_bb_b0_attn_idx, 0x10180, 0x1018c, 0x10188, 0x10184
+};
+
+static struct attn_hw_reg *dbg_int_bb_b0_regs[1] = {
+	&dbg_int0_bb_b0,
+};
+
+static const u16 dbg_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_int0_k2 = {
+	0, 1, dbg_int0_k2_attn_idx, 0x10180, 0x1018c, 0x10188, 0x10184
+};
+
+static struct attn_hw_reg *dbg_int_k2_regs[1] = {
+	&dbg_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *dbg_prty_attn_desc[1] = {
+	"dbg_mem001_i_mem_prty",
+};
+#else
+#define dbg_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 dbg_prty1_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_prty1_bb_a0 = {
+	0, 1, dbg_prty1_bb_a0_attn_idx, 0x10200, 0x1020c, 0x10208, 0x10204
+};
+
+static struct attn_hw_reg *dbg_prty_bb_a0_regs[1] = {
+	&dbg_prty1_bb_a0,
+};
+
+static const u16 dbg_prty1_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_prty1_bb_b0 = {
+	0, 1, dbg_prty1_bb_b0_attn_idx, 0x10200, 0x1020c, 0x10208, 0x10204
+};
+
+static struct attn_hw_reg *dbg_prty_bb_b0_regs[1] = {
+	&dbg_prty1_bb_b0,
+};
+
+static const u16 dbg_prty1_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg dbg_prty1_k2 = {
+	0, 1, dbg_prty1_k2_attn_idx, 0x10200, 0x1020c, 0x10208, 0x10204
+};
+
+static struct attn_hw_reg *dbg_prty_k2_regs[1] = {
+	&dbg_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *nig_int_attn_desc[196] = {
+	"nig_address_error",
+	"nig_debug_fifo_error",
+	"nig_dorq_fifo_error",
+	"nig_dbg_syncfifo_error_wr",
+	"nig_dorq_syncfifo_error_wr",
+	"nig_storm_syncfifo_error_wr",
+	"nig_dbgmux_syncfifo_error_wr",
+	"nig_msdm_syncfifo_error_wr",
+	"nig_tsdm_syncfifo_error_wr",
+	"nig_usdm_syncfifo_error_wr",
+	"nig_xsdm_syncfifo_error_wr",
+	"nig_ysdm_syncfifo_error_wr",
+	"nig_tx_sopq0_error",
+	"nig_tx_sopq1_error",
+	"nig_tx_sopq2_error",
+	"nig_tx_sopq3_error",
+	"nig_tx_sopq4_error",
+	"nig_tx_sopq5_error",
+	"nig_tx_sopq6_error",
+	"nig_tx_sopq7_error",
+	"nig_tx_sopq8_error",
+	"nig_tx_sopq9_error",
+	"nig_tx_sopq10_error",
+	"nig_tx_sopq11_error",
+	"nig_tx_sopq12_error",
+	"nig_tx_sopq13_error",
+	"nig_tx_sopq14_error",
+	"nig_tx_sopq15_error",
+	"nig_lb_sopq0_error",
+	"nig_lb_sopq1_error",
+	"nig_lb_sopq2_error",
+	"nig_lb_sopq3_error",
+	"nig_lb_sopq4_error",
+	"nig_lb_sopq5_error",
+	"nig_lb_sopq6_error",
+	"nig_lb_sopq7_error",
+	"nig_lb_sopq8_error",
+	"nig_lb_sopq9_error",
+	"nig_lb_sopq10_error",
+	"nig_lb_sopq11_error",
+	"nig_lb_sopq12_error",
+	"nig_lb_sopq13_error",
+	"nig_lb_sopq14_error",
+	"nig_lb_sopq15_error",
+	"nig_p0_purelb_sopq_error",
+	"nig_p0_rx_macfifo_error",
+	"nig_p0_tx_macfifo_error",
+	"nig_p0_tx_bmb_fifo_error",
+	"nig_p0_lb_bmb_fifo_error",
+	"nig_p0_tx_btb_fifo_error",
+	"nig_p0_lb_btb_fifo_error",
+	"nig_p0_rx_llh_dfifo_error",
+	"nig_p0_tx_llh_dfifo_error",
+	"nig_p0_lb_llh_dfifo_error",
+	"nig_p0_rx_llh_hfifo_error",
+	"nig_p0_tx_llh_hfifo_error",
+	"nig_p0_lb_llh_hfifo_error",
+	"nig_p0_rx_llh_rfifo_error",
+	"nig_p0_tx_llh_rfifo_error",
+	"nig_p0_lb_llh_rfifo_error",
+	"nig_p0_storm_fifo_error",
+	"nig_p0_storm_dscr_fifo_error",
+	"nig_p0_tx_gnt_fifo_error",
+	"nig_p0_lb_gnt_fifo_error",
+	"nig_p0_tx_pause_too_long_int",
+	"nig_p0_tc0_pause_too_long_int",
+	"nig_p0_tc1_pause_too_long_int",
+	"nig_p0_tc2_pause_too_long_int",
+	"nig_p0_tc3_pause_too_long_int",
+	"nig_p0_tc4_pause_too_long_int",
+	"nig_p0_tc5_pause_too_long_int",
+	"nig_p0_tc6_pause_too_long_int",
+	"nig_p0_tc7_pause_too_long_int",
+	"nig_p0_lb_tc0_pause_too_long_int",
+	"nig_p0_lb_tc1_pause_too_long_int",
+	"nig_p0_lb_tc2_pause_too_long_int",
+	"nig_p0_lb_tc3_pause_too_long_int",
+	"nig_p0_lb_tc4_pause_too_long_int",
+	"nig_p0_lb_tc5_pause_too_long_int",
+	"nig_p0_lb_tc6_pause_too_long_int",
+	"nig_p0_lb_tc7_pause_too_long_int",
+	"nig_p0_lb_tc8_pause_too_long_int",
+	"nig_p1_purelb_sopq_error",
+	"nig_p1_rx_macfifo_error",
+	"nig_p1_tx_macfifo_error",
+	"nig_p1_tx_bmb_fifo_error",
+	"nig_p1_lb_bmb_fifo_error",
+	"nig_p1_tx_btb_fifo_error",
+	"nig_p1_lb_btb_fifo_error",
+	"nig_p1_rx_llh_dfifo_error",
+	"nig_p1_tx_llh_dfifo_error",
+	"nig_p1_lb_llh_dfifo_error",
+	"nig_p1_rx_llh_hfifo_error",
+	"nig_p1_tx_llh_hfifo_error",
+	"nig_p1_lb_llh_hfifo_error",
+	"nig_p1_rx_llh_rfifo_error",
+	"nig_p1_tx_llh_rfifo_error",
+	"nig_p1_lb_llh_rfifo_error",
+	"nig_p1_storm_fifo_error",
+	"nig_p1_storm_dscr_fifo_error",
+	"nig_p1_tx_gnt_fifo_error",
+	"nig_p1_lb_gnt_fifo_error",
+	"nig_p1_tx_pause_too_long_int",
+	"nig_p1_tc0_pause_too_long_int",
+	"nig_p1_tc1_pause_too_long_int",
+	"nig_p1_tc2_pause_too_long_int",
+	"nig_p1_tc3_pause_too_long_int",
+	"nig_p1_tc4_pause_too_long_int",
+	"nig_p1_tc5_pause_too_long_int",
+	"nig_p1_tc6_pause_too_long_int",
+	"nig_p1_tc7_pause_too_long_int",
+	"nig_p1_lb_tc0_pause_too_long_int",
+	"nig_p1_lb_tc1_pause_too_long_int",
+	"nig_p1_lb_tc2_pause_too_long_int",
+	"nig_p1_lb_tc3_pause_too_long_int",
+	"nig_p1_lb_tc4_pause_too_long_int",
+	"nig_p1_lb_tc5_pause_too_long_int",
+	"nig_p1_lb_tc6_pause_too_long_int",
+	"nig_p1_lb_tc7_pause_too_long_int",
+	"nig_p1_lb_tc8_pause_too_long_int",
+	"nig_p2_purelb_sopq_error",
+	"nig_p2_rx_macfifo_error",
+	"nig_p2_tx_macfifo_error",
+	"nig_p2_tx_bmb_fifo_error",
+	"nig_p2_lb_bmb_fifo_error",
+	"nig_p2_tx_btb_fifo_error",
+	"nig_p2_lb_btb_fifo_error",
+	"nig_p2_rx_llh_dfifo_error",
+	"nig_p2_tx_llh_dfifo_error",
+	"nig_p2_lb_llh_dfifo_error",
+	"nig_p2_rx_llh_hfifo_error",
+	"nig_p2_tx_llh_hfifo_error",
+	"nig_p2_lb_llh_hfifo_error",
+	"nig_p2_rx_llh_rfifo_error",
+	"nig_p2_tx_llh_rfifo_error",
+	"nig_p2_lb_llh_rfifo_error",
+	"nig_p2_storm_fifo_error",
+	"nig_p2_storm_dscr_fifo_error",
+	"nig_p2_tx_gnt_fifo_error",
+	"nig_p2_lb_gnt_fifo_error",
+	"nig_p2_tx_pause_too_long_int",
+	"nig_p2_tc0_pause_too_long_int",
+	"nig_p2_tc1_pause_too_long_int",
+	"nig_p2_tc2_pause_too_long_int",
+	"nig_p2_tc3_pause_too_long_int",
+	"nig_p2_tc4_pause_too_long_int",
+	"nig_p2_tc5_pause_too_long_int",
+	"nig_p2_tc6_pause_too_long_int",
+	"nig_p2_tc7_pause_too_long_int",
+	"nig_p2_lb_tc0_pause_too_long_int",
+	"nig_p2_lb_tc1_pause_too_long_int",
+	"nig_p2_lb_tc2_pause_too_long_int",
+	"nig_p2_lb_tc3_pause_too_long_int",
+	"nig_p2_lb_tc4_pause_too_long_int",
+	"nig_p2_lb_tc5_pause_too_long_int",
+	"nig_p2_lb_tc6_pause_too_long_int",
+	"nig_p2_lb_tc7_pause_too_long_int",
+	"nig_p2_lb_tc8_pause_too_long_int",
+	"nig_p3_purelb_sopq_error",
+	"nig_p3_rx_macfifo_error",
+	"nig_p3_tx_macfifo_error",
+	"nig_p3_tx_bmb_fifo_error",
+	"nig_p3_lb_bmb_fifo_error",
+	"nig_p3_tx_btb_fifo_error",
+	"nig_p3_lb_btb_fifo_error",
+	"nig_p3_rx_llh_dfifo_error",
+	"nig_p3_tx_llh_dfifo_error",
+	"nig_p3_lb_llh_dfifo_error",
+	"nig_p3_rx_llh_hfifo_error",
+	"nig_p3_tx_llh_hfifo_error",
+	"nig_p3_lb_llh_hfifo_error",
+	"nig_p3_rx_llh_rfifo_error",
+	"nig_p3_tx_llh_rfifo_error",
+	"nig_p3_lb_llh_rfifo_error",
+	"nig_p3_storm_fifo_error",
+	"nig_p3_storm_dscr_fifo_error",
+	"nig_p3_tx_gnt_fifo_error",
+	"nig_p3_lb_gnt_fifo_error",
+	"nig_p3_tx_pause_too_long_int",
+	"nig_p3_tc0_pause_too_long_int",
+	"nig_p3_tc1_pause_too_long_int",
+	"nig_p3_tc2_pause_too_long_int",
+	"nig_p3_tc3_pause_too_long_int",
+	"nig_p3_tc4_pause_too_long_int",
+	"nig_p3_tc5_pause_too_long_int",
+	"nig_p3_tc6_pause_too_long_int",
+	"nig_p3_tc7_pause_too_long_int",
+	"nig_p3_lb_tc0_pause_too_long_int",
+	"nig_p3_lb_tc1_pause_too_long_int",
+	"nig_p3_lb_tc2_pause_too_long_int",
+	"nig_p3_lb_tc3_pause_too_long_int",
+	"nig_p3_lb_tc4_pause_too_long_int",
+	"nig_p3_lb_tc5_pause_too_long_int",
+	"nig_p3_lb_tc6_pause_too_long_int",
+	"nig_p3_lb_tc7_pause_too_long_int",
+	"nig_p3_lb_tc8_pause_too_long_int",
+};
+#else
+#define nig_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 nig_int0_bb_a0_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg nig_int0_bb_a0 = {
+	0, 12, nig_int0_bb_a0_attn_idx, 0x500040, 0x50004c, 0x500048, 0x500044
+};
+
+static const u16 nig_int1_bb_a0_attn_idx[32] = {
+	12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
+	30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
+};
+
+static struct attn_hw_reg nig_int1_bb_a0 = {
+	1, 32, nig_int1_bb_a0_attn_idx, 0x500050, 0x50005c, 0x500058, 0x500054
+};
+
+static const u16 nig_int2_bb_a0_attn_idx[20] = {
+	44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
+	62, 63,
+};
+
+static struct attn_hw_reg nig_int2_bb_a0 = {
+	2, 20, nig_int2_bb_a0_attn_idx, 0x500060, 0x50006c, 0x500068, 0x500064
+};
+
+static const u16 nig_int3_bb_a0_attn_idx[18] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+};
+
+static struct attn_hw_reg nig_int3_bb_a0 = {
+	3, 18, nig_int3_bb_a0_attn_idx, 0x500070, 0x50007c, 0x500078, 0x500074
+};
+
+static const u16 nig_int4_bb_a0_attn_idx[20] = {
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
+	100, 101,
+};
+
+static struct attn_hw_reg nig_int4_bb_a0 = {
+	4, 20, nig_int4_bb_a0_attn_idx, 0x500080, 0x50008c, 0x500088, 0x500084
+};
+
+static const u16 nig_int5_bb_a0_attn_idx[18] = {
+	102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115,
+	116, 117, 118, 119,
+};
+
+static struct attn_hw_reg nig_int5_bb_a0 = {
+	5, 18, nig_int5_bb_a0_attn_idx, 0x500090, 0x50009c, 0x500098, 0x500094
+};
+
+static struct attn_hw_reg *nig_int_bb_a0_regs[6] = {
+	&nig_int0_bb_a0, &nig_int1_bb_a0, &nig_int2_bb_a0, &nig_int3_bb_a0,
+	&nig_int4_bb_a0, &nig_int5_bb_a0,
+};
+
+static const u16 nig_int0_bb_b0_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg nig_int0_bb_b0 = {
+	0, 12, nig_int0_bb_b0_attn_idx, 0x500040, 0x50004c, 0x500048, 0x500044
+};
+
+static const u16 nig_int1_bb_b0_attn_idx[32] = {
+	12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
+	30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
+};
+
+static struct attn_hw_reg nig_int1_bb_b0 = {
+	1, 32, nig_int1_bb_b0_attn_idx, 0x500050, 0x50005c, 0x500058, 0x500054
+};
+
+static const u16 nig_int2_bb_b0_attn_idx[20] = {
+	44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
+	62, 63,
+};
+
+static struct attn_hw_reg nig_int2_bb_b0 = {
+	2, 20, nig_int2_bb_b0_attn_idx, 0x500060, 0x50006c, 0x500068, 0x500064
+};
+
+static const u16 nig_int3_bb_b0_attn_idx[18] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+};
+
+static struct attn_hw_reg nig_int3_bb_b0 = {
+	3, 18, nig_int3_bb_b0_attn_idx, 0x500070, 0x50007c, 0x500078, 0x500074
+};
+
+static const u16 nig_int4_bb_b0_attn_idx[20] = {
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
+	100, 101,
+};
+
+static struct attn_hw_reg nig_int4_bb_b0 = {
+	4, 20, nig_int4_bb_b0_attn_idx, 0x500080, 0x50008c, 0x500088, 0x500084
+};
+
+static const u16 nig_int5_bb_b0_attn_idx[18] = {
+	102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115,
+	116, 117, 118, 119,
+};
+
+static struct attn_hw_reg nig_int5_bb_b0 = {
+	5, 18, nig_int5_bb_b0_attn_idx, 0x500090, 0x50009c, 0x500098, 0x500094
+};
+
+static struct attn_hw_reg *nig_int_bb_b0_regs[6] = {
+	&nig_int0_bb_b0, &nig_int1_bb_b0, &nig_int2_bb_b0, &nig_int3_bb_b0,
+	&nig_int4_bb_b0, &nig_int5_bb_b0,
+};
+
+static const u16 nig_int0_k2_attn_idx[12] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
+};
+
+static struct attn_hw_reg nig_int0_k2 = {
+	0, 12, nig_int0_k2_attn_idx, 0x500040, 0x50004c, 0x500048, 0x500044
+};
+
+static const u16 nig_int1_k2_attn_idx[32] = {
+	12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
+	30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
+};
+
+static struct attn_hw_reg nig_int1_k2 = {
+	1, 32, nig_int1_k2_attn_idx, 0x500050, 0x50005c, 0x500058, 0x500054
+};
+
+static const u16 nig_int2_k2_attn_idx[20] = {
+	44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
+	62, 63,
+};
+
+static struct attn_hw_reg nig_int2_k2 = {
+	2, 20, nig_int2_k2_attn_idx, 0x500060, 0x50006c, 0x500068, 0x500064
+};
+
+static const u16 nig_int3_k2_attn_idx[18] = {
+	64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
+};
+
+static struct attn_hw_reg nig_int3_k2 = {
+	3, 18, nig_int3_k2_attn_idx, 0x500070, 0x50007c, 0x500078, 0x500074
+};
+
+static const u16 nig_int4_k2_attn_idx[20] = {
+	82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
+	100, 101,
+};
+
+static struct attn_hw_reg nig_int4_k2 = {
+	4, 20, nig_int4_k2_attn_idx, 0x500080, 0x50008c, 0x500088, 0x500084
+};
+
+static const u16 nig_int5_k2_attn_idx[18] = {
+	102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115,
+	116, 117, 118, 119,
+};
+
+static struct attn_hw_reg nig_int5_k2 = {
+	5, 18, nig_int5_k2_attn_idx, 0x500090, 0x50009c, 0x500098, 0x500094
+};
+
+static const u16 nig_int6_k2_attn_idx[20] = {
+	120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133,
+	134, 135, 136, 137, 138, 139,
+};
+
+static struct attn_hw_reg nig_int6_k2 = {
+	6, 20, nig_int6_k2_attn_idx, 0x5000a0, 0x5000ac, 0x5000a8, 0x5000a4
+};
+
+static const u16 nig_int7_k2_attn_idx[18] = {
+	140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153,
+	154, 155, 156, 157,
+};
+
+static struct attn_hw_reg nig_int7_k2 = {
+	7, 18, nig_int7_k2_attn_idx, 0x5000b0, 0x5000bc, 0x5000b8, 0x5000b4
+};
+
+static const u16 nig_int8_k2_attn_idx[20] = {
+	158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171,
+	172, 173, 174, 175, 176, 177,
+};
+
+static struct attn_hw_reg nig_int8_k2 = {
+	8, 20, nig_int8_k2_attn_idx, 0x5000c0, 0x5000cc, 0x5000c8, 0x5000c4
+};
+
+static const u16 nig_int9_k2_attn_idx[18] = {
+	178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
+	192, 193, 194, 195,
+};
+
+static struct attn_hw_reg nig_int9_k2 = {
+	9, 18, nig_int9_k2_attn_idx, 0x5000d0, 0x5000dc, 0x5000d8, 0x5000d4
+};
+
+static struct attn_hw_reg *nig_int_k2_regs[10] = {
+	&nig_int0_k2, &nig_int1_k2, &nig_int2_k2, &nig_int3_k2, &nig_int4_k2,
+	&nig_int5_k2, &nig_int6_k2, &nig_int7_k2, &nig_int8_k2, &nig_int9_k2,
+};
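+/* K2 exposes four NIG ports (p0-p3), so its interrupt set spans ten
+ * registers covering attention bits 0-195, whereas the BB variants
+ * cover two ports (bits 0-119) across six registers.
+ */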
+
+#ifdef ATTN_DESC
+static const char *nig_prty_attn_desc[113] = {
+	"nig_datapath_parity_error",
+	"nig_mem107_i_mem_prty",
+	"nig_mem103_i_mem_prty",
+	"nig_mem104_i_mem_prty",
+	"nig_mem105_i_mem_prty",
+	"nig_mem106_i_mem_prty",
+	"nig_mem072_i_mem_prty",
+	"nig_mem071_i_mem_prty",
+	"nig_mem074_i_mem_prty",
+	"nig_mem073_i_mem_prty",
+	"nig_mem076_i_mem_prty",
+	"nig_mem075_i_mem_prty",
+	"nig_mem078_i_mem_prty",
+	"nig_mem077_i_mem_prty",
+	"nig_mem055_i_mem_prty",
+	"nig_mem062_i_mem_prty",
+	"nig_mem063_i_mem_prty",
+	"nig_mem064_i_mem_prty",
+	"nig_mem065_i_mem_prty",
+	"nig_mem066_i_mem_prty",
+	"nig_mem067_i_mem_prty",
+	"nig_mem068_i_mem_prty",
+	"nig_mem069_i_mem_prty",
+	"nig_mem070_i_mem_prty",
+	"nig_mem056_i_mem_prty",
+	"nig_mem057_i_mem_prty",
+	"nig_mem058_i_mem_prty",
+	"nig_mem059_i_mem_prty",
+	"nig_mem060_i_mem_prty",
+	"nig_mem061_i_mem_prty",
+	"nig_mem035_i_mem_prty",
+	"nig_mem046_i_mem_prty",
+	"nig_mem051_i_mem_prty",
+	"nig_mem052_i_mem_prty",
+	"nig_mem090_i_mem_prty",
+	"nig_mem089_i_mem_prty",
+	"nig_mem092_i_mem_prty",
+	"nig_mem091_i_mem_prty",
+	"nig_mem109_i_mem_prty",
+	"nig_mem110_i_mem_prty",
+	"nig_mem001_i_mem_prty",
+	"nig_mem008_i_mem_prty",
+	"nig_mem009_i_mem_prty",
+	"nig_mem010_i_mem_prty",
+	"nig_mem011_i_mem_prty",
+	"nig_mem012_i_mem_prty",
+	"nig_mem013_i_mem_prty",
+	"nig_mem014_i_mem_prty",
+	"nig_mem015_i_mem_prty",
+	"nig_mem016_i_mem_prty",
+	"nig_mem002_i_mem_prty",
+	"nig_mem003_i_mem_prty",
+	"nig_mem004_i_mem_prty",
+	"nig_mem005_i_mem_prty",
+	"nig_mem006_i_mem_prty",
+	"nig_mem007_i_mem_prty",
+	"nig_mem080_i_mem_prty",
+	"nig_mem081_i_mem_prty",
+	"nig_mem082_i_mem_prty",
+	"nig_mem083_i_mem_prty",
+	"nig_mem048_i_mem_prty",
+	"nig_mem049_i_mem_prty",
+	"nig_mem102_i_mem_prty",
+	"nig_mem087_i_mem_prty",
+	"nig_mem086_i_mem_prty",
+	"nig_mem088_i_mem_prty",
+	"nig_mem079_i_mem_prty",
+	"nig_mem047_i_mem_prty",
+	"nig_mem050_i_mem_prty",
+	"nig_mem053_i_mem_prty",
+	"nig_mem054_i_mem_prty",
+	"nig_mem036_i_mem_prty",
+	"nig_mem037_i_mem_prty",
+	"nig_mem038_i_mem_prty",
+	"nig_mem039_i_mem_prty",
+	"nig_mem040_i_mem_prty",
+	"nig_mem041_i_mem_prty",
+	"nig_mem042_i_mem_prty",
+	"nig_mem043_i_mem_prty",
+	"nig_mem044_i_mem_prty",
+	"nig_mem045_i_mem_prty",
+	"nig_mem093_i_mem_prty",
+	"nig_mem094_i_mem_prty",
+	"nig_mem027_i_mem_prty",
+	"nig_mem028_i_mem_prty",
+	"nig_mem029_i_mem_prty",
+	"nig_mem030_i_mem_prty",
+	"nig_mem017_i_mem_prty",
+	"nig_mem018_i_mem_prty",
+	"nig_mem095_i_mem_prty",
+	"nig_mem084_i_mem_prty",
+	"nig_mem085_i_mem_prty",
+	"nig_mem099_i_mem_prty",
+	"nig_mem100_i_mem_prty",
+	"nig_mem096_i_mem_prty",
+	"nig_mem097_i_mem_prty",
+	"nig_mem098_i_mem_prty",
+	"nig_mem031_i_mem_prty",
+	"nig_mem032_i_mem_prty",
+	"nig_mem033_i_mem_prty",
+	"nig_mem034_i_mem_prty",
+	"nig_mem019_i_mem_prty",
+	"nig_mem020_i_mem_prty",
+	"nig_mem021_i_mem_prty",
+	"nig_mem022_i_mem_prty",
+	"nig_mem101_i_mem_prty",
+	"nig_mem023_i_mem_prty",
+	"nig_mem024_i_mem_prty",
+	"nig_mem025_i_mem_prty",
+	"nig_mem026_i_mem_prty",
+	"nig_mem108_i_mem_prty",
+	"nig_mem031_ext_i_mem_prty",
+	"nig_mem034_ext_i_mem_prty",
+};
+#else
+#define nig_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 nig_prty1_bb_a0_attn_idx[31] = {
+	1, 2, 5, 12, 13, 23, 35, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
+	52, 53, 54, 55, 56, 60, 61, 62, 63, 64, 65, 66,
+};
+
+static struct attn_hw_reg nig_prty1_bb_a0 = {
+	0, 31, nig_prty1_bb_a0_attn_idx, 0x500200, 0x50020c, 0x500208, 0x500204
+};
+
+static const u16 nig_prty2_bb_a0_attn_idx[31] = {
+	33, 69, 70, 90, 91, 8, 11, 10, 14, 17, 18, 19, 20, 21, 22, 7, 6, 24, 25,
+	26, 27, 28, 29, 15, 16, 57, 58, 59, 9, 94, 95,
+};
+
+static struct attn_hw_reg nig_prty2_bb_a0 = {
+	1, 31, nig_prty2_bb_a0_attn_idx, 0x500210, 0x50021c, 0x500218, 0x500214
+};
+
+static const u16 nig_prty3_bb_a0_attn_idx[31] = {
+	96, 97, 98, 103, 104, 92, 93, 105, 106, 107, 108, 109, 80, 31, 67, 83,
+	84, 3, 68, 85, 86, 89, 77, 78, 79, 4, 32, 36, 81, 82, 87,
+};
+
+static struct attn_hw_reg nig_prty3_bb_a0 = {
+	2, 31, nig_prty3_bb_a0_attn_idx, 0x500220, 0x50022c, 0x500228, 0x500224
+};
+
+static const u16 nig_prty4_bb_a0_attn_idx[14] = {
+	88, 101, 102, 75, 71, 74, 76, 73, 72, 34, 37, 99, 30, 100,
+};
+
+static struct attn_hw_reg nig_prty4_bb_a0 = {
+	3, 14, nig_prty4_bb_a0_attn_idx, 0x500230, 0x50023c, 0x500238, 0x500234
+};
+
+static struct attn_hw_reg *nig_prty_bb_a0_regs[4] = {
+	&nig_prty1_bb_a0, &nig_prty2_bb_a0, &nig_prty3_bb_a0, &nig_prty4_bb_a0,
+};
+
+static const u16 nig_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg nig_prty0_bb_b0 = {
+	0, 1, nig_prty0_bb_b0_attn_idx, 0x5000a0, 0x5000ac, 0x5000a8, 0x5000a4
+};
+
+static const u16 nig_prty1_bb_b0_attn_idx[31] = {
+	4, 5, 9, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
+	48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
+};
+
+static struct attn_hw_reg nig_prty1_bb_b0 = {
+	1, 31, nig_prty1_bb_b0_attn_idx, 0x500200, 0x50020c, 0x500208, 0x500204
+};
+
+static const u16 nig_prty2_bb_b0_attn_idx[31] = {
+	90, 91, 64, 63, 65, 8, 11, 10, 13, 12, 66, 14, 17, 18, 19, 20, 21, 22,
+	23, 7, 6, 24, 25, 26, 27, 28, 29, 15, 16, 92, 93,
+};
+
+static struct attn_hw_reg nig_prty2_bb_b0 = {
+	2, 31, nig_prty2_bb_b0_attn_idx, 0x500210, 0x50021c, 0x500218, 0x500214
+};
+
+static const u16 nig_prty3_bb_b0_attn_idx[31] = {
+	94, 95, 96, 97, 99, 100, 103, 104, 105, 62, 108, 109, 80, 31, 1, 67, 60,
+	69, 83, 84, 2, 3, 110, 61, 68, 70, 85, 86, 111, 112, 89,
+};
+
+static struct attn_hw_reg nig_prty3_bb_b0 = {
+	3, 31, nig_prty3_bb_b0_attn_idx, 0x500220, 0x50022c, 0x500228, 0x500224
+};
+
+static const u16 nig_prty4_bb_b0_attn_idx[17] = {
+	106, 107, 87, 88, 81, 82, 101, 102, 75, 71, 74, 76, 77, 78, 79, 73, 72,
+};
+
+static struct attn_hw_reg nig_prty4_bb_b0 = {
+	4, 17, nig_prty4_bb_b0_attn_idx, 0x500230, 0x50023c, 0x500238, 0x500234
+};
+
+static struct attn_hw_reg *nig_prty_bb_b0_regs[5] = {
+	&nig_prty0_bb_b0, &nig_prty1_bb_b0, &nig_prty2_bb_b0, &nig_prty3_bb_b0,
+	&nig_prty4_bb_b0,
+};
+
+static const u16 nig_prty0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg nig_prty0_k2 = {
+	0, 1, nig_prty0_k2_attn_idx, 0x5000e0, 0x5000ec, 0x5000e8, 0x5000e4
+};
+
+static const u16 nig_prty1_k2_attn_idx[31] = {
+	1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+	21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
+};
+
+static struct attn_hw_reg nig_prty1_k2 = {
+	1, 31, nig_prty1_k2_attn_idx, 0x500200, 0x50020c, 0x500208, 0x500204
+};
+
+static const u16 nig_prty2_k2_attn_idx[31] = {
+	67, 60, 61, 68, 32, 33, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
+	37, 36, 81, 82, 83, 84, 85, 86, 48, 49, 87, 88, 89,
+};
+
+static struct attn_hw_reg nig_prty2_k2 = {
+	2, 31, nig_prty2_k2_attn_idx, 0x500210, 0x50021c, 0x500218, 0x500214
+};
+
+static const u16 nig_prty3_k2_attn_idx[31] = {
+	94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 92, 93, 105, 62, 106,
+	107, 108, 109, 59, 90, 91, 64, 55, 41, 42, 43, 63, 65, 35, 34,
+};
+
+static struct attn_hw_reg nig_prty3_k2 = {
+	3, 31, nig_prty3_k2_attn_idx, 0x500220, 0x50022c, 0x500228, 0x500224
+};
+
+static const u16 nig_prty4_k2_attn_idx[14] = {
+	44, 45, 46, 47, 40, 50, 66, 56, 57, 58, 51, 52, 53, 54,
+};
+
+static struct attn_hw_reg nig_prty4_k2 = {
+	4, 14, nig_prty4_k2_attn_idx, 0x500230, 0x50023c, 0x500238, 0x500234
+};
+
+static struct attn_hw_reg *nig_prty_k2_regs[5] = {
+	&nig_prty0_k2, &nig_prty1_k2, &nig_prty2_k2, &nig_prty3_k2,
+	&nig_prty4_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *wol_int_attn_desc[1] = {
+	"wol_address_error",
+};
+#else
+#define wol_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 wol_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg wol_int0_k2 = {
+	0, 1, wol_int0_k2_attn_idx, 0x600040, 0x60004c, 0x600048, 0x600044
+};
+
+static struct attn_hw_reg *wol_int_k2_regs[1] = {
+	&wol_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *wol_prty_attn_desc[24] = {
+	"wol_mem017_i_mem_prty",
+	"wol_mem018_i_mem_prty",
+	"wol_mem019_i_mem_prty",
+	"wol_mem020_i_mem_prty",
+	"wol_mem021_i_mem_prty",
+	"wol_mem022_i_mem_prty",
+	"wol_mem023_i_mem_prty",
+	"wol_mem024_i_mem_prty",
+	"wol_mem001_i_mem_prty",
+	"wol_mem008_i_mem_prty",
+	"wol_mem009_i_mem_prty",
+	"wol_mem010_i_mem_prty",
+	"wol_mem011_i_mem_prty",
+	"wol_mem012_i_mem_prty",
+	"wol_mem013_i_mem_prty",
+	"wol_mem014_i_mem_prty",
+	"wol_mem015_i_mem_prty",
+	"wol_mem016_i_mem_prty",
+	"wol_mem002_i_mem_prty",
+	"wol_mem003_i_mem_prty",
+	"wol_mem004_i_mem_prty",
+	"wol_mem005_i_mem_prty",
+	"wol_mem006_i_mem_prty",
+	"wol_mem007_i_mem_prty",
+};
+#else
+#define wol_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 wol_prty1_k2_attn_idx[24] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23,
+};
+
+static struct attn_hw_reg wol_prty1_k2 = {
+	0, 24, wol_prty1_k2_attn_idx, 0x600200, 0x60020c, 0x600208, 0x600204
+};
+
+static struct attn_hw_reg *wol_prty_k2_regs[1] = {
+	&wol_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *bmbn_int_attn_desc[1] = {
+	"bmbn_address_error",
+};
+#else
+#define bmbn_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 bmbn_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg bmbn_int0_k2 = {
+	0, 1, bmbn_int0_k2_attn_idx, 0x610040, 0x61004c, 0x610048, 0x610044
+};
+
+static struct attn_hw_reg *bmbn_int_k2_regs[1] = {
+	&bmbn_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ipc_int_attn_desc[14] = {
+	"ipc_address_error",
+	"ipc_unused_0",
+	"ipc_vmain_por_assert",
+	"ipc_vmain_por_deassert",
+	"ipc_perst_assert",
+	"ipc_perst_deassert",
+	"ipc_otp_ecc_ded_0",
+	"ipc_otp_ecc_ded_1",
+	"ipc_otp_ecc_ded_2",
+	"ipc_otp_ecc_ded_3",
+	"ipc_otp_ecc_ded_4",
+	"ipc_otp_ecc_ded_5",
+	"ipc_otp_ecc_ded_6",
+	"ipc_otp_ecc_ded_7",
+};
+#else
+#define ipc_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ipc_int0_bb_a0_attn_idx[5] = {
+	0, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg ipc_int0_bb_a0 = {
+	0, 5, ipc_int0_bb_a0_attn_idx, 0x2050c, 0x20518, 0x20514, 0x20510
+};
+
+static struct attn_hw_reg *ipc_int_bb_a0_regs[1] = {
+	&ipc_int0_bb_a0,
+};
+
+static const u16 ipc_int0_bb_b0_attn_idx[13] = {
+	0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+};
+
+static struct attn_hw_reg ipc_int0_bb_b0 = {
+	0, 13, ipc_int0_bb_b0_attn_idx, 0x2050c, 0x20518, 0x20514, 0x20510
+};
+
+static struct attn_hw_reg *ipc_int_bb_b0_regs[1] = {
+	&ipc_int0_bb_b0,
+};
+
+static const u16 ipc_int0_k2_attn_idx[5] = {
+	0, 2, 3, 4, 5,
+};
+
+static struct attn_hw_reg ipc_int0_k2 = {
+	0, 5, ipc_int0_k2_attn_idx, 0x202dc, 0x202e8, 0x202e4, 0x202e0
+};
+
+static struct attn_hw_reg *ipc_int_k2_regs[1] = {
+	&ipc_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ipc_prty_attn_desc[1] = {
+	"ipc_fake_par_err",
+};
+#else
+#define ipc_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 ipc_prty0_bb_a0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ipc_prty0_bb_a0 = {
+	0, 1, ipc_prty0_bb_a0_attn_idx, 0x2051c, 0x20528, 0x20524, 0x20520
+};
+
+static struct attn_hw_reg *ipc_prty_bb_a0_regs[1] = {
+	&ipc_prty0_bb_a0,
+};
+
+static const u16 ipc_prty0_bb_b0_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ipc_prty0_bb_b0 = {
+	0, 1, ipc_prty0_bb_b0_attn_idx, 0x2051c, 0x20528, 0x20524, 0x20520
+};
+
+static struct attn_hw_reg *ipc_prty_bb_b0_regs[1] = {
+	&ipc_prty0_bb_b0,
+};
+
+#ifdef ATTN_DESC
+static const char *nwm_int_attn_desc[18] = {
+	"nwm_address_error",
+	"nwm_tx_overflow_0",
+	"nwm_tx_underflow_0",
+	"nwm_tx_overflow_1",
+	"nwm_tx_underflow_1",
+	"nwm_tx_overflow_2",
+	"nwm_tx_underflow_2",
+	"nwm_tx_overflow_3",
+	"nwm_tx_underflow_3",
+	"nwm_unused_0",
+	"nwm_ln0_at_10M",
+	"nwm_ln0_at_100M",
+	"nwm_ln1_at_10M",
+	"nwm_ln1_at_100M",
+	"nwm_ln2_at_10M",
+	"nwm_ln2_at_100M",
+	"nwm_ln3_at_10M",
+	"nwm_ln3_at_100M",
+};
+#else
+#define nwm_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 nwm_int0_k2_attn_idx[17] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17,
+};
+
+static struct attn_hw_reg nwm_int0_k2 = {
+	0, 17, nwm_int0_k2_attn_idx, 0x800004, 0x800010, 0x80000c, 0x800008
+};
+
+static struct attn_hw_reg *nwm_int_k2_regs[1] = {
+	&nwm_int0_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *nwm_prty_attn_desc[72] = {
+	"nwm_mem020_i_mem_prty",
+	"nwm_mem028_i_mem_prty",
+	"nwm_mem036_i_mem_prty",
+	"nwm_mem044_i_mem_prty",
+	"nwm_mem023_i_mem_prty",
+	"nwm_mem031_i_mem_prty",
+	"nwm_mem039_i_mem_prty",
+	"nwm_mem047_i_mem_prty",
+	"nwm_mem024_i_mem_prty",
+	"nwm_mem032_i_mem_prty",
+	"nwm_mem040_i_mem_prty",
+	"nwm_mem048_i_mem_prty",
+	"nwm_mem018_i_mem_prty",
+	"nwm_mem026_i_mem_prty",
+	"nwm_mem034_i_mem_prty",
+	"nwm_mem042_i_mem_prty",
+	"nwm_mem017_i_mem_prty",
+	"nwm_mem025_i_mem_prty",
+	"nwm_mem033_i_mem_prty",
+	"nwm_mem041_i_mem_prty",
+	"nwm_mem021_i_mem_prty",
+	"nwm_mem029_i_mem_prty",
+	"nwm_mem037_i_mem_prty",
+	"nwm_mem045_i_mem_prty",
+	"nwm_mem019_i_mem_prty",
+	"nwm_mem027_i_mem_prty",
+	"nwm_mem035_i_mem_prty",
+	"nwm_mem043_i_mem_prty",
+	"nwm_mem022_i_mem_prty",
+	"nwm_mem030_i_mem_prty",
+	"nwm_mem038_i_mem_prty",
+	"nwm_mem046_i_mem_prty",
+	"nwm_mem057_i_mem_prty",
+	"nwm_mem059_i_mem_prty",
+	"nwm_mem061_i_mem_prty",
+	"nwm_mem063_i_mem_prty",
+	"nwm_mem058_i_mem_prty",
+	"nwm_mem060_i_mem_prty",
+	"nwm_mem062_i_mem_prty",
+	"nwm_mem064_i_mem_prty",
+	"nwm_mem009_i_mem_prty",
+	"nwm_mem010_i_mem_prty",
+	"nwm_mem011_i_mem_prty",
+	"nwm_mem012_i_mem_prty",
+	"nwm_mem013_i_mem_prty",
+	"nwm_mem014_i_mem_prty",
+	"nwm_mem015_i_mem_prty",
+	"nwm_mem016_i_mem_prty",
+	"nwm_mem001_i_mem_prty",
+	"nwm_mem002_i_mem_prty",
+	"nwm_mem003_i_mem_prty",
+	"nwm_mem004_i_mem_prty",
+	"nwm_mem005_i_mem_prty",
+	"nwm_mem006_i_mem_prty",
+	"nwm_mem007_i_mem_prty",
+	"nwm_mem008_i_mem_prty",
+	"nwm_mem049_i_mem_prty",
+	"nwm_mem053_i_mem_prty",
+	"nwm_mem050_i_mem_prty",
+	"nwm_mem054_i_mem_prty",
+	"nwm_mem051_i_mem_prty",
+	"nwm_mem055_i_mem_prty",
+	"nwm_mem052_i_mem_prty",
+	"nwm_mem056_i_mem_prty",
+	"nwm_mem066_i_mem_prty",
+	"nwm_mem068_i_mem_prty",
+	"nwm_mem070_i_mem_prty",
+	"nwm_mem072_i_mem_prty",
+	"nwm_mem065_i_mem_prty",
+	"nwm_mem067_i_mem_prty",
+	"nwm_mem069_i_mem_prty",
+	"nwm_mem071_i_mem_prty",
+};
+#else
+#define nwm_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 nwm_prty1_k2_attn_idx[31] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+	20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
+};
+
+static struct attn_hw_reg nwm_prty1_k2 = {
+	0, 31, nwm_prty1_k2_attn_idx, 0x800200, 0x80020c, 0x800208, 0x800204
+};
+
+static const u16 nwm_prty2_k2_attn_idx[31] = {
+	31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
+};
+
+static struct attn_hw_reg nwm_prty2_k2 = {
+	1, 31, nwm_prty2_k2_attn_idx, 0x800210, 0x80021c, 0x800218, 0x800214
+};
+
+static const u16 nwm_prty3_k2_attn_idx[10] = {
+	62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
+};
+
+static struct attn_hw_reg nwm_prty3_k2 = {
+	2, 10, nwm_prty3_k2_attn_idx, 0x800220, 0x80022c, 0x800228, 0x800224
+};
+
+static struct attn_hw_reg *nwm_prty_k2_regs[3] = {
+	&nwm_prty1_k2, &nwm_prty2_k2, &nwm_prty3_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *nws_int_attn_desc[38] = {
+	"nws_address_error",
+	"nws_ln0_an_resolve_50g_cr2",
+	"nws_ln0_an_resolve_50g_kr2",
+	"nws_ln0_an_resolve_40g_cr4",
+	"nws_ln0_an_resolve_40g_kr4",
+	"nws_ln0_an_resolve_25g_gr",
+	"nws_ln0_an_resolve_25g_cr",
+	"nws_ln0_an_resolve_25g_kr",
+	"nws_ln0_an_resolve_10g_kr",
+	"nws_ln0_an_resolve_1g_kx",
+	"nws_unused_0",
+	"nws_ln1_an_resolve_50g_cr2",
+	"nws_ln1_an_resolve_50g_kr2",
+	"nws_ln1_an_resolve_40g_cr4",
+	"nws_ln1_an_resolve_40g_kr4",
+	"nws_ln1_an_resolve_25g_gr",
+	"nws_ln1_an_resolve_25g_cr",
+	"nws_ln1_an_resolve_25g_kr",
+	"nws_ln1_an_resolve_10g_kr",
+	"nws_ln1_an_resolve_1g_kx",
+	"nws_ln2_an_resolve_50g_cr2",
+	"nws_ln2_an_resolve_50g_kr2",
+	"nws_ln2_an_resolve_40g_cr4",
+	"nws_ln2_an_resolve_40g_kr4",
+	"nws_ln2_an_resolve_25g_gr",
+	"nws_ln2_an_resolve_25g_cr",
+	"nws_ln2_an_resolve_25g_kr",
+	"nws_ln2_an_resolve_10g_kr",
+	"nws_ln2_an_resolve_1g_kx",
+	"nws_ln3_an_resolve_50g_cr2",
+	"nws_ln3_an_resolve_50g_kr2",
+	"nws_ln3_an_resolve_40g_cr4",
+	"nws_ln3_an_resolve_40g_kr4",
+	"nws_ln3_an_resolve_25g_gr",
+	"nws_ln3_an_resolve_25g_cr",
+	"nws_ln3_an_resolve_25g_kr",
+	"nws_ln3_an_resolve_10g_kr",
+	"nws_ln3_an_resolve_1g_kx",
+};
+#else
+#define nws_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 nws_int0_k2_attn_idx[10] = {
+	0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+};
+
+static struct attn_hw_reg nws_int0_k2 = {
+	0, 10, nws_int0_k2_attn_idx, 0x700180, 0x70018c, 0x700188, 0x700184
+};
+
+static const u16 nws_int1_k2_attn_idx[9] = {
+	11, 12, 13, 14, 15, 16, 17, 18, 19,
+};
+
+static struct attn_hw_reg nws_int1_k2 = {
+	1, 9, nws_int1_k2_attn_idx, 0x700190, 0x70019c, 0x700198, 0x700194
+};
+
+static const u16 nws_int2_k2_attn_idx[9] = {
+	20, 21, 22, 23, 24, 25, 26, 27, 28,
+};
+
+static struct attn_hw_reg nws_int2_k2 = {
+	2, 9, nws_int2_k2_attn_idx, 0x7001a0, 0x7001ac, 0x7001a8, 0x7001a4
+};
+
+static const u16 nws_int3_k2_attn_idx[9] = {
+	29, 30, 31, 32, 33, 34, 35, 36, 37,
+};
+
+static struct attn_hw_reg nws_int3_k2 = {
+	3, 9, nws_int3_k2_attn_idx, 0x7001b0, 0x7001bc, 0x7001b8, 0x7001b4
+};
+
+static struct attn_hw_reg *nws_int_k2_regs[4] = {
+	&nws_int0_k2, &nws_int1_k2, &nws_int2_k2, &nws_int3_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *nws_prty_attn_desc[4] = {
+	"nws_mem003_i_mem_prty",
+	"nws_mem001_i_mem_prty",
+	"nws_mem004_i_mem_prty",
+	"nws_mem002_i_mem_prty",
+};
+#else
+#define nws_prty_attn_desc OSAL_NULL
+#endif
+
+static const u16 nws_prty1_k2_attn_idx[4] = {
+	0, 1, 2, 3,
+};
+
+static struct attn_hw_reg nws_prty1_k2 = {
+	0, 4, nws_prty1_k2_attn_idx, 0x700200, 0x70020c, 0x700208, 0x700204
+};
+
+static struct attn_hw_reg *nws_prty_k2_regs[1] = {
+	&nws_prty1_k2,
+};
+
+#ifdef ATTN_DESC
+static const char *ms_int_attn_desc[1] = {
+	"ms_address_error",
+};
+#else
+#define ms_int_attn_desc OSAL_NULL
+#endif
+
+static const u16 ms_int0_k2_attn_idx[1] = {
+	0,
+};
+
+static struct attn_hw_reg ms_int0_k2 = {
+	0, 1, ms_int0_k2_attn_idx, 0x6a0180, 0x6a018c, 0x6a0188, 0x6a0184
+};
+
+static struct attn_hw_reg *ms_int_k2_regs[1] = {
+	&ms_int0_k2,
+};
+
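+/* Top-level attention block table. Each entry carries the block name,
+ * its interrupt and parity description tables, and one register set per
+ * chip variant in the order BB A0, BB B0, K2; a register set is the
+ * interrupt and parity register counts followed by pointers to the
+ * matching attn_hw_reg arrays (OSAL_NULL where a variant has none).
+ */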
+static struct attn_hw_block attn_blocks[] = {
+	{"grc", grc_int_attn_desc, grc_prty_attn_desc, {
+		{1, 1, grc_int_bb_a0_regs, grc_prty_bb_a0_regs},
+		{1, 1, grc_int_bb_b0_regs, grc_prty_bb_b0_regs},
+		{1, 1, grc_int_k2_regs, grc_prty_k2_regs} } },
+	{"miscs", miscs_int_attn_desc, miscs_prty_attn_desc, {
+		{2, 0, miscs_int_bb_a0_regs, OSAL_NULL},
+		{2, 1, miscs_int_bb_b0_regs, miscs_prty_bb_b0_regs},
+		{1, 1, miscs_int_k2_regs, miscs_prty_k2_regs} } },
+	{"misc", misc_int_attn_desc, OSAL_NULL, {
+		{1, 0, misc_int_bb_a0_regs, OSAL_NULL},
+		{1, 0, misc_int_bb_b0_regs, OSAL_NULL},
+		{1, 0, misc_int_k2_regs, OSAL_NULL} } },
+	{"dbu", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+	{"pglue_b", pglue_b_int_attn_desc, pglue_b_prty_attn_desc, {
+		{1, 1, pglue_b_int_bb_a0_regs, pglue_b_prty_bb_a0_regs},
+		{1, 2, pglue_b_int_bb_b0_regs, pglue_b_prty_bb_b0_regs},
+		{1, 3, pglue_b_int_k2_regs, pglue_b_prty_k2_regs} } },
+	{"cnig", cnig_int_attn_desc, cnig_prty_attn_desc, {
+		{1, 0, cnig_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, cnig_int_bb_b0_regs, cnig_prty_bb_b0_regs},
+		{1, 1, cnig_int_k2_regs, cnig_prty_k2_regs} } },
+	{"cpmu", cpmu_int_attn_desc, OSAL_NULL, {
+		{1, 0, cpmu_int_bb_a0_regs, OSAL_NULL},
+		{1, 0, cpmu_int_bb_b0_regs, OSAL_NULL},
+		{1, 0, cpmu_int_k2_regs, OSAL_NULL} } },
+	{"ncsi", ncsi_int_attn_desc, ncsi_prty_attn_desc, {
+		{1, 1, ncsi_int_bb_a0_regs, ncsi_prty_bb_a0_regs},
+		{1, 1, ncsi_int_bb_b0_regs, ncsi_prty_bb_b0_regs},
+		{1, 1, ncsi_int_k2_regs, ncsi_prty_k2_regs} } },
+	{"opte", OSAL_NULL, opte_prty_attn_desc, {
+		{0, 1, OSAL_NULL, opte_prty_bb_a0_regs},
+		{0, 2, OSAL_NULL, opte_prty_bb_b0_regs},
+		{0, 2, OSAL_NULL, opte_prty_k2_regs} } },
+	{"bmb", bmb_int_attn_desc, bmb_prty_attn_desc, {
+		{12, 2, bmb_int_bb_a0_regs, bmb_prty_bb_a0_regs},
+		{12, 3, bmb_int_bb_b0_regs, bmb_prty_bb_b0_regs},
+		{12, 3, bmb_int_k2_regs, bmb_prty_k2_regs} } },
+	{"pcie", pcie_int_attn_desc, pcie_prty_attn_desc, {
+		{0, 1, OSAL_NULL, pcie_prty_bb_a0_regs},
+		{0, 1, OSAL_NULL, pcie_prty_bb_b0_regs},
+		{1, 2, pcie_int_k2_regs, pcie_prty_k2_regs} } },
+	{"mcp", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+	{"mcp2", OSAL_NULL, mcp2_prty_attn_desc, {
+		{0, 2, OSAL_NULL, mcp2_prty_bb_a0_regs},
+		{0, 2, OSAL_NULL, mcp2_prty_bb_b0_regs},
+		{0, 2, OSAL_NULL, mcp2_prty_k2_regs} } },
+	{"pswhst", pswhst_int_attn_desc, pswhst_prty_attn_desc, {
+		{1, 1, pswhst_int_bb_a0_regs, pswhst_prty_bb_a0_regs},
+		{1, 2, pswhst_int_bb_b0_regs, pswhst_prty_bb_b0_regs},
+		{1, 2, pswhst_int_k2_regs, pswhst_prty_k2_regs} } },
+	{"pswhst2", pswhst2_int_attn_desc, pswhst2_prty_attn_desc, {
+		{1, 0, pswhst2_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pswhst2_int_bb_b0_regs, pswhst2_prty_bb_b0_regs},
+		{1, 1, pswhst2_int_k2_regs, pswhst2_prty_k2_regs} } },
+	{"pswrd", pswrd_int_attn_desc, pswrd_prty_attn_desc, {
+		{1, 0, pswrd_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pswrd_int_bb_b0_regs, pswrd_prty_bb_b0_regs},
+		{1, 1, pswrd_int_k2_regs, pswrd_prty_k2_regs} } },
+	{"pswrd2", pswrd2_int_attn_desc, pswrd2_prty_attn_desc, {
+		{1, 2, pswrd2_int_bb_a0_regs, pswrd2_prty_bb_a0_regs},
+		{1, 3, pswrd2_int_bb_b0_regs, pswrd2_prty_bb_b0_regs},
+		{1, 3, pswrd2_int_k2_regs, pswrd2_prty_k2_regs} } },
+	{"pswwr", pswwr_int_attn_desc, pswwr_prty_attn_desc, {
+		{1, 0, pswwr_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pswwr_int_bb_b0_regs, pswwr_prty_bb_b0_regs},
+		{1, 1, pswwr_int_k2_regs, pswwr_prty_k2_regs} } },
+	{"pswwr2", pswwr2_int_attn_desc, pswwr2_prty_attn_desc, {
+		{1, 4, pswwr2_int_bb_a0_regs, pswwr2_prty_bb_a0_regs},
+		{1, 5, pswwr2_int_bb_b0_regs, pswwr2_prty_bb_b0_regs},
+		{1, 5, pswwr2_int_k2_regs, pswwr2_prty_k2_regs} } },
+	{"pswrq", pswrq_int_attn_desc, pswrq_prty_attn_desc, {
+		{1, 0, pswrq_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pswrq_int_bb_b0_regs, pswrq_prty_bb_b0_regs},
+		{1, 1, pswrq_int_k2_regs, pswrq_prty_k2_regs} } },
+	{"pswrq2", pswrq2_int_attn_desc, pswrq2_prty_attn_desc, {
+		{1, 1, pswrq2_int_bb_a0_regs, pswrq2_prty_bb_a0_regs},
+		{1, 1, pswrq2_int_bb_b0_regs, pswrq2_prty_bb_b0_regs},
+		{1, 1, pswrq2_int_k2_regs, pswrq2_prty_k2_regs} } },
+	{"pglcs", pglcs_int_attn_desc, OSAL_NULL, {
+		{1, 0, pglcs_int_bb_a0_regs, OSAL_NULL},
+		{1, 0, pglcs_int_bb_b0_regs, OSAL_NULL},
+		{1, 0, pglcs_int_k2_regs, OSAL_NULL} } },
+	{"dmae", dmae_int_attn_desc, dmae_prty_attn_desc, {
+		{1, 1, dmae_int_bb_a0_regs, dmae_prty_bb_a0_regs},
+		{1, 1, dmae_int_bb_b0_regs, dmae_prty_bb_b0_regs},
+		{1, 1, dmae_int_k2_regs, dmae_prty_k2_regs} } },
+	{"ptu", ptu_int_attn_desc, ptu_prty_attn_desc, {
+		{1, 1, ptu_int_bb_a0_regs, ptu_prty_bb_a0_regs},
+		{1, 1, ptu_int_bb_b0_regs, ptu_prty_bb_b0_regs},
+		{1, 1, ptu_int_k2_regs, ptu_prty_k2_regs} } },
+	{"tcm", tcm_int_attn_desc, tcm_prty_attn_desc, {
+		{3, 2, tcm_int_bb_a0_regs, tcm_prty_bb_a0_regs},
+		{3, 2, tcm_int_bb_b0_regs, tcm_prty_bb_b0_regs},
+		{3, 2, tcm_int_k2_regs, tcm_prty_k2_regs} } },
+	{"mcm", mcm_int_attn_desc, mcm_prty_attn_desc, {
+		{3, 2, mcm_int_bb_a0_regs, mcm_prty_bb_a0_regs},
+		{3, 2, mcm_int_bb_b0_regs, mcm_prty_bb_b0_regs},
+		{3, 2, mcm_int_k2_regs, mcm_prty_k2_regs} } },
+	{"ucm", ucm_int_attn_desc, ucm_prty_attn_desc, {
+		{3, 2, ucm_int_bb_a0_regs, ucm_prty_bb_a0_regs},
+		{3, 2, ucm_int_bb_b0_regs, ucm_prty_bb_b0_regs},
+		{3, 2, ucm_int_k2_regs, ucm_prty_k2_regs} } },
+	{"xcm", xcm_int_attn_desc, xcm_prty_attn_desc, {
+		{3, 2, xcm_int_bb_a0_regs, xcm_prty_bb_a0_regs},
+		{3, 2, xcm_int_bb_b0_regs, xcm_prty_bb_b0_regs},
+		{3, 2, xcm_int_k2_regs, xcm_prty_k2_regs} } },
+	{"ycm", ycm_int_attn_desc, ycm_prty_attn_desc, {
+		{3, 2, ycm_int_bb_a0_regs, ycm_prty_bb_a0_regs},
+		{3, 2, ycm_int_bb_b0_regs, ycm_prty_bb_b0_regs},
+		{3, 2, ycm_int_k2_regs, ycm_prty_k2_regs} } },
+	{"pcm", pcm_int_attn_desc, pcm_prty_attn_desc, {
+		{3, 1, pcm_int_bb_a0_regs, pcm_prty_bb_a0_regs},
+		{3, 1, pcm_int_bb_b0_regs, pcm_prty_bb_b0_regs},
+		{3, 1, pcm_int_k2_regs, pcm_prty_k2_regs} } },
+	{"qm", qm_int_attn_desc, qm_prty_attn_desc, {
+		{1, 4, qm_int_bb_a0_regs, qm_prty_bb_a0_regs},
+		{1, 4, qm_int_bb_b0_regs, qm_prty_bb_b0_regs},
+		{1, 4, qm_int_k2_regs, qm_prty_k2_regs} } },
+	{"tm", tm_int_attn_desc, tm_prty_attn_desc, {
+		{2, 1, tm_int_bb_a0_regs, tm_prty_bb_a0_regs},
+		{2, 1, tm_int_bb_b0_regs, tm_prty_bb_b0_regs},
+		{2, 1, tm_int_k2_regs, tm_prty_k2_regs} } },
+	{"dorq", dorq_int_attn_desc, dorq_prty_attn_desc, {
+		{1, 1, dorq_int_bb_a0_regs, dorq_prty_bb_a0_regs},
+		{1, 2, dorq_int_bb_b0_regs, dorq_prty_bb_b0_regs},
+		{1, 2, dorq_int_k2_regs, dorq_prty_k2_regs} } },
+	{"brb", brb_int_attn_desc, brb_prty_attn_desc, {
+		{12, 2, brb_int_bb_a0_regs, brb_prty_bb_a0_regs},
+		{12, 3, brb_int_bb_b0_regs, brb_prty_bb_b0_regs},
+		{12, 3, brb_int_k2_regs, brb_prty_k2_regs} } },
+	{"src", src_int_attn_desc, OSAL_NULL, {
+		{1, 0, src_int_bb_a0_regs, OSAL_NULL},
+		{1, 0, src_int_bb_b0_regs, OSAL_NULL},
+		{1, 0, src_int_k2_regs, OSAL_NULL} } },
+	{"prs", prs_int_attn_desc, prs_prty_attn_desc, {
+		{1, 3, prs_int_bb_a0_regs, prs_prty_bb_a0_regs},
+		{1, 3, prs_int_bb_b0_regs, prs_prty_bb_b0_regs},
+		{1, 3, prs_int_k2_regs, prs_prty_k2_regs} } },
+	{"tsdm", tsdm_int_attn_desc, tsdm_prty_attn_desc, {
+		{1, 1, tsdm_int_bb_a0_regs, tsdm_prty_bb_a0_regs},
+		{1, 1, tsdm_int_bb_b0_regs, tsdm_prty_bb_b0_regs},
+		{1, 1, tsdm_int_k2_regs, tsdm_prty_k2_regs} } },
+	{"msdm", msdm_int_attn_desc, msdm_prty_attn_desc, {
+		{1, 1, msdm_int_bb_a0_regs, msdm_prty_bb_a0_regs},
+		{1, 1, msdm_int_bb_b0_regs, msdm_prty_bb_b0_regs},
+		{1, 1, msdm_int_k2_regs, msdm_prty_k2_regs} } },
+	{"usdm", usdm_int_attn_desc, usdm_prty_attn_desc, {
+		{1, 1, usdm_int_bb_a0_regs, usdm_prty_bb_a0_regs},
+		{1, 1, usdm_int_bb_b0_regs, usdm_prty_bb_b0_regs},
+		{1, 1, usdm_int_k2_regs, usdm_prty_k2_regs} } },
+	{"xsdm", xsdm_int_attn_desc, xsdm_prty_attn_desc, {
+		{1, 1, xsdm_int_bb_a0_regs, xsdm_prty_bb_a0_regs},
+		{1, 1, xsdm_int_bb_b0_regs, xsdm_prty_bb_b0_regs},
+		{1, 1, xsdm_int_k2_regs, xsdm_prty_k2_regs} } },
+	{"ysdm", ysdm_int_attn_desc, ysdm_prty_attn_desc, {
+		{1, 1, ysdm_int_bb_a0_regs, ysdm_prty_bb_a0_regs},
+		{1, 1, ysdm_int_bb_b0_regs, ysdm_prty_bb_b0_regs},
+		{1, 1, ysdm_int_k2_regs, ysdm_prty_k2_regs} } },
+	{"psdm", psdm_int_attn_desc, psdm_prty_attn_desc, {
+		{1, 1, psdm_int_bb_a0_regs, psdm_prty_bb_a0_regs},
+		{1, 1, psdm_int_bb_b0_regs, psdm_prty_bb_b0_regs},
+		{1, 1, psdm_int_k2_regs, psdm_prty_k2_regs} } },
+	{"tsem", tsem_int_attn_desc, tsem_prty_attn_desc, {
+		{3, 3, tsem_int_bb_a0_regs, tsem_prty_bb_a0_regs},
+		{3, 3, tsem_int_bb_b0_regs, tsem_prty_bb_b0_regs},
+		{3, 4, tsem_int_k2_regs, tsem_prty_k2_regs} } },
+	{"msem", msem_int_attn_desc, msem_prty_attn_desc, {
+		{3, 2, msem_int_bb_a0_regs, msem_prty_bb_a0_regs},
+		{3, 2, msem_int_bb_b0_regs, msem_prty_bb_b0_regs},
+		{3, 3, msem_int_k2_regs, msem_prty_k2_regs} } },
+	{"usem", usem_int_attn_desc, usem_prty_attn_desc, {
+		{3, 2, usem_int_bb_a0_regs, usem_prty_bb_a0_regs},
+		{3, 2, usem_int_bb_b0_regs, usem_prty_bb_b0_regs},
+		{3, 3, usem_int_k2_regs, usem_prty_k2_regs} } },
+	{"xsem", xsem_int_attn_desc, xsem_prty_attn_desc, {
+		{3, 2, xsem_int_bb_a0_regs, xsem_prty_bb_a0_regs},
+		{3, 2, xsem_int_bb_b0_regs, xsem_prty_bb_b0_regs},
+		{3, 3, xsem_int_k2_regs, xsem_prty_k2_regs} } },
+	{"ysem", ysem_int_attn_desc, ysem_prty_attn_desc, {
+		{3, 2, ysem_int_bb_a0_regs, ysem_prty_bb_a0_regs},
+		{3, 2, ysem_int_bb_b0_regs, ysem_prty_bb_b0_regs},
+		{3, 3, ysem_int_k2_regs, ysem_prty_k2_regs} } },
+	{"psem", psem_int_attn_desc, psem_prty_attn_desc, {
+		{3, 3, psem_int_bb_a0_regs, psem_prty_bb_a0_regs},
+		{3, 3, psem_int_bb_b0_regs, psem_prty_bb_b0_regs},
+		{3, 4, psem_int_k2_regs, psem_prty_k2_regs} } },
+	{"rss", rss_int_attn_desc, rss_prty_attn_desc, {
+		{1, 1, rss_int_bb_a0_regs, rss_prty_bb_a0_regs},
+		{1, 1, rss_int_bb_b0_regs, rss_prty_bb_b0_regs},
+		{1, 1, rss_int_k2_regs, rss_prty_k2_regs} } },
+	{"tmld", tmld_int_attn_desc, tmld_prty_attn_desc, {
+		{1, 1, tmld_int_bb_a0_regs, tmld_prty_bb_a0_regs},
+		{1, 1, tmld_int_bb_b0_regs, tmld_prty_bb_b0_regs},
+		{1, 1, tmld_int_k2_regs, tmld_prty_k2_regs} } },
+	{"muld", muld_int_attn_desc, muld_prty_attn_desc, {
+		{1, 1, muld_int_bb_a0_regs, muld_prty_bb_a0_regs},
+		{1, 1, muld_int_bb_b0_regs, muld_prty_bb_b0_regs},
+		{1, 1, muld_int_k2_regs, muld_prty_k2_regs} } },
+	{"yuld", yuld_int_attn_desc, yuld_prty_attn_desc, {
+		{1, 1, yuld_int_bb_a0_regs, yuld_prty_bb_a0_regs},
+		{1, 1, yuld_int_bb_b0_regs, yuld_prty_bb_b0_regs},
+		{1, 1, yuld_int_k2_regs, yuld_prty_k2_regs} } },
+	{"xyld", xyld_int_attn_desc, xyld_prty_attn_desc, {
+		{1, 1, xyld_int_bb_a0_regs, xyld_prty_bb_a0_regs},
+		{1, 1, xyld_int_bb_b0_regs, xyld_prty_bb_b0_regs},
+		{1, 1, xyld_int_k2_regs, xyld_prty_k2_regs} } },
+	{"prm", prm_int_attn_desc, prm_prty_attn_desc, {
+		{1, 1, prm_int_bb_a0_regs, prm_prty_bb_a0_regs},
+		{1, 2, prm_int_bb_b0_regs, prm_prty_bb_b0_regs},
+		{1, 2, prm_int_k2_regs, prm_prty_k2_regs} } },
+	{"pbf_pb1", pbf_pb1_int_attn_desc, pbf_pb1_prty_attn_desc, {
+		{1, 0, pbf_pb1_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pbf_pb1_int_bb_b0_regs, pbf_pb1_prty_bb_b0_regs},
+		{1, 1, pbf_pb1_int_k2_regs, pbf_pb1_prty_k2_regs} } },
+	{"pbf_pb2", pbf_pb2_int_attn_desc, pbf_pb2_prty_attn_desc, {
+		{1, 0, pbf_pb2_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, pbf_pb2_int_bb_b0_regs, pbf_pb2_prty_bb_b0_regs},
+		{1, 1, pbf_pb2_int_k2_regs, pbf_pb2_prty_k2_regs} } },
+	{"rpb", rpb_int_attn_desc, rpb_prty_attn_desc, {
+		{1, 0, rpb_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, rpb_int_bb_b0_regs, rpb_prty_bb_b0_regs},
+		{1, 1, rpb_int_k2_regs, rpb_prty_k2_regs} } },
+	{"btb", btb_int_attn_desc, btb_prty_attn_desc, {
+		{11, 1, btb_int_bb_a0_regs, btb_prty_bb_a0_regs},
+		{11, 2, btb_int_bb_b0_regs, btb_prty_bb_b0_regs},
+		{11, 2, btb_int_k2_regs, btb_prty_k2_regs} } },
+	{"pbf", pbf_int_attn_desc, pbf_prty_attn_desc, {
+		{1, 2, pbf_int_bb_a0_regs, pbf_prty_bb_a0_regs},
+		{1, 3, pbf_int_bb_b0_regs, pbf_prty_bb_b0_regs},
+		{1, 3, pbf_int_k2_regs, pbf_prty_k2_regs} } },
+	{"rdif", rdif_int_attn_desc, rdif_prty_attn_desc, {
+		{1, 0, rdif_int_bb_a0_regs, OSAL_NULL},
+		{1, 1, rdif_int_bb_b0_regs, rdif_prty_bb_b0_regs},
+		{1, 1, rdif_int_k2_regs, rdif_prty_k2_regs} } },
+	{"tdif", tdif_int_attn_desc, tdif_prty_attn_desc, {
+		{1, 1, tdif_int_bb_a0_regs, tdif_prty_bb_a0_regs},
+		{1, 2, tdif_int_bb_b0_regs, tdif_prty_bb_b0_regs},
+		{1, 2, tdif_int_k2_regs, tdif_prty_k2_regs} } },
+	{"cdu", cdu_int_attn_desc, cdu_prty_attn_desc, {
+		{1, 1, cdu_int_bb_a0_regs, cdu_prty_bb_a0_regs},
+		{1, 1, cdu_int_bb_b0_regs, cdu_prty_bb_b0_regs},
+		{1, 1, cdu_int_k2_regs, cdu_prty_k2_regs} } },
+	{"ccfc", ccfc_int_attn_desc, ccfc_prty_attn_desc, {
+		{1, 2, ccfc_int_bb_a0_regs, ccfc_prty_bb_a0_regs},
+		{1, 2, ccfc_int_bb_b0_regs, ccfc_prty_bb_b0_regs},
+		{1, 2, ccfc_int_k2_regs, ccfc_prty_k2_regs} } },
+	{"tcfc", tcfc_int_attn_desc, tcfc_prty_attn_desc, {
+		{1, 2, tcfc_int_bb_a0_regs, tcfc_prty_bb_a0_regs},
+		{1, 2, tcfc_int_bb_b0_regs, tcfc_prty_bb_b0_regs},
+		{1, 2, tcfc_int_k2_regs, tcfc_prty_k2_regs} } },
+	{"igu", igu_int_attn_desc, igu_prty_attn_desc, {
+		{1, 3, igu_int_bb_a0_regs, igu_prty_bb_a0_regs},
+		{1, 3, igu_int_bb_b0_regs, igu_prty_bb_b0_regs},
+		{1, 2, igu_int_k2_regs, igu_prty_k2_regs} } },
+	{"cau", cau_int_attn_desc, cau_prty_attn_desc, {
+		{1, 1, cau_int_bb_a0_regs, cau_prty_bb_a0_regs},
+		{1, 1, cau_int_bb_b0_regs, cau_prty_bb_b0_regs},
+		{1, 1, cau_int_k2_regs, cau_prty_k2_regs} } },
+	{"umac", umac_int_attn_desc, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{1, 0, umac_int_k2_regs, OSAL_NULL} } },
+	{"xmac", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+	{"dbg", dbg_int_attn_desc, dbg_prty_attn_desc, {
+		{1, 1, dbg_int_bb_a0_regs, dbg_prty_bb_a0_regs},
+		{1, 1, dbg_int_bb_b0_regs, dbg_prty_bb_b0_regs},
+		{1, 1, dbg_int_k2_regs, dbg_prty_k2_regs} } },
+	{"nig", nig_int_attn_desc, nig_prty_attn_desc, {
+		{6, 4, nig_int_bb_a0_regs, nig_prty_bb_a0_regs},
+		{6, 5, nig_int_bb_b0_regs, nig_prty_bb_b0_regs},
+		{10, 5, nig_int_k2_regs, nig_prty_k2_regs} } },
+	{"wol", wol_int_attn_desc, wol_prty_attn_desc, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{1, 1, wol_int_k2_regs, wol_prty_k2_regs} } },
+	{"bmbn", bmbn_int_attn_desc, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{1, 0, bmbn_int_k2_regs, OSAL_NULL} } },
+	{"ipc", ipc_int_attn_desc, ipc_prty_attn_desc, {
+		{1, 1, ipc_int_bb_a0_regs, ipc_prty_bb_a0_regs},
+		{1, 1, ipc_int_bb_b0_regs, ipc_prty_bb_b0_regs},
+		{1, 0, ipc_int_k2_regs, OSAL_NULL} } },
+	{"nwm", nwm_int_attn_desc, nwm_prty_attn_desc, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{1, 3, nwm_int_k2_regs, nwm_prty_k2_regs} } },
+	{"nws", nws_int_attn_desc, nws_prty_attn_desc, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{4, 1, nws_int_k2_regs, nws_prty_k2_regs} } },
+	{"ms", ms_int_attn_desc, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{1, 0, ms_int_k2_regs, OSAL_NULL} } },
+	{"phy_pcie", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+	{"misc_aeu", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+	{"bar0_map", OSAL_NULL, OSAL_NULL, {
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL},
+		{0, 0, OSAL_NULL, OSAL_NULL} } },
+};
+
+#define NUM_INT_REGS 423
+#define NUM_PRTY_REGS 378
+
+#endif /* __PREVENT_INT_ATTN__ */
+
+#endif /* __ATTN_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f84266f..e68f60b 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -29,6 +29,7 @@
 #include "ecore_iro.h"
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
+#include "ecore_attn_values.h"
 
 /* Configurable */
 #define ECORE_MIN_DPIS		(4)	/* The minimal number of DPIs required
@@ -676,6 +677,37 @@ static void ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
 		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
 
+	/* initialize interrupt masks */
+	for (i = 0;
+	     i <
+	     attn_blocks[BLOCK_MISCS].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
+	     num_of_int_regs; i++)
+		ecore_wr(p_hwfn, p_ptt,
+			 attn_blocks[BLOCK_MISCS].
+			 chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[i]->
+			 mask_addr, 0);
+
+	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_wr(p_hwfn, p_ptt,
+			 attn_blocks[BLOCK_CNIG].
+			 chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
+			 mask_addr, 0);
+	ecore_wr(p_hwfn, p_ptt,
+		 attn_blocks[BLOCK_PGLCS].
+		 chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
+		 mask_addr, 0);
+	ecore_wr(p_hwfn, p_ptt,
+		 attn_blocks[BLOCK_CPMU].
+		 chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
+		 mask_addr, 0);
+	/* Currently A0 and B0 interrupt bits are the same in pglue_b;
+	 * If this changes, need to set this according to chip type. <14/09/23>
+	 */
+	ecore_wr(p_hwfn, p_ptt,
+		 attn_blocks[BLOCK_PGLUE_B].
+		 chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
+		 mask_addr, 0x80000);
+
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
 	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
@@ -1163,6 +1195,25 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 	/* Pure runtime initializations - directly to the HW  */
 	ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, true, true);
 
+#ifndef ASIC_ONLY
+	/*@@TMP - On B0 build 1, need to mask the datapath_registers parity */
+	if (CHIP_REV_IS_EMUL_B0(p_hwfn->p_dev) &&
+	    (p_hwfn->p_dev->chip_metal == 1)) {
+		u32 reg_addr, tmp;
+
+		reg_addr =
+		    attn_blocks[BLOCK_PGLUE_B].
+		    chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].prty_regs[0]->
+		    mask_addr;
+		DP_NOTICE(p_hwfn, false,
+			  "Masking datapath registers parity on"
+			  " B0 emulation [build 1]\n");
+		tmp = ecore_rd(p_hwfn, p_ptt, reg_addr);
+		tmp |= (1 << 0);	/* Was PRTY_MASK_DATAPATH_REGISTERS */
+		ecore_wr(p_hwfn, p_ptt, reg_addr, tmp);
+	}
+#endif
+
 	rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
 	if (rc)
 		return rc;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index f1cc538..15bcf67 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -21,6 +21,7 @@
 #include "ecore_hw_defs.h"
 #include "ecore_hsi_common.h"
 #include "ecore_mcp.h"
+#include "ecore_attn_values.h"
 
 struct ecore_pi_info {
 	ecore_int_comp_cb_t comp_cb;
@@ -74,8 +75,603 @@ struct aeu_invert_reg {
 	struct aeu_invert_reg_bit bits[32];
 };
 
+#define MAX_ATTN_GRPS		(8)
 #define NUM_ATTN_REGS		(9)
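+
+/* The AEU reports attentions via NUM_ATTN_REGS consecutive 32-bit
+ * "after invert" registers, and each of the MAX_ATTN_GRPS IGU output
+ * groups has its own set of NUM_ATTN_REGS enable registers - see the
+ * address arithmetic in ecore_int_deassertion() below.
+ */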
 
+static enum _ecore_status_t ecore_mcp_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_STATE);
+
+	DP_INFO(p_hwfn->p_dev, "MCP_REG_CPU_STATE: %08x - Masking...\n", tmp);
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_EVENT_MASK, 0xffffffff);
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK		(0x3c000)
+#define ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT	(14)
+#define ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK		(0x03fc0)
+#define ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT	(6)
+#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK	(0x00020)
+#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT	(5)
+#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK	(0x0001e)
+#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT	(1)
+#define ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK	(0x1)
+#define ECORE_PSWHST_ATTENTION_DISABLED_WRITE_SHIFT	(0)
+#define ECORE_PSWHST_ATTENTION_VF_DISABLED		(0x1)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS		(0x1)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK	(0x1)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT	(0)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK	(0x1e)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT	(1)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK	(0x20)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT	(5)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK	(0x3fc0)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT	(6)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK	(0x3c000)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT	(14)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK	(0x3fc0000)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT	(18)
+static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 tmp =
+	    ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		     PSWHST_REG_VF_DISABLED_ERROR_VALID);
+
+	/* Disabled VF access */
+	if (tmp & ECORE_PSWHST_ATTENTION_VF_DISABLED) {
+		u32 addr, data;
+
+		addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				PSWHST_REG_VF_DISABLED_ERROR_ADDRESS);
+		data = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				PSWHST_REG_VF_DISABLED_ERROR_DATA);
+		DP_INFO(p_hwfn->p_dev,
+			"PF[0x%02x] VF [0x%02x] [Valid 0x%02x] Client [0x%02x]"
+			" Write [0x%02x] Addr [0x%08x]\n",
+			(u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK)
+			     >> ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT),
+			(u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK)
+			     >> ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT),
+			(u8)((data &
+			      ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK) >>
+			      ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT),
+			(u8)((data &
+			      ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK) >>
+			      ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT),
+			(u8)((data &
+			      ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK) >>
+			      ECORE_PSWHST_ATTENTION_DISABLED_WRITE_SHIFT),
+			addr);
+	}
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PSWHST_REG_INCORRECT_ACCESS_VALID);
+	if (tmp & ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS) {
+		u32 addr, data, length;
+
+		addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				PSWHST_REG_INCORRECT_ACCESS_ADDRESS);
+		data = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				PSWHST_REG_INCORRECT_ACCESS_DATA);
+		length = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				  PSWHST_REG_INCORRECT_ACCESS_LENGTH);
+
+		DP_INFO(p_hwfn->p_dev,
+			"Incorrect access to %08x of length %08x - PF [%02x]"
+			" VF [%04x] [valid %02x] client [%02x] write [%02x]"
+			" Byte-Enable [%04x] [%08x]\n",
+			addr, length,
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT),
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT),
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT),
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT),
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT),
+			(u8)((data &
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK) >>
+		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT),
+			data);
+	}
+
+	/* TODO - We know 'some' of these are legal due to virtualization,
+	 * but is it true for all of them?
+	 */
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_GRC_ATTENTION_VALID_BIT		(1 << 0)
+#define ECORE_GRC_ATTENTION_ADDRESS_MASK	(0x7fffff << 0)
+#define ECORE_GRC_ATTENTION_RDWR_BIT		(1 << 23)
+#define ECORE_GRC_ATTENTION_MASTER_MASK		(0xf << 24)
+#define ECORE_GRC_ATTENTION_MASTER_SHIFT	(24)
+#define ECORE_GRC_ATTENTION_PF_MASK		(0xf)
+#define ECORE_GRC_ATTENTION_VF_MASK		(0xff << 4)
+#define ECORE_GRC_ATTENTION_VF_SHIFT		(4)
+#define ECORE_GRC_ATTENTION_PRIV_MASK		(0x3 << 14)
+#define ECORE_GRC_ATTENTION_PRIV_SHIFT		(14)
+#define ECORE_GRC_ATTENTION_PRIV_VF		(0)
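+/* As decoded below, DATA_0 carries the dword-aligned address and the
+ * read/write flag of the timed-out access; DATA_1 carries the PF/VF
+ * identity of the initiator.
+ */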
+static const char *grc_timeout_attn_master_to_str(u8 master)
+{
+	switch (master) {
+	case 1:
+		return "PXP";
+	case 2:
+		return "MCP";
+	case 3:
+		return "MSDM";
+	case 4:
+		return "PSDM";
+	case 5:
+		return "YSDM";
+	case 6:
+		return "USDM";
+	case 7:
+		return "TSDM";
+	case 8:
+		return "XSDM";
+	case 9:
+		return "DBU";
+	case 10:
+		return "DMAE";
+	default:
+		return "Unknown";
+	}
+}
+
+static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 tmp, tmp2;
+
+	/* We've already cleared the timeout interrupt register, so we learn
+	 * of interrupts via the validity register
+	 */
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       GRC_REG_TIMEOUT_ATTN_ACCESS_VALID);
+	if (!(tmp & ECORE_GRC_ATTENTION_VALID_BIT))
+		goto out;
+
+	/* Read the GRC timeout information */
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_0);
+	tmp2 = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+			GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1);
+
+	DP_INFO(p_hwfn->p_dev,
+		"GRC timeout [%08x:%08x] - %s Address [%08x] [Master %s]"
+		" [PF: %02x %s %02x]\n",
+		tmp2, tmp,
+		(tmp & ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to" : "Read from",
+		(tmp & ECORE_GRC_ATTENTION_ADDRESS_MASK) << 2,
+		grc_timeout_attn_master_to_str((tmp &
+					ECORE_GRC_ATTENTION_MASTER_MASK) >>
+				       ECORE_GRC_ATTENTION_MASTER_SHIFT),
+		(tmp2 & ECORE_GRC_ATTENTION_PF_MASK),
+		(((tmp2 & ECORE_GRC_ATTENTION_PRIV_MASK) >>
+		  ECORE_GRC_ATTENTION_PRIV_SHIFT) ==
+		 ECORE_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant:)",
+		(tmp2 & ECORE_GRC_ATTENTION_VF_MASK) >>
+		ECORE_GRC_ATTENTION_VF_SHIFT);
+
+out:
+	/* Regardless of anything else, clear the validity bit */
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt,
+		 GRC_REG_TIMEOUT_ATTN_ACCESS_VALID, 0);
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_PGLUE_ATTENTION_VALID (1 << 29)
+#define ECORE_PGLUE_ATTENTION_RD_VALID (1 << 26)
+#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK (0xf << 20)
+#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT (20)
+#define ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID (1 << 19)
+#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK (0xff << 24)
+#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT (24)
+#define ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR (1 << 21)
+#define ECORE_PGLUE_ATTENTION_DETAILS2_BME	(1 << 22)
+#define ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN (1 << 23)
+#define ECORE_PGLUE_ATTENTION_ICPL_VALID (1 << 23)
+#define ECORE_PGLUE_ATTENTION_ZLR_VALID (1 << 25)
+#define ECORE_PGLUE_ATTENTION_ILT_VALID (1 << 23)
+static enum _ecore_status_t ecore_pglub_rbc_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 tmp, reg_addr;
+
+	reg_addr =
+	    attn_blocks[BLOCK_PGLUE_B].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
+	    int_regs[0]->mask_addr;
+
+	/* Mask unnecessary attentions -@TBD move to MFW */
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
+	tmp |= (1 << 19);	/* Was PGL_PCIE_ATTN */
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PGLUE_B_REG_TX_ERR_WR_DETAILS2);
+	if (tmp & ECORE_PGLUE_ATTENTION_VALID) {
+		u32 addr_lo, addr_hi, details;
+
+		addr_lo = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_WR_ADD_31_0);
+		addr_hi = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_WR_ADD_63_32);
+		details = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_WR_DETAILS);
+
+		DP_INFO(p_hwfn,
+			"Illegal write by chip to [%08x:%08x] blocked."
+			" Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]"
+			" Details2 %08x [Was_error %02x BME deassert %02x"
+			" FID_enable deassert %02x]\n",
+			addr_hi, addr_lo, details,
+			(u8)((details &
+			      ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >>
+			     ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT),
+			(u8)((details &
+			      ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >>
+			     ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT),
+			(u8)((details & ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID)
+			     ? 1 : 0), tmp,
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? 1
+			     : 0),
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ? 1 :
+			     0),
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ? 1
+			     : 0));
+	}
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PGLUE_B_REG_TX_ERR_RD_DETAILS2);
+	if (tmp & ECORE_PGLUE_ATTENTION_RD_VALID) {
+		u32 addr_lo, addr_hi, details;
+
+		addr_lo = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_RD_ADD_31_0);
+		addr_hi = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_RD_ADD_63_32);
+		details = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_TX_ERR_RD_DETAILS);
+
+		DP_INFO(p_hwfn,
+			"Illegal read by chip from [%08x:%08x] blocked."
+			" Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]"
+			" Details2 %08x [Was_error %02x BME deassert %02x"
+			" FID_enable deassert %02x]\n",
+			addr_hi, addr_lo, details,
+			(u8)((details &
+			      ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >>
+			     ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT),
+			(u8)((details &
+			      ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >>
+			     ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT),
+			(u8)((details & ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID)
+			     ? 1 : 0), tmp,
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? 1
+			     : 0),
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ? 1 :
+			     0),
+			(u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ? 1
+			     : 0));
+	}
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL);
+	if (tmp & ECORE_PGLUE_ATTENTION_ICPL_VALID)
+		DP_INFO(p_hwfn, "ICPL error - %08x\n", tmp);
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS);
+	if (tmp & ECORE_PGLUE_ATTENTION_ZLR_VALID) {
+		u32 addr_hi, addr_lo;
+
+		addr_lo = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_MASTER_ZLR_ERR_ADD_31_0);
+		addr_hi = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_MASTER_ZLR_ERR_ADD_63_32);
+
+		DP_INFO(p_hwfn, "ZLR error - %08x [Address %08x:%08x]\n",
+			tmp, addr_hi, addr_lo);
+	}
+
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+		       PGLUE_B_REG_VF_ILT_ERR_DETAILS2);
+	if (tmp & ECORE_PGLUE_ATTENTION_ILT_VALID) {
+		u32 addr_hi, addr_lo, details;
+
+		addr_lo = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_VF_ILT_ERR_ADD_31_0);
+		addr_hi = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_VF_ILT_ERR_ADD_63_32);
+		details = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   PGLUE_B_REG_VF_ILT_ERR_DETAILS);
+
+		DP_INFO(p_hwfn,
+			"ILT error - Details %08x Details2 %08x"
+			" [Address %08x:%08x]\n",
+			details, tmp, addr_hi, addr_lo);
+	}
+
+	/* Clear the indications */
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt,
+		 PGLUE_B_REG_LATCHED_ERRORS_CLR, (1 << 2));
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_nig_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 tmp, reg_addr;
+
+	/* Mask unnecessary attentions -@TBD move to MFW */
+	reg_addr =
+	    attn_blocks[BLOCK_NIG].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
+	    int_regs[3]->mask_addr;
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
+	tmp |= (1 << 0);	/* Was 3_P0_TX_PAUSE_TOO_LONG_INT */
+	tmp |= NIG_REG_INT_MASK_3_P0_LB_TC1_PAUSE_TOO_LONG_INT;
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
+
+	reg_addr =
+	    attn_blocks[BLOCK_NIG].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
+	    int_regs[5]->mask_addr;
+	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
+	tmp |= (1 << 0);	/* Was 5_P1_TX_PAUSE_TOO_LONG_INT */
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
+
+	/* TODO - a bit risky to return success here; but the alternative is
+	 * to actually read the multitude of interrupt registers of the block.
+	 */
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
+{
+	DP_NOTICE(p_hwfn, false, "FW assertion!\n");
+
+	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FW_ASSERT);
+
+	return ECORE_INVAL;
+}
+
+static enum _ecore_status_t
+ecore_general_attention_35(struct ecore_hwfn *p_hwfn)
+{
+	DP_INFO(p_hwfn, "General attention 35!\n");
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_DORQ_ATTENTION_REASON_MASK (0xfffff)
+#define ECORE_DORQ_ATTENTION_OPAQUE_MASK (0xffff)
+#define ECORE_DORQ_ATTENTION_SIZE_MASK	 (0x7f)
+#define ECORE_DORQ_ATTENTION_SIZE_SHIFT	 (16)
+
+static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	u32 reason;
+
+	reason = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, DORQ_REG_DB_DROP_REASON) &
+	    ECORE_DORQ_ATTENTION_REASON_MASK;
+	if (reason) {
+		u32 details = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				       DORQ_REG_DB_DROP_DETAILS);
+
+		DP_INFO(p_hwfn->p_dev,
+			"DORQ db_drop: address 0x%08x Opaque FID 0x%04x"
+			" Size [bytes] 0x%08x Reason: 0x%08x\n",
+			ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				 DORQ_REG_DB_DROP_DETAILS_ADDRESS),
+			(u16)(details & ECORE_DORQ_ATTENTION_OPAQUE_MASK),
+			((details & ECORE_DORQ_ATTENTION_SIZE_MASK) >>
+			 ECORE_DORQ_ATTENTION_SIZE_SHIFT) * 4, reason);
+	}
+
+	return ECORE_INVAL;
+}
+
+static enum _ecore_status_t ecore_tm_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL_B0(p_hwfn->p_dev)) {
+		u32 val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				   TM_REG_INT_STS_1);
+
+		if (val & ~(TM_REG_INT_STS_1_PEND_TASK_SCAN |
+			    TM_REG_INT_STS_1_PEND_CONN_SCAN))
+			return ECORE_INVAL;
+
+		if (val & (TM_REG_INT_STS_1_PEND_TASK_SCAN |
+			   TM_REG_INT_STS_1_PEND_CONN_SCAN))
+			DP_INFO(p_hwfn,
+				"TM attention on emulation - most likely"
+				" results of clock-ratios\n");
+		val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1);
+		val |= TM_REG_INT_MASK_1_PEND_CONN_SCAN |
+		    TM_REG_INT_MASK_1_PEND_TASK_SCAN;
+		ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1, val);
+
+		return ECORE_SUCCESS;
+	}
+#endif
+
+	return ECORE_INVAL;
+}
+
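+/* Multi-bit AEU sources encode their bit count and starting offset in the
+ * flags word via ATTENTION_LENGTH_SHIFT / ATTENTION_OFFSET_SHIFT; a '%d'
+ * in a bit name stands for the index within such a range.
+ */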
+/* Note that aeu_invert_reg must be defined in the same order as the HW bits */
+static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
+	{
+	 {			/* After Invert 1 */
+	  {"GPIO0 function%d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 2 */
+	  {"PGLUE config_space", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"PGLUE misc_flr", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"PGLUE B RBC", ATTENTION_PAR_INT, ecore_pglub_rbc_attn_cb,
+	   BLOCK_PGLUE_B},
+	  {"PGLUE misc_mctp", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"Flash event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"SMB event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"Main Power", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"SW timers #%d",
+	   (8 << ATTENTION_LENGTH_SHIFT) | (1 << ATTENTION_OFFSET_SHIFT),
+	   OSAL_NULL, MAX_BLOCK_ID},
+	  {"PCIE glue/PXP VPD %d", (16 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   BLOCK_PGLCS},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 3 */
+	  {"General Attention %d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 4 */
+	  {"General Attention 32", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
+	   ecore_fw_assertion, MAX_BLOCK_ID},
+	  {"General Attention %d",
+	   (2 << ATTENTION_LENGTH_SHIFT) | (33 << ATTENTION_OFFSET_SHIFT),
+	   OSAL_NULL, MAX_BLOCK_ID},
+	  {"General Attention 35", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
+	   ecore_general_attention_35, MAX_BLOCK_ID},
+	  {"CNIG port %d", (4 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   BLOCK_CNIG},
+	  {"MCP CPU", ATTENTION_SINGLE, ecore_mcp_attn_cb, MAX_BLOCK_ID},
+	  {"MCP Watchdog timer", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"MCP M2P", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+	  {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+	  {"Reserved %d", (6 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  {"NIG", ATTENTION_PAR_INT, ecore_nig_attn_cb, BLOCK_NIG},
+	  {"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
+	  {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
+	  {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
+	  {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 5 */
+	  {"SRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_SRC},
+	  {"PB Client1", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB1},
+	  {"PB Client2", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB2},
+	  {"RPB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RPB},
+	  {"PBF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF},
+	  {"QM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_QM},
+	  {"TM", ATTENTION_PAR_INT, ecore_tm_attn_cb, BLOCK_TM},
+	  {"MCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MCM},
+	  {"MSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSDM},
+	  {"MSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSEM},
+	  {"PCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PCM},
+	  {"PSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSDM},
+	  {"PSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSEM},
+	  {"TCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCM},
+	  {"TSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSDM},
+	  {"TSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSEM},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 6 */
+	  {"UCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_UCM},
+	  {"USDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USDM},
+	  {"USEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USEM},
+	  {"XCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XCM},
+	  {"XSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSDM},
+	  {"XSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSEM},
+	  {"YCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YCM},
+	  {"YSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSDM},
+	  {"YSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSEM},
+	  {"XYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XYLD},
+	  {"TMLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TMLD},
+	  {"MULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MULD},
+	  {"YULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YULD},
+	  {"DORQ", ATTENTION_PAR_INT, ecore_dorq_attn_cb, BLOCK_DORQ},
+	  {"DBG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DBG},
+	  {"IPC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IPC},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 7 */
+	  {"CCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CCFC},
+	  {"CDU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CDU},
+	  {"DMAE", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DMAE},
+	  {"IGU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IGU},
+	  {"ATC", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+	  {"CAU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CAU},
+	  {"PTU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTU},
+	  {"PRM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRM},
+	  {"TCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCFC},
+	  {"RDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RDIF},
+	  {"TDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TDIF},
+	  {"RSS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RSS},
+	  {"MISC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISC},
+	  {"MISCS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISCS},
+	  {"PCIE", ATTENTION_PAR, OSAL_NULL, BLOCK_PCIE},
+	  {"Vaux PCI core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PSWRQ", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 8 */
+	  {"PSWRQ (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ2},
+	  {"PSWWR", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR},
+	  {"PSWWR (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR2},
+	  {"PSWRD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD},
+	  {"PSWRD (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD2},
+	  {"PSWHST", ATTENTION_PAR_INT, ecore_pswhst_attn_cb, BLOCK_PSWHST},
+	  {"PSWHST (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWHST2},
+	  {"GRC", ATTENTION_PAR_INT, ecore_grc_attn_cb, BLOCK_GRC},
+	  {"CPMU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CPMU},
+	  {"NCSI", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NCSI},
+	  {"MSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"PSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"TSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"USEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"XSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"YSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"pxp_misc_mps", ATTENTION_PAR, OSAL_NULL, BLOCK_PGLCS},
+	  {"PCIE glue/PXP Exp. ROM", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PERST_B assertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"PERST_B deassertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+	  {"Reserved %d", (2 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  }
+	 },
+
+	{
+	 {			/* After Invert 9 */
+	  {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  {"MCP Latched ump_tx", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"MCP Latched scratchpad", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	  {"Reserved %d", (28 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	   MAX_BLOCK_ID},
+	  }
+	 },
+
+};
+
 #define ATTN_STATE_BITS		(0xfff)
 #define ATTN_BITS_MASKABLE	(0x3ff)
 struct ecore_sb_attn_info {
@@ -117,6 +713,436 @@ static u16 ecore_attn_update_idx(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+/**
+ * @brief ecore_int_assertion - handles asserted attention bits
+ *
+ * @param p_hwfn
+ * @param asserted_bits newly asserted bits
+ * @return enum _ecore_status_t
+ */
+static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn,
+						u16 asserted_bits)
+{
+	struct ecore_sb_attn_info *sb_attn_sw = p_hwfn->p_sb_attn;
+	u32 igu_mask;
+
+	/* Mask the source of the attention in the IGU */
+	igu_mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+			    IGU_REG_ATTENTION_ENABLE);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "IGU mask: 0x%08x --> 0x%08x\n",
+		   igu_mask, igu_mask & ~(asserted_bits & ATTN_BITS_MASKABLE));
+	igu_mask &= ~(asserted_bits & ATTN_BITS_MASKABLE);
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, IGU_REG_ATTENTION_ENABLE, igu_mask);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+		   "inner known ATTN state: 0x%04x --> 0x%04x\n",
+		   sb_attn_sw->known_attn,
+		   sb_attn_sw->known_attn | asserted_bits);
+	sb_attn_sw->known_attn |= asserted_bits;
+
+	/* Handle MCP events */
+	if (asserted_bits & 0x100) {
+		ecore_mcp_handle_events(p_hwfn, p_hwfn->p_dpc_ptt);
+		/* Clean the MCP attention */
+		ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt,
+			 sb_attn_sw->mfw_attn_addr, 0);
+	}
+
+	/* FIXME - this will change once we'll have GOOD gtt definitions */
+	DIRECT_REG_WR(p_hwfn,
+		      (u8 OSAL_IOMEM *) p_hwfn->regview +
+		      GTT_BAR0_MAP_REG_IGU_CMD +
+		      ((IGU_CMD_ATTN_BIT_SET_UPPER -
+			IGU_CMD_INT_ACK_BASE) << 3), (u32)asserted_bits);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "set cmd IGU: 0x%04x\n",
+		   asserted_bits);
+
+	return ECORE_SUCCESS;
+}
+
+static void ecore_int_deassertion_print_bit(struct ecore_hwfn *p_hwfn,
+					    struct attn_hw_reg *p_reg_desc,
+					    struct attn_hw_block *p_block,
+					    enum ecore_attention_type type,
+					    u32 val, u32 mask)
+{
+	int j;
+#ifdef ATTN_DESC
+	const char **description;
+
+	if (type == ECORE_ATTN_TYPE_ATTN)
+		description = p_block->int_desc;
+	else
+		description = p_block->prty_desc;
+#endif
+
+	for (j = 0; j < p_reg_desc->num_of_bits; j++) {
+		if (val & (1 << j)) {
+#ifdef ATTN_DESC
+			DP_NOTICE(p_hwfn, false,
+				  "%s (%s): %s [reg %d [0x%08x], bit %d]%s\n",
+				  p_block->name,
+				  type == ECORE_ATTN_TYPE_ATTN ? "Interrupt" :
+				  "Parity",
+				  description[p_reg_desc->bit_attn_idx[j]],
+				  p_reg_desc->reg_idx,
+				  p_reg_desc->sts_addr, j,
+				  (mask & (1 << j)) ? " [MASKED]" : "");
+#else
+			DP_NOTICE(p_hwfn->p_dev, false,
+				  "%s (%s): [reg %d [0x%08x], bit %d]%s\n",
+				  p_block->name,
+				  type == ECORE_ATTN_TYPE_ATTN ? "Interrupt" :
+				  "Parity",
+				  p_reg_desc->reg_idx,
+				  p_reg_desc->sts_addr, j,
+				  (mask & (1 << j)) ? " [MASKED]" : "");
+#endif
+		}
+	}
+}
+
+/**
+ * @brief ecore_int_deassertion_aeu_bit - handles the effects of a single
+ * cause of the attention
+ *
+ * @param p_hwfn
+ * @param p_aeu - descriptor of an AEU bit which caused the attention
+ * @param aeu_en_reg - register offset of the AEU enable reg. which configured
+ *  this bit to this group.
+ * @param bitmask - mask of this source's bits within aeu_en_reg
+ *
+ * @return enum _ecore_status_t
+ */
+static enum _ecore_status_t
+ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
+			      struct aeu_invert_reg_bit *p_aeu,
+			      u32 aeu_en_reg, u32 bitmask)
+{
+	enum _ecore_status_t rc = ECORE_INVAL;
+	u32 val, mask;
+
+#ifndef REMOVE_DBG
+	u32 interrupts[20];	/* TODO- change into HSI define once supplied */
+
+	OSAL_MEMSET(interrupts, 0, sizeof(u32) * 20);	/* FIXME: real size */
+#endif
+
+	DP_INFO(p_hwfn, "Deasserted attention `%s'[%08x]\n",
+		p_aeu->bit_name, bitmask);
+
+	/* Call callback before clearing the interrupt status */
+	if (p_aeu->cb) {
+		DP_INFO(p_hwfn, "`%s (attention)': Calling Callback function\n",
+			p_aeu->bit_name);
+		rc = p_aeu->cb(p_hwfn);
+	}
+
+	/* Handle HW block interrupt registers */
+	if (p_aeu->block_index != MAX_BLOCK_ID) {
+		u16 chip_type = ECORE_GET_TYPE(p_hwfn->p_dev);
+		struct attn_hw_block *p_block;
+		int i;
+
+		p_block = &attn_blocks[p_aeu->block_index];
+
+		/* Handle each interrupt register */
+		for (i = 0;
+		     i < p_block->chip_regs[chip_type].num_of_int_regs; i++) {
+			struct attn_hw_reg *p_reg_desc;
+			u32 sts_addr;
+
+			p_reg_desc = p_block->chip_regs[chip_type].int_regs[i];
+
+			/* In case of fatal attention, don't clear the status
+			 * so it would appear in idle check.
+			 */
+			if (rc == ECORE_SUCCESS)
+				sts_addr = p_reg_desc->sts_clr_addr;
+			else
+				sts_addr = p_reg_desc->sts_addr;
+
+			val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, sts_addr);
+			mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+					p_reg_desc->mask_addr);
+			ecore_int_deassertion_print_bit(p_hwfn, p_reg_desc,
+							p_block,
+							ECORE_ATTN_TYPE_ATTN,
+							val, mask);
+
+#ifndef REMOVE_DBG
+			interrupts[i] = val;
+#endif
+		}
+	}
+
+	/* Reach assertion if attention is fatal */
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
+			  p_aeu->bit_name);
+
+		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
+	}
+
+	/* Prevent this Attention from being asserted in the future */
+	if (p_aeu->flags & ATTENTION_CLEAR_ENABLE) {
+		u32 val, mask = ~bitmask;
+
+		val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg);
+		ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg, (val & mask));
+		DP_INFO(p_hwfn, "`%s' - Disabled future attentions\n",
+			p_aeu->bit_name);
+	}
+
+	if (p_aeu->flags & (ATTENTION_FW_DUMP | ATTENTION_PANIC_DUMP)) {
+		/* @@@TODO - what to dump? <yuvalmin 04/02/13> */
+		DP_ERR(p_hwfn->p_dev, "`%s' - Dumps aren't implemented yet\n",
+		       p_aeu->bit_name);
+		return ECORE_NOTIMPL;
+	}
+
+	return rc;
+}
+
+static void ecore_int_parity_print(struct ecore_hwfn *p_hwfn,
+				   struct aeu_invert_reg_bit *p_aeu,
+				   struct attn_hw_block *p_block, u8 bit_index)
+{
+	u16 chip_type = ECORE_GET_TYPE(p_hwfn->p_dev);
+	int i;
+
+	for (i = 0; i < p_block->chip_regs[chip_type].num_of_prty_regs; i++) {
+		struct attn_hw_reg *p_reg_desc;
+		u32 val, mask;
+
+		p_reg_desc = p_block->chip_regs[chip_type].prty_regs[i];
+
+		val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+			       p_reg_desc->sts_clr_addr);
+		mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				p_reg_desc->mask_addr);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+			   "%s[%d] - parity register[%d] is %08x [mask is %08x]\n",
+			   p_aeu->bit_name, bit_index, i, val, mask);
+		ecore_int_deassertion_print_bit(p_hwfn, p_reg_desc,
+						p_block,
+						ECORE_ATTN_TYPE_PARITY,
+						val, mask);
+	}
+}
+
+/**
+ * @brief ecore_int_deassertion_parity - handle a single parity AEU source
+ *
+ * @param p_hwfn
+ * @param p_aeu - descriptor of an AEU bit which caused the
+ *              parity
+ * @param bit_index
+ */
+static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
+					 struct aeu_invert_reg_bit *p_aeu,
+					 u8 bit_index)
+{
+	u32 block_id = p_aeu->block_index;
+
+	DP_INFO(p_hwfn->p_dev, "%s[%d] parity attention is set\n",
+		p_aeu->bit_name, bit_index);
+
+	if (block_id != MAX_BLOCK_ID) {
+		ecore_int_parity_print(p_hwfn, p_aeu, &attn_blocks[block_id],
+				       bit_index);
+
+		/* In A0, there's a single parity bit for several blocks */
+		if (block_id == BLOCK_BTB) {
+			ecore_int_parity_print(p_hwfn, p_aeu,
+					       &attn_blocks[BLOCK_OPTE],
+					       bit_index);
+			ecore_int_parity_print(p_hwfn, p_aeu,
+					       &attn_blocks[BLOCK_MCP],
+					       bit_index);
+		}
+	}
+}
+
+/**
+ * @brief - handles deassertion of previously asserted attentions.
+ *
+ * @param p_hwfn
+ * @param deasserted_bits - newly deasserted bits
+ * @return enum _ecore_status_t
+ *
+ */
+static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
+						  u16 deasserted_bits)
+{
+	struct ecore_sb_attn_info *sb_attn_sw = p_hwfn->p_sb_attn;
+	u32 aeu_inv_arr[NUM_ATTN_REGS], aeu_mask;
+	bool b_parity = false;
+	u8 i, j, k, bit_idx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	/* Read the attention registers in the AEU */
+	for (i = 0; i < NUM_ATTN_REGS; i++) {
+		aeu_inv_arr[i] = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+					  MISC_REG_AEU_AFTER_INVERT_1_IGU +
+					  i * 0x4);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+			   "Deasserted bits [%d]: %08x\n", i, aeu_inv_arr[i]);
+	}
+
+	/* Handle parity attentions first */
+	for (i = 0; i < NUM_ATTN_REGS; i++) {
+		struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i];
+		u32 en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+				  MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
+				  i * sizeof(u32));
+
+		u32 parities = sb_attn_sw->parity_mask[i] & aeu_inv_arr[i] & en;
+
+		/* Skip register in which no parity bit is currently set */
+		if (!parities)
+			continue;
+
+		for (j = 0, bit_idx = 0; bit_idx < 32; j++) {
+			struct aeu_invert_reg_bit *p_bit = &p_aeu->bits[j];
+
+			if ((p_bit->flags & ATTENTION_PARITY) &&
+			    !!(parities & (1 << bit_idx))) {
+				ecore_int_deassertion_parity(p_hwfn, p_bit,
+							     bit_idx);
+				b_parity = true;
+			}
+
+			bit_idx += ATTENTION_LENGTH(p_bit->flags);
+		}
+	}
+
+	/* Find non-parity cause for attention and act */
+	for (k = 0; k < MAX_ATTN_GRPS; k++) {
+		struct aeu_invert_reg_bit *p_aeu;
+
+		/* Handle only groups whose attention is currently deasserted */
+		if (!(deasserted_bits & (1 << k)))
+			continue;
+
+		for (i = 0; i < NUM_ATTN_REGS; i++) {
+			u32 aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
+			    i * sizeof(u32) + k * sizeof(u32) * NUM_ATTN_REGS;
+			u32 en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
+			u32 bits = aeu_inv_arr[i] & en;
+
+			/* Skip if no bit from this group is currently set */
+			if (!bits)
+				continue;
+
+			/* Find all set bits from current register which belong
+			 * to current group, making them responsible for the
+			 * previous assertion.
+			 */
+			for (j = 0, bit_idx = 0; bit_idx < 32; j++) {
+				u8 bit, bit_len;
+				u32 bitmask;
+
+				p_aeu = &sb_attn_sw->p_aeu_desc[i].bits[j];
+
+				/* No need to handle attention-only bits */
+				if (p_aeu->flags == ATTENTION_PAR)
+					continue;
+
+				bit = bit_idx;
+				bit_len = ATTENTION_LENGTH(p_aeu->flags);
+				if (p_aeu->flags & ATTENTION_PAR_INT) {
+					/* Skip Parity */
+					bit++;
+					bit_len--;
+				}
+
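+				/* Mask of this source's non-parity bits
+				 * within the current 32-bit register.
+				 */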
+				bitmask = bits & (((1 << bit_len) - 1) << bit);
+				if (bitmask) {
+					/* Handle source of the attention */
+					ecore_int_deassertion_aeu_bit(p_hwfn,
+								      p_aeu,
+								      aeu_en,
+								      bitmask);
+				}
+
+				bit_idx += ATTENTION_LENGTH(p_aeu->flags);
+			}
+		}
+	}
+
+	/* Clear IGU indication for the deasserted bits */
+	/* FIXME - this will change once we'll have GOOD gtt definitions */
+	DIRECT_REG_WR(p_hwfn,
+		      (u8 OSAL_IOMEM *) p_hwfn->regview +
+		      GTT_BAR0_MAP_REG_IGU_CMD +
+		      ((IGU_CMD_ATTN_BIT_CLR_UPPER -
+			IGU_CMD_INT_ACK_BASE) << 3), ~((u32)deasserted_bits));
+
+	/* Unmask deasserted attentions in IGU */
+	aeu_mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
+			    IGU_REG_ATTENTION_ENABLE);
+	aeu_mask |= (deasserted_bits & ATTN_BITS_MASKABLE);
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, IGU_REG_ATTENTION_ENABLE, aeu_mask);
+
+	/* Clear deassertion from inner state */
+	sb_attn_sw->known_attn &= ~deasserted_bits;
+
+	return rc;
+}
+
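+/* For illustration of the field walk above - a hedged, hypothetical
+ * example: a 2-bit field at bit_idx 4 flagged ATTENTION_PAR_INT skips
+ * its parity bit (bit becomes 5, bit_len 1), so
+ * bitmask = bits & (0x1 << 5) isolates the interrupt cause alone.
+ */
+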
+static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_sb_attn_info *p_sb_attn_sw = p_hwfn->p_sb_attn;
+	struct atten_status_block *p_sb_attn = p_sb_attn_sw->sb_attn;
+	u16 index = 0, asserted_bits, deasserted_bits;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u32 attn_bits = 0, attn_acks = 0;
+
+	/* Read current attention bits/acks - safeguard against attentions
+	 * by guaranteeing work on a synchronized timeframe
+	 */
+	do {
+		index = OSAL_LE16_TO_CPU(p_sb_attn->sb_index);
+		attn_bits = OSAL_LE32_TO_CPU(p_sb_attn->atten_bits);
+		attn_acks = OSAL_LE32_TO_CPU(p_sb_attn->atten_ack);
+	} while (index != OSAL_LE16_TO_CPU(p_sb_attn->sb_index));
+	p_sb_attn->sb_index = index;
+
+	/* Assertion and deassertion are meaningful (and in correct state)
+	 * only when they differ and are consistent with the known state -
+	 * deassertion when there was a previous attention & current ack,
+	 * and assertion when there's a current attention with no previous
+	 * attention
+	 */
+	asserted_bits = (attn_bits & ~attn_acks & ATTN_STATE_BITS) &
+	    ~p_sb_attn_sw->known_attn;
+	deasserted_bits = (~attn_bits & attn_acks & ATTN_STATE_BITS) &
+	    p_sb_attn_sw->known_attn;
+
+	if ((asserted_bits & ~0x100) || (deasserted_bits & ~0x100))
+		DP_INFO(p_hwfn,
+			"Attention: Index: 0x%04x, Bits: 0x%08x, Acks: 0x%08x, asserted: 0x%04x, De-asserted 0x%04x [Prev. known: 0x%04x]\n",
+			index, attn_bits, attn_acks, asserted_bits,
+			deasserted_bits, p_sb_attn_sw->known_attn);
+	else if (asserted_bits == 0x100)
+		DP_INFO(p_hwfn, "MFW indication via attention\n");
+	else
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+			   "MFW indication [deassertion]\n");
+
+	if (asserted_bits) {
+		rc = ecore_int_assertion(p_hwfn, asserted_bits);
+		if (rc)
+			return rc;
+	}
+
+	if (deasserted_bits)
+		rc = ecore_int_deassertion(p_hwfn, deasserted_bits);
+
+	return rc;
+}
+
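+/* The assertion/deassertion split above is plain bit arithmetic; a
+ * hedged numeric example (all values hypothetical, ATTN_STATE_BITS
+ * assumed to cover the shown bits): with attn_bits = 0x0005,
+ * attn_acks = 0x0104 and known_attn = 0x0104,
+ * asserted_bits = 0x0005 & ~0x0104 & ~0x0104 = 0x0001 (bit 0 newly
+ * asserted), while deasserted_bits = ~0x0005 & 0x0104 & 0x0104 = 0x0100
+ * (bit 8 acked and gone); bit 2 stays asserted since attn_bits still
+ * has it set.
+ */
+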
 static void ecore_sb_ack_attn(struct ecore_hwfn *p_hwfn,
 			      void OSAL_IOMEM *igu_addr, u32 ack_cons)
 {
@@ -218,6 +1244,9 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 		return;
 	}
 
+	if (rc & ECORE_SB_ATT_IDX)
+		ecore_int_attentions(p_hwfn);
+
 	if (rc & ECORE_SB_IDX) {
 		int pi;
 
@@ -258,6 +1287,93 @@ static void ecore_int_sb_attn_free(struct ecore_hwfn *p_hwfn)
 	OSAL_FREE(p_hwfn->p_dev, p_sb);
 }
 
+static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt)
+{
+	struct ecore_sb_attn_info *sb_info = p_hwfn->p_sb_attn;
+
+	OSAL_MEMSET(sb_info->sb_attn, 0, sizeof(*sb_info->sb_attn));
+
+	sb_info->index = 0;
+	sb_info->known_attn = 0;
+
+	/* Configure Attention Status Block in IGU */
+	ecore_wr(p_hwfn, p_ptt, IGU_REG_ATTN_MSG_ADDR_L,
+		 DMA_LO(p_hwfn->p_sb_attn->sb_phys));
+	ecore_wr(p_hwfn, p_ptt, IGU_REG_ATTN_MSG_ADDR_H,
+		 DMA_HI(p_hwfn->p_sb_attn->sb_phys));
+}
+
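+/* DMA_LO/DMA_HI are assumed here to take the low and high 32 bits of
+ * the 64-bit sb_phys address - e.g. (hypothetical) for
+ * sb_phys = 0x123456000ULL the IGU gets ADDR_L = 0x23456000 and
+ * ADDR_H = 0x1.
+ */
+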
+static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt,
+				   void *sb_virt_addr, dma_addr_t sb_phy_addr)
+{
+	struct ecore_sb_attn_info *sb_info = p_hwfn->p_sb_attn;
+	int i, j, k;
+
+	sb_info->sb_attn = sb_virt_addr;
+	sb_info->sb_phys = sb_phy_addr;
+
+	/* Set the pointer to the AEU descriptors */
+	sb_info->p_aeu_desc = aeu_descs;
+
+	/* Calculate Parity Masks */
+	OSAL_MEMSET(sb_info->parity_mask, 0, sizeof(u32) * NUM_ATTN_REGS);
+	for (i = 0; i < NUM_ATTN_REGS; i++) {
+		/* j is array index, k is bit index */
+		for (j = 0, k = 0; k < 32; j++) {
+			unsigned int flags = aeu_descs[i].bits[j].flags;
+
+			if (flags & ATTENTION_PARITY)
+				sb_info->parity_mask[i] |= 1 << k;
+
+			k += ATTENTION_LENGTH(flags);
+		}
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+			   "Attn Mask [Reg %d]: 0x%08x\n",
+			   i, sb_info->parity_mask[i]);
+	}
+
+	/* Set the address of cleanup for the mcp attention */
+	sb_info->mfw_attn_addr = (p_hwfn->rel_pf_id << 3) +
+	    MISC_REG_AEU_GENERAL_ATTN_0;
+
+	ecore_int_sb_attn_setup(p_hwfn, p_ptt);
+}
+
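+/* A hedged sketch of the parity-mask walk above, with made-up
+ * descriptors: a 1-bit parity field, a 3-bit attention-only field and
+ * another 1-bit parity field visit k = 0, 1, 4 in turn, producing
+ * parity_mask = (1 << 0) | (1 << 4) = 0x11.
+ */
+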
+static enum _ecore_status_t ecore_int_sb_attn_alloc(struct ecore_hwfn *p_hwfn,
+						    struct ecore_ptt *p_ptt)
+{
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	struct ecore_sb_attn_info *p_sb;
+	dma_addr_t p_phys = 0;
+	void *p_virt;
+
+	/* SB struct */
+	p_sb = OSAL_ALLOC(p_dev, GFP_KERNEL, sizeof(struct ecore_sb_attn_info));
+	if (!p_sb) {
+		DP_NOTICE(p_dev, true,
+			  "Failed to allocate `struct ecore_sb_attn_info'");
+		return ECORE_NOMEM;
+	}
+
+	/* SB ring */
+	p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys,
+					 SB_ATTN_ALIGNED_SIZE(p_hwfn));
+	if (!p_virt) {
+		DP_NOTICE(p_dev, true,
+			  "Failed to allocate status block (attentions)");
+		OSAL_FREE(p_dev, p_sb);
+		return ECORE_NOMEM;
+	}
+
+	/* Attention setup */
+	p_hwfn->p_sb_attn = p_sb;
+	ecore_int_sb_attn_init(p_hwfn, p_ptt, p_virt, p_phys);
+
+	return ECORE_SUCCESS;
+}
+
 /* coalescing timeout = timeset << (timer_res + 1) */
 #ifdef RTE_LIBRTE_QEDE_RX_COAL_US
 #define ECORE_CAU_DEF_RX_USECS RTE_LIBRTE_QEDE_RX_COAL_US
@@ -678,6 +1794,16 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	tmp &= ~0x800;
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE4_IGU_OUT_0, tmp);
 
+	/* @@@tmp - Mask interrupt sources - should move to init tool;
+	 * Also, correct for A0 (might still change in B0).
+	 */
+	reg_addr =
+	    attn_blocks[BLOCK_BRB].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
+	    int_regs[0]->mask_addr;
+	tmp = ecore_rd(p_hwfn, p_ptt, reg_addr);
+	tmp |= (1 << 21);	/* Was PKT4_LEN_ERROR */
+	ecore_wr(p_hwfn, p_ptt, reg_addr, tmp);
+
 	ecore_int_igu_enable_attn(p_hwfn, p_ptt);
 
 	if ((int_mode != ECORE_INT_MODE_INTA) || IS_LEAD_HWFN(p_hwfn)) {
@@ -1035,6 +2161,10 @@ enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
 		return rc;
 	}
 
+	rc = ecore_int_sb_attn_alloc(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn->p_dev, "Failed to allocate sb attn mem\n");
+
 	return rc;
 }
 
@@ -1051,6 +2181,7 @@ void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		return;
 
 	ecore_int_sb_setup(p_hwfn, p_ptt, &p_hwfn->p_sp_sb->sb_info);
+	ecore_int_sb_attn_setup(p_hwfn, p_ptt);
 	ecore_int_sp_dpc_setup(p_hwfn);
 }
 
-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 09/10] qede: Add DCBX support
  2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
                   ` (6 preceding siblings ...)
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 08/10] qede: Add attention support Rasesh Mody
@ 2016-03-10 13:45 ` Rasesh Mody
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 10/10] qede: enable PMD build Rasesh Mody
  8 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 13:45 UTC (permalink / raw)
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 drivers/net/qede/Makefile                 |    1 +
 drivers/net/qede/base/bcm_osal.h          |  100 ++--
 drivers/net/qede/base/ecore.h             |    2 +
 drivers/net/qede/base/ecore_dcbx.c        |  887 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_dcbx.h        |   55 ++
 drivers/net/qede/base/ecore_dcbx_api.h    |  160 ++++++
 drivers/net/qede/base/ecore_dev.c         |   27 +
 drivers/net/qede/base/ecore_l2.c          |    3 -
 drivers/net/qede/base/ecore_mcp.c         |   16 +
 drivers/net/qede/base/ecore_sp_commands.c |    4 +
 drivers/net/qede/base/mcp_public.h        |  200 +++++++
 drivers/net/qede/base/nvm_cfg.h           |    6 +
 drivers/net/qede/qede_main.c              |    6 +-
 drivers/net/qede/qede_rxtx.c              |    2 +-
 14 files changed, 1406 insertions(+), 63 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_dcbx.c
 create mode 100644 drivers/net/qede/base/ecore_dcbx.h
 create mode 100644 drivers/net/qede/base/ecore_dcbx_api.h

diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 8970921..cb59bbe 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -77,6 +77,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_spq.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_init_ops.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_mcp.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_int.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_dcbx.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/bcm_osal.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_sriov.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += base/ecore_vf.c
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4d81101..26221e5 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -22,6 +22,8 @@
 /* Forward declaration */
 struct ecore_dev;
 struct ecore_hwfn;
+struct ecore_vf_acquire_sw_info;
+struct vf_pf_resc_request;
 
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 #undef __BIG_ENDIAN
@@ -270,60 +272,43 @@ typedef struct osal_list_t {
 
 /* Barriers */
 
-#define OSAL_MMIOWB(dev) rte_wmb()	/* No user space equivalent */
-#define OSAL_BARRIER(dev) rte_compiler_barrier()
-#define OSAL_SMP_RMB(dev) rte_rmb()
-#define OSAL_SMP_WMB(dev) rte_wmb()
-#define OSAL_RMB(dev) rte_rmb()
-#define OSAL_WMB(dev) rte_wmb()
+#define OSAL_MMIOWB(dev)		rte_wmb()
+#define OSAL_BARRIER(dev)		rte_compiler_barrier()
+#define OSAL_SMP_RMB(dev)		rte_rmb()
+#define OSAL_SMP_WMB(dev)		rte_wmb()
+#define OSAL_RMB(dev)			rte_rmb()
+#define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE	(8)
+#define OSAL_BITS_PER_BYTE		(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long)*OSAL_BITS_PER_BYTE)
-#define OSAL_BITS_PER_UL_MASK   (OSAL_BITS_PER_UL - 1)
-
-#define OSAL_BUILD_BUG_ON(cond) nothing
-#define ETH_ALEN ETHER_ADDR_LEN
-
-static inline u32 osal_ffz(unsigned long word)
-{
-	unsigned long first_zero;
-
-	first_zero = __builtin_ffsl(~word);
-	return first_zero ? (first_zero - 1) : OSAL_BITS_PER_UL;
-}
-
-static inline void OSAL_SET_BIT(u32 nr, unsigned long *addr)
-{
-	addr[nr / OSAL_BITS_PER_UL] |= 1UL << (nr & OSAL_BITS_PER_UL_MASK);
-}
-
-static inline void OSAL_CLEAR_BIT(u32 nr, unsigned long *addr)
-{
-	addr[nr / OSAL_BITS_PER_UL] &= ~(1UL << (nr & OSAL_BITS_PER_UL_MASK));
-}
-
-static inline bool OSAL_TEST_BIT(u32 nr, unsigned long *addr)
-{
-	return !!(addr[nr / OSAL_BITS_PER_UL] &
-		   (1UL << (nr & OSAL_BITS_PER_UL_MASK)));
-}
-
-static inline u32 OSAL_FIND_FIRST_ZERO_BIT(unsigned long *addr, u32 limit)
-{
-	u32 i;
-	u32 nwords = 0;
-	OSAL_BUILD_BUG_ON(!limit);
-	nwords = (limit - 1) / OSAL_BITS_PER_UL + 1;
-	for (i = 0; i < nwords; i++)
-		if (~(addr[i] != 0))
-			break;
-	return (i == nwords) ? limit : i * OSAL_BITS_PER_UL + osal_ffz(addr[i]);
-}
+#define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
-/* SR-IOV channel */
+/* Bitops */
+void qede_set_bit(u32, unsigned long *);
+#define OSAL_SET_BIT(bit, bitmap) \
+	qede_set_bit(bit, bitmap)
+
+void qede_clr_bit(u32, unsigned long *);
+#define OSAL_CLEAR_BIT(bit, bitmap) \
+	qede_clr_bit(bit, bitmap)
+
+bool qede_test_bit(u32, unsigned long *);
+#define OSAL_TEST_BIT(bit, bitmap) \
+	qede_test_bit(bit, bitmap)
+
+u32 qede_find_first_zero_bit(unsigned long *, u32);
+#define OSAL_FIND_FIRST_ZERO_BIT(bitmap, length) \
+	qede_find_first_zero_bit(bitmap, length)
+
+#define OSAL_BUILD_BUG_ON(cond)		nothing
+#define ETH_ALEN			ETHER_ADDR_LEN
 
 #define OSAL_LINK_UPDATE(hwfn) nothing
+#define OSAL_DCBX_AEN(hwfn, mib_type) nothing
+
+/* SR-IOV channel */
+
 #define OSAL_VF_FLR_UPDATE(hwfn) nothing
 #define OSAL_VF_SEND_MSG2PF(dev, done, msg, reply_addr, msg_size, reply_size) 0
 #define OSAL_VF_CQE_COMPLETION(_dev_p, _cqe, _protocol)	(0)
@@ -333,15 +318,18 @@ static inline u32 OSAL_FIND_FIRST_ZERO_BIT(unsigned long *addr, u32 limit)
 #define OSAL_IOV_VF_ACQUIRE(hwfn, vfid) 0
 #define OSAL_IOV_VF_CLEANUP(hwfn, vfid) nothing
 #define OSAL_IOV_VF_VPORT_UPDATE(hwfn, vfid, p_params, p_mask) 0
-#define OSAL_VF_FILL_ACQUIRE_RESC_REQ(_dev_p, _resc_req, _os_info) nothing
 #define OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(_dev_p, _resc_resp) 0
 #define OSAL_IOV_GET_OS_TYPE() 0
 
-u32 qed_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
-		   u8 *input_buf, u32 max_size, u8 *unzip_buf);
+void qede_vf_fill_driver_data(struct ecore_hwfn *, struct vf_pf_resc_request *,
+			      struct ecore_vf_acquire_sw_info *);
+#define OSAL_VF_FILL_ACQUIRE_RESC_REQ(_dev_p, _resc_req, _os_info) \
+	qede_vf_fill_driver_data(_dev_p, _resc_req, _os_info)
 
+u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
+		   u8 *input_buf, u32 max_size, u8 *unzip_buf);
 #define OSAL_UNZIP_DATA(p_hwfn, input_len, buf, max_size, unzip_buf) \
-	qed_unzip_data(p_hwfn, input_len, buf, max_size, unzip_buf)
+	qede_unzip_data(p_hwfn, input_len, buf, max_size, unzip_buf)
 
 /* TODO: */
 #define OSAL_SCHEDULE_RECOVERY_HANDLER(hwfn) nothing
@@ -357,13 +345,13 @@ u32 qed_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
 #define RTE_ROUNDUP(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
 #define ROUNDUP(value, to_what) RTE_ROUNDUP((value), (to_what))
 
-unsigned long log2_align(unsigned long n);
+unsigned long qede_log2_align(unsigned long n);
 #define OSAL_ROUNDUP_POW_OF_TWO(val) \
-	log2_align(val)
+	qede_log2_align(val)
 
-u32 osal_log2(u32 val);
+u32 qede_osal_log2(u32);
 #define OSAL_LOG2(val) \
-	osal_log2(val)
+	qede_osal_log2(val)
 
 #define PRINT(format, ...) printf
 #define PRINT_ERR(format, ...) PRINT
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 942aaee..79e7526 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -152,6 +152,7 @@ struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
+struct ecore_dcbx_info;
 
 struct ecore_rt_data {
 	u32 *init_val;
@@ -499,6 +500,7 @@ struct ecore_hwfn {
 	struct ecore_vf_iov *vf_iov_info;
 	struct ecore_pf_iov *pf_iov_info;
 	struct ecore_mcp_info *mcp_info;
+	struct ecore_dcbx_info *p_dcbx_info;
 
 	struct ecore_hw_cid_data *p_tx_cids;
 	struct ecore_hw_cid_data *p_rx_cids;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
new file mode 100644
index 0000000..6a966cb
--- /dev/null
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -0,0 +1,887 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_sp_commands.h"
+#include "ecore_dcbx.h"
+#include "ecore_cxt.h"
+#include "ecore_gtt_reg_addr.h"
+#include "ecore_iro.h"
+
+#define ECORE_DCBX_MAX_MIB_READ_TRY	(100)
+#define ECORE_MAX_PFC_PRIORITIES	8
+#define ECORE_ETH_TYPE_DEFAULT		(0)
+
+#define ECORE_DCBX_INVALID_PRIORITY	0xFF
+
+/* Get Traffic Class from priority traffic class table, 4 bits represent
+ * the traffic class corresponding to the priority.
+ */
+#define ECORE_DCBX_PRIO2TC(pri_tc_tbl, prio) \
+		((u32)(pri_tc_tbl >> ((7 - prio) * 4)) & 0x7)
+
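+/* A hedged worked example (table value hypothetical): priority 0 sits
+ * in the top nibble, priority 7 in the bottom one, so with
+ * pri_tc_tbl = 0x32221100, ECORE_DCBX_PRIO2TC(pri_tc_tbl, 0) reads
+ * bits 31:28 and yields TC 3, while priority 7 reads bits 3:0 and
+ * yields TC 0.
+ */
+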
+static bool ecore_dcbx_app_ethtype(u32 app_info_bitmap)
+{
+	return (ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+		DCBX_APP_SF_ETHTYPE) ? true : false;
+}
+
+static bool ecore_dcbx_app_port(u32 app_info_bitmap)
+{
+	return (ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+		DCBX_APP_SF_PORT) ? true : false;
+}
+
+static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+	return (ecore_dcbx_app_ethtype(app_info_bitmap) &&
+		proto_id == ECORE_ETH_TYPE_DEFAULT) ? true : false;
+}
+
+static bool ecore_dcbx_enabled(u32 dcbx_cfg_bitmap)
+{
+	return (ECORE_MFW_GET_FIELD(dcbx_cfg_bitmap, DCBX_CONFIG_VERSION) ==
+		DCBX_CONFIG_VERSION_DISABLED) ? false : true;
+}
+
+static bool ecore_dcbx_cee(u32 dcbx_cfg_bitmap)
+{
+	return (ECORE_MFW_GET_FIELD(dcbx_cfg_bitmap, DCBX_CONFIG_VERSION) ==
+		DCBX_CONFIG_VERSION_CEE) ? true : false;
+}
+
+static bool ecore_dcbx_ieee(u32 dcbx_cfg_bitmap)
+{
+	return (ECORE_MFW_GET_FIELD(dcbx_cfg_bitmap, DCBX_CONFIG_VERSION) ==
+		DCBX_CONFIG_VERSION_IEEE) ? true : false;
+}
+
+/* @@@TBD A0 Eagle workaround */
+void ecore_dcbx_eagle_workaround(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt, bool set_to_pfc)
+{
+	if (!ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn))
+		return;
+
+	ecore_wr(p_hwfn, p_ptt,
+		 YSEM_REG_FAST_MEMORY + 0x20000 /* RAM in FASTMEM */  +
+		 YSTORM_FLOW_CONTROL_MODE_OFFSET,
+		 set_to_pfc ? flow_ctrl_pfc : flow_ctrl_pause);
+	ecore_wr(p_hwfn, p_ptt, NIG_REG_FLOWCTRL_MODE,
+		 EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE);
+}
+
+static void
+ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
+		       struct ecore_dcbx_results *p_data)
+{
+	struct ecore_hw_info *p_info = &p_hwfn->hw_info;
+	enum dcbx_protocol_type id;
+	bool enable, update;
+	u8 prio, tc, size;
+	const char *name;	/* @DPDK */
+	int i;
+
+	size = OSAL_ARRAY_SIZE(ecore_dcbx_app_update);
+
+	DP_INFO(p_hwfn, "DCBX negotiated: %d\n", p_data->dcbx_enabled);
+
+	for (i = 0; i < size; i++) {
+		id = ecore_dcbx_app_update[i].id;
+		name = ecore_dcbx_app_update[i].name;
+
+		enable = p_data->arr[id].enable;
+		update = p_data->arr[id].update;
+		tc = p_data->arr[id].tc;
+		prio = p_data->arr[id].priority;
+
+		DP_INFO(p_hwfn,
+			"%s info: update %d, enable %d, prio %d, tc %d, num_tc %d\n",
+			name, update, enable, prio, tc, p_info->num_tc);
+	}
+}
+
+static void
+ecore_dcbx_set_pf_tcs(struct ecore_hw_info *p_info,
+		      u8 tc, enum ecore_pci_personality personality)
+{
+	/* QM reconf data */
+	if (p_info->personality == personality) {
+		if (personality == ECORE_PCI_ETH)
+			p_info->non_offload_tc = tc;
+		else
+			p_info->offload_tc = tc;
+	}
+}
+
+void
+ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
+		      struct ecore_hw_info *p_info,
+		      bool enable, bool update, u8 prio, u8 tc,
+		      enum dcbx_protocol_type type,
+		      enum ecore_pci_personality personality)
+{
+	/* PF update ramrod data */
+	p_data->arr[type].update = update;
+	p_data->arr[type].enable = enable;
+	p_data->arr[type].priority = prio;
+	p_data->arr[type].tc = tc;
+
+	ecore_dcbx_set_pf_tcs(p_info, tc, personality);
+}
+
+/* Update app protocol data and hw_info fields with the TLV info */
+static void
+ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
+			   struct ecore_hwfn *p_hwfn,
+			   bool enable, bool update, u8 prio, u8 tc,
+			   enum dcbx_protocol_type type)
+{
+	struct ecore_hw_info *p_info = &p_hwfn->hw_info;
+	enum ecore_pci_personality personality;
+	enum dcbx_protocol_type id;
+	const char *name;	/* @DPDK */
+	u8 size;
+	int i;
+
+	size = OSAL_ARRAY_SIZE(ecore_dcbx_app_update);
+
+	for (i = 0; i < size; i++) {
+		id = ecore_dcbx_app_update[i].id;
+
+		if (type != id)
+			continue;
+
+		personality = ecore_dcbx_app_update[i].personality;
+		name = ecore_dcbx_app_update[i].name;
+
+		ecore_dcbx_set_params(p_data, p_info, enable, update,
+				      prio, tc, type, personality);
+	}
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_app_priority(u8 pri_bitmap, u8 *priority)
+{
+	u32 pri_mask, pri = ECORE_MAX_PFC_PRIORITIES;
+	u32 index = ECORE_MAX_PFC_PRIORITIES - 1;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	/* Bitmap 1 corresponds to priority 0, return priority 0 */
+	if (pri_bitmap == 1) {
+		*priority = 0;
+		return rc;
+	}
+
+	/* Choose the highest priority */
+	while ((pri == ECORE_MAX_PFC_PRIORITIES) && index) {
+		pri_mask = 1 << index;
+		if (pri_bitmap & pri_mask)
+			pri = index;
+		index--;
+	}
+
+	if (pri < ECORE_MAX_PFC_PRIORITIES)
+		*priority = (u8)pri;
+	else
+		rc = ECORE_INVAL;
+
+	return rc;
+}
+
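+/* A hedged example (bitmap hypothetical): pri_bitmap = 0x28 sets
+ * priorities 3 and 5; the downward scan from index 7 stops at the
+ * highest set bit, so ecore_dcbx_get_app_priority(0x28, &pri) returns
+ * ECORE_SUCCESS with pri == 5.
+ */
+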
+static bool
+ecore_dcbx_get_app_protocol_type(struct ecore_hwfn *p_hwfn,
+				 u32 app_prio_bitmap, u16 id, int *type)
+{
+	bool status = false;
+
+	if (ecore_dcbx_default_tlv(app_prio_bitmap, id)) {
+		*type = DCBX_PROTOCOL_ETH;
+		status = true;
+	} else {
+		DP_ERR(p_hwfn, "Unsupported protocol %d\n", id);
+	}
+
+	return status;
+}
+
+/* Parse app TLVs to update TC information in the hw_info structure for
+ * reconfiguring QM. Get protocol specific data for PF update ramrod command.
+ */
+static enum _ecore_status_t
+ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn,
+		       struct ecore_dcbx_results *p_data,
+		       struct dcbx_app_priority_entry *p_tbl, u32 pri_tc_tbl,
+		       int count, bool dcbx_enabled)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 tc, priority, priority_map;
+	int i, type = -1;
+	u16 protocol_id;
+	bool enable;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "Num APP entries = %d\n", count);
+
+	/* Parse APP TLV */
+	for (i = 0; i < count; i++) {
+		protocol_id = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+						  DCBX_APP_PROTOCOL_ID);
+		priority_map = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+						   DCBX_APP_PRI_MAP);
+		rc = ecore_dcbx_get_app_priority(priority_map, &priority);
+		if (rc == ECORE_INVAL) {
+			DP_ERR(p_hwfn, "Invalid priority\n");
+			return rc;
+		}
+
+		tc = ECORE_DCBX_PRIO2TC(pri_tc_tbl, priority);
+		if (ecore_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
+						     protocol_id, &type)) {
+			/* ETH always has the enable bit reset, as it gets
+			 * vlan information per packet. For other protocols,
+			 * it should be set according to the dcbx_enabled
+			 * indication, but we only get here if there was an
+			 * app tlv for the protocol, so dcbx must be enabled.
+			 */
+			enable = (type == DCBX_PROTOCOL_ETH ? false : true);
+
+			ecore_dcbx_update_app_info(p_data, p_hwfn, enable, true,
+						   priority, tc, type);
+		}
+	}
+	/* Update ramrod protocol data and hw_info fields
+	 * with default info when corresponding APP TLVs are not detected.
+	 * The enabled field follows different logic for ethernet, as only
+	 * for ethernet should dcb be disabled by default - the information
+	 * arrives from the OS (unless an explicit app tlv was present).
+	 */
+	tc = p_data->arr[DCBX_PROTOCOL_ETH].tc;
+	priority = p_data->arr[DCBX_PROTOCOL_ETH].priority;
+	for (type = 0; type < DCBX_MAX_PROTOCOL_TYPE; type++) {
+		if (p_data->arr[type].update)
+			continue;
+
+		enable = (type == DCBX_PROTOCOL_ETH) ? false : dcbx_enabled;
+		ecore_dcbx_update_app_info(p_data, p_hwfn, enable, true,
+					   priority, tc, type);
+	}
+
+	return ECORE_SUCCESS;
+}
+
+/* Parse app TLVs to update TC information in the hw_info structure for
+ * reconfiguring QM. Get protocol specific data for PF update ramrod command.
+ */
+static enum _ecore_status_t
+ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
+{
+	struct dcbx_app_priority_feature *p_app;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_dcbx_results data = { 0 };
+	struct dcbx_app_priority_entry *p_tbl;
+	struct dcbx_ets_feature *p_ets;
+	struct ecore_hw_info *p_info;
+	u32 pri_tc_tbl, flags;
+	bool dcbx_enabled;
+	int num_entries;
+
+	/* If DCBx version is non-zero, then negotiation was
+	 * successfully performed
+	 */
+	flags = p_hwfn->p_dcbx_info->operational.flags;
+	dcbx_enabled = ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) != 0;
+
+	p_app = &p_hwfn->p_dcbx_info->operational.features.app;
+	p_tbl = p_app->app_pri_tbl;
+
+	p_ets = &p_hwfn->p_dcbx_info->operational.features.ets;
+	pri_tc_tbl = p_ets->pri_tc_tbl[0];
+
+	p_info = &p_hwfn->hw_info;
+	num_entries = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
+
+	rc = ecore_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
+				    num_entries, dcbx_enabled);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_info->num_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
+	data.pf_id = p_hwfn->rel_pf_id;
+	data.dcbx_enabled = dcbx_enabled;
+
+	ecore_dcbx_dp_protocol(p_hwfn, &data);
+
+	OSAL_MEMCPY(&p_hwfn->p_dcbx_info->results, &data,
+		    sizeof(struct ecore_dcbx_results));
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_copy_mib(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt,
+		    struct ecore_dcbx_mib_meta_data *p_data,
+		    enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u32 prefix_seq_num, suffix_seq_num;
+	int read_count = 0;
+
+	do {
+		if (type == ECORE_DCBX_REMOTE_LLDP_MIB) {
+			ecore_memcpy_from(p_hwfn, p_ptt, p_data->lldp_remote,
+					  p_data->addr, p_data->size);
+			prefix_seq_num = p_data->lldp_remote->prefix_seq_num;
+			suffix_seq_num = p_data->lldp_remote->suffix_seq_num;
+		} else {
+			ecore_memcpy_from(p_hwfn, p_ptt, p_data->mib,
+					  p_data->addr, p_data->size);
+			prefix_seq_num = p_data->mib->prefix_seq_num;
+			suffix_seq_num = p_data->mib->suffix_seq_num;
+		}
+		read_count++;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "mib type = %d, try count = %d prefix seq num  = %d suffix seq num = %d\n",
+			   type, read_count, prefix_seq_num, suffix_seq_num);
+	} while ((prefix_seq_num != suffix_seq_num) &&
+		 (read_count < ECORE_DCBX_MAX_MIB_READ_TRY));
+
+	if (read_count >= ECORE_DCBX_MAX_MIB_READ_TRY) {
+		DP_ERR(p_hwfn,
+		       "MIB read err, mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
+		       type, read_count, prefix_seq_num, suffix_seq_num);
+		rc = ECORE_IO;
+	}
+
+	return rc;
+}
+
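+/* The prefix/suffix pair read above is a seqlock-style consistency
+ * check: the MFW bumps prefix_seq_num before rewriting the shared MIB
+ * and suffix_seq_num after, so a copy that straddles an update sees
+ * mismatched numbers and retries. A minimal sketch of the reader
+ * pattern (layout and names hypothetical):
+ *
+ *	do {
+ *		OSAL_MEMCPY(&local, shadow, sizeof(local));
+ *	} while (local.prefix_seq_num != local.suffix_seq_num &&
+ *		 ++tries < max_try);
+ */
+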
+static enum _ecore_status_t
+ecore_dcbx_get_priority_info(struct ecore_hwfn *p_hwfn,
+			     struct ecore_dcbx_app_prio *p_prio,
+			     struct ecore_dcbx_results *p_results)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	if (p_results->arr[DCBX_PROTOCOL_ETH].update &&
+	    p_results->arr[DCBX_PROTOCOL_ETH].enable) {
+		p_prio->eth = p_results->arr[DCBX_PROTOCOL_ETH].priority;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "Priority: eth %d\n", p_prio->eth);
+	}
+
+	return rc;
+}
+
+static void
+ecore_dcbx_get_app_data(struct ecore_hwfn *p_hwfn,
+			struct dcbx_app_priority_feature *p_app,
+			struct dcbx_app_priority_entry *p_tbl,
+			struct ecore_dcbx_params *p_params)
+{
+	int i;
+
+	p_params->app_willing = ECORE_MFW_GET_FIELD(p_app->flags,
+						    DCBX_APP_WILLING);
+	p_params->app_valid = ECORE_MFW_GET_FIELD(p_app->flags,
+						  DCBX_APP_ENABLED);
+	p_params->num_app_entries = ECORE_MFW_GET_FIELD(p_app->flags,
+							DCBX_APP_NUM_ENTRIES);
+	for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++)
+		p_params->app_bitmap[i] = p_tbl[i].entry;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+		   "APP params: willing %d, valid %d\n",
+		   p_params->app_willing, p_params->app_valid);
+}
+
+static void
+ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn,
+			u32 pfc, struct ecore_dcbx_params *p_params)
+{
+	p_params->pfc_willing = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_WILLING);
+	p_params->max_pfc_tc = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_CAPS);
+	p_params->pfc_enabled = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_ENABLED);
+	p_params->pfc_bitmap = pfc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+		   "PFC params: willing %d, pfc_bitmap %d\n",
+		   p_params->pfc_willing, p_params->pfc_bitmap);
+}
+
+static void
+ecore_dcbx_get_ets_data(struct ecore_hwfn *p_hwfn,
+			struct dcbx_ets_feature *p_ets,
+			struct ecore_dcbx_params *p_params)
+{
+	int i;
+
+	p_params->ets_willing = ECORE_MFW_GET_FIELD(p_ets->flags,
+						    DCBX_ETS_WILLING);
+	p_params->ets_enabled = ECORE_MFW_GET_FIELD(p_ets->flags,
+						    DCBX_ETS_ENABLED);
+	p_params->max_ets_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
+						   DCBX_ETS_MAX_TCS);
+	p_params->ets_pri_tc_tbl[0] = p_ets->pri_tc_tbl[0];
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+		   "ETS params: willing %d, pri_tc_tbl_0 %x max_ets_tc %d\n",
+		   p_params->ets_willing, p_params->ets_pri_tc_tbl[0],
+		   p_params->max_ets_tc);
+
+	/* 8-bit tsa and bw data corresponding to each of the 8 TCs are
+	 * encoded in a u32 array of size 2.
+	 */
+	for (i = 0; i < 2; i++) {
+		p_params->ets_tc_tsa_tbl[i] = p_ets->tc_tsa_tbl[i];
+		p_params->ets_tc_bw_tbl[i] = p_ets->tc_bw_tbl[i];
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "elem %d  bw_tbl %x tsa_tbl %x\n",
+			   i, p_params->ets_tc_bw_tbl[i],
+			   p_params->ets_tc_tsa_tbl[i]);
+	}
+}
+
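+/* One plausible per-TC unpacking of the two-word tables, assuming TC k
+ * occupies byte k (byte order within each u32 is firmware-defined, so
+ * treat this purely as an illustration):
+ *
+ *	bw = (tc_bw_tbl[tc / 4] >> ((tc % 4) * 8)) & 0xff;
+ */
+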
+static enum _ecore_status_t
+ecore_dcbx_get_common_params(struct ecore_hwfn *p_hwfn,
+			     struct dcbx_app_priority_feature *p_app,
+			     struct dcbx_app_priority_entry *p_tbl,
+			     struct dcbx_ets_feature *p_ets,
+			     u32 pfc, struct ecore_dcbx_params *p_params)
+{
+	ecore_dcbx_get_app_data(p_hwfn, p_app, p_tbl, p_params);
+	ecore_dcbx_get_ets_data(p_hwfn, p_ets, p_params);
+	ecore_dcbx_get_pfc_data(p_hwfn, pfc, p_params);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_local_params(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    struct ecore_dcbx_get *params)
+{
+	struct ecore_dcbx_admin_params *p_local;
+	struct dcbx_app_priority_feature *p_app;
+	struct dcbx_app_priority_entry *p_tbl;
+	struct ecore_dcbx_params *p_data;
+	struct dcbx_ets_feature *p_ets;
+	u32 pfc;
+
+	p_local = &params->local;
+	p_data = &p_local->params;
+	p_app = &p_hwfn->p_dcbx_info->local_admin.features.app;
+	p_tbl = p_app->app_pri_tbl;
+	p_ets = &p_hwfn->p_dcbx_info->local_admin.features.ets;
+	pfc = p_hwfn->p_dcbx_info->local_admin.features.pfc;
+
+	ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+	p_local->valid = true;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_remote_params(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     struct ecore_dcbx_get *params)
+{
+	struct ecore_dcbx_remote_params *p_remote;
+	struct dcbx_app_priority_feature *p_app;
+	struct dcbx_app_priority_entry *p_tbl;
+	struct ecore_dcbx_params *p_data;
+	struct dcbx_ets_feature *p_ets;
+	u32 pfc;
+
+	p_remote = &params->remote;
+	p_data = &p_remote->params;
+	p_app = &p_hwfn->p_dcbx_info->remote.features.app;
+	p_tbl = p_app->app_pri_tbl;
+	p_ets = &p_hwfn->p_dcbx_info->remote.features.ets;
+	pfc = p_hwfn->p_dcbx_info->remote.features.pfc;
+
+	ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+	p_remote->valid = true;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_operational_params(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_dcbx_get *params)
+{
+	struct ecore_dcbx_operational_params *p_operational;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct dcbx_app_priority_feature *p_app;
+	struct dcbx_app_priority_entry *p_tbl;
+	struct ecore_dcbx_results *p_results;
+	struct ecore_dcbx_params *p_data;
+	struct dcbx_ets_feature *p_ets;
+	bool enabled, err;
+	u32 pfc, flags;
+
+	flags = p_hwfn->p_dcbx_info->operational.flags;
+
+	/* If DCBx version is non-zero, then negotiation
+	 * was successfully performed
+	 */
+	p_operational = &params->operational;
+	enabled = ecore_dcbx_enabled(flags);
+	if (!enabled) {
+		p_operational->enabled = enabled;
+		p_operational->valid = false;
+		return ECORE_INVAL;
+	}
+
+	p_data = &p_operational->params;
+	p_results = &p_hwfn->p_dcbx_info->results;
+	p_app = &p_hwfn->p_dcbx_info->operational.features.app;
+	p_tbl = p_app->app_pri_tbl;
+	p_ets = &p_hwfn->p_dcbx_info->operational.features.ets;
+	pfc = p_hwfn->p_dcbx_info->operational.features.pfc;
+
+	p_operational->ieee = ecore_dcbx_ieee(flags);
+	p_operational->cee = ecore_dcbx_cee(flags);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+		   "Version support: ieee %d, cee %d\n",
+		   p_operational->ieee, p_operational->cee);
+
+	ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+	ecore_dcbx_get_priority_info(p_hwfn, &p_operational->app_prio,
+				     p_results);
+	err = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_ERROR);
+	p_operational->err = err;
+	p_operational->enabled = enabled;
+	p_operational->valid = true;
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_local_lldp_params(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 struct ecore_dcbx_get *params)
+{
+	struct ecore_dcbx_lldp_local *p_local;
+	osal_size_t size;
+	u32 *dest;
+
+	p_local = &params->lldp_local;
+
+	size = OSAL_ARRAY_SIZE(p_local->local_chassis_id);
+	dest = p_hwfn->p_dcbx_info->get.lldp_local.local_chassis_id;
+	OSAL_MEMCPY(dest, p_local->local_chassis_id, size);
+
+	size = OSAL_ARRAY_SIZE(p_local->local_port_id);
+	dest = p_hwfn->p_dcbx_info->get.lldp_local.local_port_id;
+	OSAL_MEMCPY(dest, p_local->local_port_id, size);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_remote_lldp_params(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_dcbx_get *params)
+{
+	struct ecore_dcbx_lldp_remote *p_remote;
+	osal_size_t size;
+	u32 *dest;
+
+	p_remote = &params->lldp_remote;
+
+	size = OSAL_ARRAY_SIZE(p_remote->peer_chassis_id);
+	dest = p_hwfn->p_dcbx_info->get.lldp_remote.peer_chassis_id;
+	OSAL_MEMCPY(dest, p_remote->peer_chassis_id, size);
+
+	size = OSAL_ARRAY_SIZE(p_remote->peer_port_id);
+	dest = p_hwfn->p_dcbx_info->get.lldp_remote.peer_port_id;
+	OSAL_MEMCPY(dest, p_remote->peer_port_id, size);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_get_params(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_dcbx_get *p_params;
+
+	p_params = &p_hwfn->p_dcbx_info->get;
+
+	switch (type) {
+	case ECORE_DCBX_REMOTE_MIB:
+		ecore_dcbx_get_remote_params(p_hwfn, p_ptt, p_params);
+		break;
+	case ECORE_DCBX_LOCAL_MIB:
+		ecore_dcbx_get_local_params(p_hwfn, p_ptt, p_params);
+		break;
+	case ECORE_DCBX_OPERATIONAL_MIB:
+		ecore_dcbx_get_operational_params(p_hwfn, p_ptt, p_params);
+		break;
+	case ECORE_DCBX_REMOTE_LLDP_MIB:
+		rc = ecore_dcbx_get_remote_lldp_params(p_hwfn, p_ptt, p_params);
+		break;
+	case ECORE_DCBX_LOCAL_LLDP_MIB:
+		rc = ecore_dcbx_get_local_lldp_params(p_hwfn, p_ptt, p_params);
+		break;
+	default:
+		DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_read_local_lldp_mib(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_dcbx_mib_meta_data data;
+
+	data.addr = p_hwfn->mcp_info->port_addr + offsetof(struct public_port,
+							   lldp_config_params);
+	data.lldp_local = p_hwfn->p_dcbx_info->lldp_local;
+	data.size = sizeof(struct lldp_config_params_s);
+	ecore_memcpy_from(p_hwfn, p_ptt, data.lldp_local, data.addr, data.size);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_read_remote_lldp_mib(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_dcbx_mib_meta_data data;
+
+	data.addr = p_hwfn->mcp_info->port_addr + offsetof(struct public_port,
+							   lldp_status_params);
+	data.lldp_remote = p_hwfn->p_dcbx_info->lldp_remote;
+	data.size = sizeof(struct lldp_status_params_s);
+	rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_read_operational_mib(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				enum ecore_mib_read_type type)
+{
+	struct ecore_dcbx_mib_meta_data data;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	data.addr = p_hwfn->mcp_info->port_addr +
+	    offsetof(struct public_port, operational_dcbx_mib);
+	data.mib = &p_hwfn->p_dcbx_info->operational;
+	data.size = sizeof(struct dcbx_mib);
+	rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_read_remote_mib(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   enum ecore_mib_read_type type)
+{
+	struct ecore_dcbx_mib_meta_data data;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	data.addr = p_hwfn->mcp_info->port_addr +
+	    offsetof(struct public_port, remote_dcbx_mib);
+	data.mib = &p_hwfn->p_dcbx_info->remote;
+	data.size = sizeof(struct dcbx_mib);
+	rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_read_local_mib(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	struct ecore_dcbx_mib_meta_data data;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	data.addr = p_hwfn->mcp_info->port_addr +
+	    offsetof(struct public_port, local_admin_dcbx_mib);
+	data.local_admin = &p_hwfn->p_dcbx_info->local_admin;
+	data.size = sizeof(struct dcbx_local_params);
+	ecore_memcpy_from(p_hwfn, p_ptt, data.local_admin,
+			  data.addr, data.size);
+
+	return rc;
+}
+
+static enum _ecore_status_t ecore_dcbx_read_mib(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt,
+						enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	switch (type) {
+	case ECORE_DCBX_OPERATIONAL_MIB:
+		rc = ecore_dcbx_read_operational_mib(p_hwfn, p_ptt, type);
+		break;
+	case ECORE_DCBX_REMOTE_MIB:
+		rc = ecore_dcbx_read_remote_mib(p_hwfn, p_ptt, type);
+		break;
+	case ECORE_DCBX_LOCAL_MIB:
+		rc = ecore_dcbx_read_local_mib(p_hwfn, p_ptt);
+		break;
+	case ECORE_DCBX_REMOTE_LLDP_MIB:
+		rc = ecore_dcbx_read_remote_lldp_mib(p_hwfn, p_ptt, type);
+		break;
+	case ECORE_DCBX_LOCAL_LLDP_MIB:
+		rc = ecore_dcbx_read_local_lldp_mib(p_hwfn, p_ptt);
+		break;
+	default:
+		DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+/*
+ * Read updated MIB.
+ * Reconfigure QM and invoke PF update ramrod command if operational MIB
+ * change is detected.
+ */
+enum _ecore_status_t
+ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			    enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	rc = ecore_dcbx_read_mib(p_hwfn, p_ptt, type);
+	if (rc)
+		return rc;
+
+	if (type == ECORE_DCBX_OPERATIONAL_MIB) {
+		rc = ecore_dcbx_process_mib_info(p_hwfn);
+		if (!rc) {
+			bool enabled;
+
+			/* reconfigure tcs of QM queues according
+			 * to negotiation results
+			 */
+			ecore_qm_reconf(p_hwfn, p_ptt);
+
+			/* update storm FW with negotiation results */
+			ecore_sp_pf_update(p_hwfn);
+
+			/* set eagle engine 1 flow control workaround
+			 * according to negotiation results
+			 */
+			enabled = p_hwfn->p_dcbx_info->results.dcbx_enabled;
+			ecore_dcbx_eagle_workaround(p_hwfn, p_ptt, enabled);
+		}
+	}
+	ecore_dcbx_get_params(p_hwfn, p_ptt, type);
+	OSAL_DCBX_AEN(p_hwfn, type);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					  sizeof(struct ecore_dcbx_info));
+	if (!p_hwfn->p_dcbx_info) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to allocate `struct ecore_dcbx_info'");
+		rc = ECORE_NOMEM;
+	}
+
+	return rc;
+}
+
+void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
+			  struct ecore_dcbx_info *p_dcbx_info)
+{
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_dcbx_info);
+}
+
+static void ecore_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
+					    struct ecore_dcbx_results *p_src,
+					    enum dcbx_protocol_type type)
+{
+	p_data->dcb_enable_flag = p_src->arr[type].enable;
+	p_data->dcb_priority = p_src->arr[type].priority;
+	p_data->dcb_tc = p_src->arr[type].tc;
+}
+
+/* Set pf update ramrod command params */
+void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
+				     struct pf_update_ramrod_data *p_dest)
+{
+	struct protocol_dcb_data *p_dcb_data;
+	bool update_flag;
+
+	p_dest->pf_id = p_src->pf_id;
+
+	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
+	p_dest->update_eth_dcb_data_flag = update_flag;
+
+	p_dcb_data = &p_dest->eth_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+}
+
+static
+enum _ecore_status_t ecore_dcbx_query(struct ecore_hwfn *p_hwfn,
+				      enum ecore_mib_read_type type)
+{
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc;
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt) {
+		rc = ECORE_TIMEOUT;
+		DP_ERR(p_hwfn, "rc = %d\n", rc);
+		return rc;
+	}
+
+	rc = ecore_dcbx_read_mib(p_hwfn, p_ptt, type);
+	if (rc != ECORE_SUCCESS)
+		goto out;
+
+	rc = ecore_dcbx_get_params(p_hwfn, p_ptt, type);
+
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+	return rc;
+}
+
+enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
+					     struct ecore_dcbx_get *p_get,
+					     enum ecore_mib_read_type type)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_dcbx_query(p_hwfn, type);
+	if (rc)
+		return rc;
+
+	if (p_get != OSAL_NULL)
+		OSAL_MEMCPY(p_get, &p_hwfn->p_dcbx_info->get,
+			    sizeof(struct ecore_dcbx_get));
+
+	return rc;
+}
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
new file mode 100644
index 0000000..d577f4e
--- /dev/null
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_DCBX_H__
+#define __ECORE_DCBX_H__
+
+#include "ecore.h"
+#include "ecore_mcp.h"
+#include "mcp_public.h"
+#include "reg_addr.h"
+#include "ecore_hw.h"
+#include "ecore_hsi_common.h"
+#include "ecore_dcbx_api.h"
+
+#define ECORE_MFW_GET_FIELD(name, field) \
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
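+/* A hedged worked example (entry value hypothetical): with
+ * DCBX_APP_PROTOCOL_ID_MASK 0xffff0000 and _SHIFT 16,
+ * ECORE_MFW_GET_FIELD(0x89060001, DCBX_APP_PROTOCOL_ID) == 0x8906.
+ */
+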
+struct ecore_dcbx_info {
+	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
+	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
+	struct dcbx_local_params local_admin;
+	struct ecore_dcbx_results results;
+	struct dcbx_mib operational;
+	struct dcbx_mib remote;
+	struct ecore_dcbx_set set;
+	struct ecore_dcbx_get get;
+	u8 dcbx_cap;
+};
+
+/* Upper layer driver interface routines */
+enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
+					      struct ecore_ptt *,
+					      struct ecore_dcbx_set *);
+
+/* ECORE local interface routines */
+enum _ecore_status_t
+ecore_dcbx_mib_update_event(struct ecore_hwfn *, struct ecore_ptt *,
+			    enum ecore_mib_read_type);
+
+enum _ecore_status_t ecore_dcbx_read_lldp_params(struct ecore_hwfn *,
+						 struct ecore_ptt *);
+enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_dcbx_info_free(struct ecore_hwfn *, struct ecore_dcbx_info *);
+void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
+				     struct pf_update_ramrod_data *p_dest);
+/* @@@TBD eagle phy workaround */
+void ecore_dcbx_eagle_workaround(struct ecore_hwfn *, struct ecore_ptt *,
+				 bool set_to_pfc);
+
+#endif /* __ECORE_DCBX_H__ */
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
new file mode 100644
index 0000000..1deddd6
--- /dev/null
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -0,0 +1,160 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_DCBX_API_H__
+#define __ECORE_DCBX_API_H__
+
+#include "ecore.h"
+
+#define DCBX_CONFIG_MAX_APP_PROTOCOL	4
+
+enum ecore_mib_read_type {
+	ECORE_DCBX_OPERATIONAL_MIB,
+	ECORE_DCBX_REMOTE_MIB,
+	ECORE_DCBX_LOCAL_MIB,
+	ECORE_DCBX_REMOTE_LLDP_MIB,
+	ECORE_DCBX_LOCAL_LLDP_MIB
+};
+
+struct ecore_dcbx_app_data {
+	bool enable;		/* DCB enabled */
+	bool update;		/* Update indication */
+	u8 priority;		/* Priority */
+	u8 tc;			/* Traffic Class */
+};
+
+#ifndef __EXTRACT__LINUX__
+enum dcbx_protocol_type {
+	DCBX_PROTOCOL_ETH,
+	DCBX_MAX_PROTOCOL_TYPE
+};
+
+#ifdef LINUX_REMOVE
+/* We can't assume the HSI values are available to clients, so we need
+ * to redefine those here.
+ */
+#ifndef LLDP_CHASSIS_ID_STAT_LEN
+#define LLDP_CHASSIS_ID_STAT_LEN 4
+#endif
+#ifndef LLDP_PORT_ID_STAT_LEN
+#define LLDP_PORT_ID_STAT_LEN 4
+#endif
+#ifndef DCBX_MAX_APP_PROTOCOL
+#define DCBX_MAX_APP_PROTOCOL 32
+#endif
+
+#endif
+
+struct ecore_dcbx_lldp_remote {
+	u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
+	u32 peer_port_id[LLDP_PORT_ID_STAT_LEN];
+	bool enable_rx;
+	bool enable_tx;
+	u32 tx_interval;
+	u32 max_credit;
+};
+
+struct ecore_dcbx_lldp_local {
+	u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
+	u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
+};
+
+struct ecore_dcbx_app_prio {
+	u8 eth;
+};
+
+struct ecore_dcbx_params {
+	u32 app_bitmap[DCBX_MAX_APP_PROTOCOL];
+	u16 num_app_entries;
+	bool app_willing;
+	bool app_valid;
+	bool ets_willing;
+	bool ets_enabled;
+	bool valid;		/* Indicate validity of params */
+	u32 ets_pri_tc_tbl[1];
+	u32 ets_tc_bw_tbl[2];
+	u32 ets_tc_tsa_tbl[2];
+	bool pfc_willing;
+	bool pfc_enabled;
+	u32 pfc_bitmap;
+	u8 max_pfc_tc;
+	u8 max_ets_tc;
+};
+
+struct ecore_dcbx_admin_params {
+	struct ecore_dcbx_params params;
+	bool valid;		/* Indicate validity of params */
+};
+
+struct ecore_dcbx_remote_params {
+	struct ecore_dcbx_params params;
+	bool valid;		/* Indicate validity of params */
+};
+
+struct ecore_dcbx_operational_params {
+	struct ecore_dcbx_app_prio app_prio;
+	struct ecore_dcbx_params params;
+	bool valid;		/* Indicate validity of params */
+	bool enabled;
+	bool ieee;
+	bool cee;
+	u32 err;
+};
+
+struct ecore_dcbx_get {
+	struct ecore_dcbx_operational_params operational;
+	struct ecore_dcbx_lldp_remote lldp_remote;
+	struct ecore_dcbx_lldp_local lldp_local;
+	struct ecore_dcbx_remote_params remote;
+	struct ecore_dcbx_admin_params local;
+};
+#endif
+
+struct ecore_dcbx_set {
+	struct ecore_dcbx_admin_params config;
+	bool enabled;
+	u32 ver_num;
+};
+
+struct ecore_dcbx_results {
+	bool dcbx_enabled;
+	u8 pf_id;
+	struct ecore_dcbx_app_data arr[DCBX_MAX_PROTOCOL_TYPE];
+};
+
+struct ecore_dcbx_app_metadata {
+	enum dcbx_protocol_type id;
+	const char *name;	/* @DPDK */
+	enum ecore_pci_personality personality;
+};
+
+struct ecore_dcbx_mib_meta_data {
+	struct lldp_config_params_s *lldp_local;
+	struct lldp_status_params_s *lldp_remote;
+	struct dcbx_local_params *local_admin;
+	struct dcbx_mib *mib;
+	osal_size_t size;
+	u32 addr;
+};
+
+void
+ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
+		      struct ecore_hw_info *p_info,
+		      bool enable, bool update, u8 prio, u8 tc,
+		      enum dcbx_protocol_type type,
+		      enum ecore_pci_personality personality);
+
+enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *,
+					     struct ecore_dcbx_get *,
+					     enum ecore_mib_read_type);
+
+static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+};
+
+#endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e68f60b..38476ee 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_attn_values.h"
+#include "ecore_dcbx.h"
 
 /* Configurable */
 #define ECORE_MIN_DPIS		(4)	/* The minimal number of DPIs required
@@ -157,6 +158,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
+		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
 	}
 }
@@ -279,6 +281,9 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < num_ports; i++) {
 		p_qm_port = &qm_info->qm_port_params[i];
 		p_qm_port->active = 1;
+		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
+		 * be in place
+		 */
 		if (num_ports == 4)
 			p_qm_port->num_active_phys_tcs = 2;
 		else
@@ -477,6 +482,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 				  " dmae_info structure\n");
 			goto alloc_err;
 		}
+
+		/* DCBX initialization */
+		rc = ecore_dcbx_info_alloc(p_hwfn);
+		if (rc) {
+			DP_NOTICE(p_hwfn, true,
+				  "Failed to allocate memory for dcbxstruct\n");
+			goto alloc_err;
+		}
 	}
 
 	p_dev->reset_stats = OSAL_ZALLOC(p_dev, GFP_KERNEL,
@@ -1418,6 +1431,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return mfw_rc;
 		}
 
+		/* send DCBX attention request command */
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "sending phony dcbx set command to trigger DCBx"
+			   " attention handling\n");
+		mfw_rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				       DRV_MSG_CODE_SET_DCBX,
+				       1 << DRV_MB_PARAM_DCBX_NOTIFY_SHIFT,
+				       &load_code, &param);
+		if (mfw_rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "Failed to send DCBX attention request\n");
+			return mfw_rc;
+		}
+
 		p_hwfn->hw_init_done = true;
 	}
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 23ea426..e57155b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -1419,9 +1419,6 @@ enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 	case ECORE_PCI_ETH:
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
-	case ECORE_PCI_ETH_ROCE:
-		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
-		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Unkown VF personality %d\n",
 			  p_hwfn->hw_info.personality);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 7dff695..db41ee0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -18,6 +18,7 @@
 #include "ecore_iov_api.h"
 #include "ecore_gtt_reg_addr.h"
 #include "ecore_iro.h"
+#include "ecore_dcbx.h"
 
 #define CHIP_MCP_RESP_ITER_US 10
 #define EMUL_MCP_RESP_ITER_US (1000 * 1000)
@@ -726,6 +727,9 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 
 	p_link->sfp_tx_fault = !!(status & LINK_STATUS_SFP_TX_FAULT);
 
+	if (p_link->link_up)
+		ecore_dcbx_eagle_workaround(p_hwfn, p_ptt, p_link->pfc_enabled);
+
 	OSAL_LINK_UPDATE(p_hwfn);
 }
 
@@ -998,6 +1002,18 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 		case MFW_DRV_MSG_VF_DISABLED:
 			ecore_mcp_handle_vf_flr(p_hwfn, p_ptt);
 			break;
+		case MFW_DRV_MSG_LLDP_DATA_UPDATED:
+			ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
+						    ECORE_DCBX_REMOTE_LLDP_MIB);
+			break;
+		case MFW_DRV_MSG_DCBX_REMOTE_MIB_UPDATED:
+			ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
+						    ECORE_DCBX_REMOTE_MIB);
+			break;
+		case MFW_DRV_MSG_DCBX_OPERATIONAL_MIB_UPDATED:
+			ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
+						    ECORE_DCBX_OPERATIONAL_MIB);
+			break;
 		case MFW_DRV_MSG_ERROR_RECOVERY:
 			ecore_mcp_handle_process_kill(p_hwfn, p_ptt);
 			break;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index d2c2aae..564a6b2 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -20,6 +20,7 @@
 #include "reg_addr.h"
 #include "ecore_int.h"
 #include "ecore_hw.h"
+#include "ecore_dcbx.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -430,6 +431,9 @@ enum _ecore_status_t ecore_sp_pf_update(struct ecore_hwfn *p_hwfn)
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	ecore_dcbx_set_pf_update_params(&p_hwfn->p_dcbx_info->results,
+					&p_ent->ramrod.pf_update);
+
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 07f3673..0e61502 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -183,6 +183,179 @@ struct couple_mode_teaming {
 #define PORT_CMT_TEAM1              (1 << 2)
 };
 
+/**************************************
+ *     LLDP and DCBX HSI structures
+ **************************************/
+#define LLDP_CHASSIS_ID_STAT_LEN 4
+#define LLDP_PORT_ID_STAT_LEN 4
+#define DCBX_MAX_APP_PROTOCOL		32
+#define MAX_SYSTEM_LLDP_TLV_DATA    32
+
+typedef enum _lldp_agent_e {
+	LLDP_NEAREST_BRIDGE = 0,
+	LLDP_NEAREST_NON_TPMR_BRIDGE,
+	LLDP_NEAREST_CUSTOMER_BRIDGE,
+	LLDP_MAX_LLDP_AGENTS
+} lldp_agent_e;
+
+struct lldp_config_params_s {
+	u32 config;
+#define LLDP_CONFIG_TX_INTERVAL_MASK        0x000000ff
+#define LLDP_CONFIG_TX_INTERVAL_SHIFT       0
+#define LLDP_CONFIG_HOLD_MASK               0x00000f00
+#define LLDP_CONFIG_HOLD_SHIFT              8
+#define LLDP_CONFIG_MAX_CREDIT_MASK         0x0000f000
+#define LLDP_CONFIG_MAX_CREDIT_SHIFT        12
+#define LLDP_CONFIG_ENABLE_RX_MASK          0x40000000
+#define LLDP_CONFIG_ENABLE_RX_SHIFT         30
+#define LLDP_CONFIG_ENABLE_TX_MASK          0x80000000
+#define LLDP_CONFIG_ENABLE_TX_SHIFT         31
+	/* Holds local Chassis ID TLV header, subtype and 9B of payload.
+	   If first byte is 0, then we will use default chassis ID */
+	u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
+	/* Holds local Port ID TLV header, subtype and 9B of payload.
+	   If first byte is 0, then we will use default port ID */
+	u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
+};
+
+struct lldp_status_params_s {
+	u32 prefix_seq_num;
+	u32 status;		/* TBD */
+	/* Holds remote Chassis ID TLV header, subtype and 9B of payload. */
+	u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
+	/* Holds remote Port ID TLV header, subtype and 9B of payload. */
+	u32 peer_port_id[LLDP_PORT_ID_STAT_LEN];
+	u32 suffix_seq_num;
+};
+
+struct dcbx_ets_feature {
+	u32 flags;
+#define DCBX_ETS_ENABLED_MASK                   0x00000001
+#define DCBX_ETS_ENABLED_SHIFT                  0
+#define DCBX_ETS_WILLING_MASK                   0x00000002
+#define DCBX_ETS_WILLING_SHIFT                  1
+#define DCBX_ETS_ERROR_MASK                     0x00000004
+#define DCBX_ETS_ERROR_SHIFT                    2
+#define DCBX_ETS_CBS_MASK                       0x00000008
+#define DCBX_ETS_CBS_SHIFT                      3
+#define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
+#define DCBX_ETS_MAX_TCS_SHIFT                  4
+	u32 pri_tc_tbl[1];
+#define DCBX_CEE_STRICT_PRIORITY		0xf
+#define DCBX_CEE_STRICT_PRIORITY_TC		0x7
+	u32 tc_bw_tbl[2];
+	u32 tc_tsa_tbl[2];
+#define DCBX_ETS_TSA_STRICT			0
+#define DCBX_ETS_TSA_CBS			1
+#define DCBX_ETS_TSA_ETS			2
+};
+
+struct dcbx_app_priority_entry {
+	u32 entry;
+#define DCBX_APP_PRI_MAP_MASK       0x000000ff
+#define DCBX_APP_PRI_MAP_SHIFT      0
+#define DCBX_APP_PRI_0              0x01
+#define DCBX_APP_PRI_1              0x02
+#define DCBX_APP_PRI_2              0x04
+#define DCBX_APP_PRI_3              0x08
+#define DCBX_APP_PRI_4              0x10
+#define DCBX_APP_PRI_5              0x20
+#define DCBX_APP_PRI_6              0x40
+#define DCBX_APP_PRI_7              0x80
+#define DCBX_APP_SF_MASK            0x00000300
+#define DCBX_APP_SF_SHIFT           8
+#define DCBX_APP_SF_ETHTYPE         0
+#define DCBX_APP_SF_PORT            1
+#define DCBX_APP_PROTOCOL_ID_MASK   0xffff0000
+#define DCBX_APP_PROTOCOL_ID_SHIFT  16
+};
+
+/* FW structure in BE */
+struct dcbx_app_priority_feature {
+	u32 flags;
+#define DCBX_APP_ENABLED_MASK           0x00000001
+#define DCBX_APP_ENABLED_SHIFT          0
+#define DCBX_APP_WILLING_MASK           0x00000002
+#define DCBX_APP_WILLING_SHIFT          1
+#define DCBX_APP_ERROR_MASK             0x00000004
+#define DCBX_APP_ERROR_SHIFT            2
+	/* Not in use
+	   #define DCBX_APP_DEFAULT_PRI_MASK       0x00000f00
+	   #define DCBX_APP_DEFAULT_PRI_SHIFT      8
+	 */
+#define DCBX_APP_MAX_TCS_MASK           0x0000f000
+#define DCBX_APP_MAX_TCS_SHIFT          12
+#define DCBX_APP_NUM_ENTRIES_MASK       0x00ff0000
+#define DCBX_APP_NUM_ENTRIES_SHIFT      16
+	struct dcbx_app_priority_entry app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
+};
+
+/* FW structure in BE */
+struct dcbx_features {
+	/* PG feature */
+	struct dcbx_ets_feature ets;
+	/* PFC feature */
+	u32 pfc;
+#define DCBX_PFC_PRI_EN_BITMAP_MASK             0x000000ff
+#define DCBX_PFC_PRI_EN_BITMAP_SHIFT            0
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_0            0x01
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_1            0x02
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_2            0x04
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_3            0x08
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_4            0x10
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_5            0x20
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_6            0x40
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_7            0x80
+
+#define DCBX_PFC_FLAGS_MASK                     0x0000ff00
+#define DCBX_PFC_FLAGS_SHIFT                    8
+#define DCBX_PFC_CAPS_MASK                      0x00000f00
+#define DCBX_PFC_CAPS_SHIFT                     8
+#define DCBX_PFC_MBC_MASK                       0x00004000
+#define DCBX_PFC_MBC_SHIFT                      14
+#define DCBX_PFC_WILLING_MASK                   0x00008000
+#define DCBX_PFC_WILLING_SHIFT                  15
+#define DCBX_PFC_ENABLED_MASK                   0x00010000
+#define DCBX_PFC_ENABLED_SHIFT                  16
+#define DCBX_PFC_ERROR_MASK                     0x00020000
+#define DCBX_PFC_ERROR_SHIFT                    17
+
+	/* APP feature */
+	struct dcbx_app_priority_feature app;
+};
+
+struct dcbx_local_params {
+	u32 config;
+#define DCBX_CONFIG_VERSION_MASK            0x00000003
+#define DCBX_CONFIG_VERSION_SHIFT           0
+#define DCBX_CONFIG_VERSION_DISABLED        0
+#define DCBX_CONFIG_VERSION_IEEE            1
+#define DCBX_CONFIG_VERSION_CEE             2
+
+	u32 flags;
+	struct dcbx_features features;
+};
+
+struct dcbx_mib {
+	u32 prefix_seq_num;
+	u32 flags;
+	/*
+	 * #define DCBX_CONFIG_VERSION_MASK            0x00000003
+	 * #define DCBX_CONFIG_VERSION_SHIFT           0
+	 * #define DCBX_CONFIG_VERSION_DISABLED        0
+	 * #define DCBX_CONFIG_VERSION_IEEE            1
+	 * #define DCBX_CONFIG_VERSION_CEE             2
+	 */
+	struct dcbx_features features;
+	u32 suffix_seq_num;
+};
+
+struct lldp_system_tlvs_buffer_s {
+	u16 valid;
+	u16 length;
+	u32 data[MAX_SYSTEM_LLDP_TLV_DATA];
+};
+
 /**************************************/
 /*                                    */
 /*     P U B L I C      G L O B A L   */
@@ -386,6 +559,16 @@ struct public_port {
 
 	u32 link_change_count;
 
+	/* LLDP params */
+	struct lldp_config_params_s lldp_config_params[LLDP_MAX_LLDP_AGENTS];
+	struct lldp_status_params_s lldp_status_params[LLDP_MAX_LLDP_AGENTS];
+	struct lldp_system_tlvs_buffer_s system_lldp_tlvs_buf;
+
+	/* DCBX related MIB */
+	struct dcbx_local_params local_admin_dcbx_mib;
+	struct dcbx_mib remote_dcbx_mib;
+	struct dcbx_mib operational_dcbx_mib;
+
 	/* FC_NPIV table offset & size in NVRAM value of 0 means not present */
 	u32 fc_npiv_nvram_tbl_addr;
 	u32 fc_npiv_nvram_tbl_size;
@@ -629,6 +812,9 @@ struct public_drv_mb {
 	/*        - DONT_CARE - Don't flap the link if up */
 #define DRV_MSG_CODE_LINK_RESET			0x23000000
 
+	/* LLDP commands */
+#define DRV_MSG_CODE_SET_LLDP                   0x24000000
+#define DRV_MSG_CODE_SET_DCBX                   0x25000000
 	/* OneView feature driver HSI */
 #define DRV_MSG_CODE_OV_UPDATE_CURR_CFG		0x26000000
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
@@ -700,6 +886,14 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_INIT_PHY_FORCE		0x00000001
 #define DRV_MB_PARAM_INIT_PHY_DONT_CARE		0x00000002
 
+	/* LLDP / DCBX params */
+#define DRV_MB_PARAM_LLDP_SEND_MASK		0x00000001
+#define DRV_MB_PARAM_LLDP_SEND_SHIFT		0
+#define DRV_MB_PARAM_LLDP_AGENT_MASK		0x00000006
+#define DRV_MB_PARAM_LLDP_AGENT_SHIFT		1
+#define DRV_MB_PARAM_DCBX_NOTIFY_MASK		0x00000008
+#define DRV_MB_PARAM_DCBX_NOTIFY_SHIFT		3
+
 #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_MASK	0x000000FF
 #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_SHIFT	0
 
@@ -806,6 +1000,9 @@ struct public_drv_mb {
 #define FW_MSG_CODE_INIT_PHY_DONE		0x21200000
 #define FW_MSG_CODE_INIT_PHY_ERR_INVALID_ARGS	0x21300000
 #define FW_MSG_CODE_LINK_RESET_DONE		0x23000000
+#define FW_MSG_CODE_SET_LLDP_DONE               0x24000000
+#define FW_MSG_CODE_SET_LLDP_UNSUPPORTED_AGENT  0x24010000
+#define FW_MSG_CODE_SET_DCBX_DONE               0x25000000
 #define FW_MSG_CODE_UPDATE_CURR_CFG_DONE        0x26000000
 #define FW_MSG_CODE_UPDATE_BUS_NUM_DONE         0x27000000
 #define FW_MSG_CODE_UPDATE_BOOT_PROGRESS_DONE   0x28000000
@@ -916,6 +1113,9 @@ enum MFW_DRV_MSG_TYPE {
 	MFW_DRV_MSG_LINK_CHANGE,
 	MFW_DRV_MSG_FLR_FW_ACK_FAILED,
 	MFW_DRV_MSG_VF_DISABLED,
+	MFW_DRV_MSG_LLDP_DATA_UPDATED,
+	MFW_DRV_MSG_DCBX_REMOTE_MIB_UPDATED,
+	MFW_DRV_MSG_DCBX_OPERATIONAL_MIB_UPDATED,
 	MFW_DRV_MSG_ERROR_RECOVERY,
 	MFW_DRV_MSG_BW_UPDATE,
 	MFW_DRV_MSG_S_TAG_UPDATE,
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 907994b..7f1a60d 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -553,6 +553,12 @@ struct nvm_cfg1_port {
 #define NVM_CFG1_PORT_LED_MODE_PHY10                            0xD
 #define NVM_CFG1_PORT_LED_MODE_PHY11                            0xE
 #define NVM_CFG1_PORT_LED_MODE_PHY12                            0xF
+#define NVM_CFG1_PORT_DCBX_MODE_MASK                            0x000F0000
+#define NVM_CFG1_PORT_DCBX_MODE_OFFSET                          16
+#define NVM_CFG1_PORT_DCBX_MODE_DISABLED                        0x0
+#define NVM_CFG1_PORT_DCBX_MODE_IEEE                            0x1
+#define NVM_CFG1_PORT_DCBX_MODE_CEE                             0x2
+#define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC                         0x3
 #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK            0x00F00000
 #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET          20
 #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET        0x1
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 46d4b6c..d5046a2 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,8 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 	if (IS_PF(edev)) {
 		info->num_queues = 0;
 		for_each_hwfn(edev, i)
-		    info->num_queues +=
-		    FEAT_NUM(&edev->hwfns[i], ECORE_PF_L2_QUE);
+			info->num_queues +=
+			FEAT_NUM(&edev->hwfns[i], ECORE_PF_L2_QUE);
 
 		info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN);
 
@@ -339,7 +339,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 					      &info->num_vlan_filters);
 
 		ecore_vf_get_port_mac(&edev->hwfns[0],
-				      (uint8_t *) &info->port_mac);
+				      (uint8_t *)&info->port_mac);
 	}
 
 	qed_fill_dev_info(edev, &info->common);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index f76f42c..f08c7d5 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -603,7 +603,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 
 	if (!qdev->num_rss) {
 		DP_ERR(edev,
-		       "Cannot update V-VPORT as active as"
+		       "Cannot update V-VPORT as active as "
 		       "there are no Rx queues\n");
 		return -EINVAL;
 	}
-- 
1.7.10.3
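
A note on the DCBX/LLDP definitions above: the HSI packs several fields
into each u32, with a *_MASK/*_SHIFT macro pair describing every field.
A minimal sketch of that convention, with hypothetical helper names (the
base driver has its own accessors; the DCBX_* defines are assumed to be
in scope from the header above):

    #include <stdint.h>

    /* Hypothetical helpers: extract or insert a field described by a
     * MASK/SHIFT macro pair, e.g. DCBX_ETS_MAX_TCS_{MASK,SHIFT}. */
    #define HSI_FIELD_GET(val, field) \
            (((val) & field##_MASK) >> field##_SHIFT)
    #define HSI_FIELD_SET(val, field, fv) \
            (((val) & ~field##_MASK) | \
             (((uint32_t)(fv) << field##_SHIFT) & field##_MASK))

    /* Example: read the max TC count out of dcbx_ets_feature.flags. */
    static inline uint32_t dcbx_ets_max_tcs(uint32_t flags)
    {
            return HSI_FIELD_GET(flags, DCBX_ETS_MAX_TCS);
    }

Since dcbx_features and dcbx_app_priority_feature are marked "FW
structure in BE", a little-endian host would byte-swap each u32 (be32 to
cpu) before applying the masks.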


* [dpdk-dev] [PATCH v2 10/10] qede: enable PMD build
  2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
                   ` (7 preceding siblings ...)
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 09/10] qede: Add DCBX support Rasesh Mody
@ 2016-03-10 13:45 ` Rasesh Mody
  8 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 13:45 UTC (permalink / raw)
  To: dev; +Cc: sony.chacko

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
 config/common_base    |   14 ++++++++++++++
 drivers/net/Makefile  |    1 +
 mk/rte.app.mk         |    2 ++
 scripts/test-build.sh |    1 +
 4 files changed, 18 insertions(+)

diff --git a/config/common_base b/config/common_base
index 1af28c8..6b10c65 100644
--- a/config/common_base
+++ b/config/common_base
@@ -285,6 +285,20 @@ CONFIG_RTE_LIBRTE_PMD_BOND=y
 CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB=n
 CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB_L1=n
 
+# QLogic 25G/40G PMD
+#
+CONFIG_RTE_LIBRTE_QEDE_PMD=y
+CONFIG_RTE_LIBRTE_QEDE_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO=n
+CONFIG_RTE_LIBRTE_QEDE_DEBUG_ECORE=n
+CONFIG_RTE_LIBRTE_QEDE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_QEDE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_QEDE_RX_COAL_US=24
+CONFIG_RTE_LIBRTE_QEDE_TX_COAL_US=48
+CONFIG_RTE_LIBRTE_QEDE_TX_SWITCHING=y
+# Provides path/name of the firmware file
+CONFIG_RTE_LIBRTE_QEDE_FW=n
+
 #
 # Compile software PMD backed by AF_PACKET sockets (Linux only)
 #
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 0c3393f..61d3f16 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,5 +51,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
+DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index daac09f..e350ba4 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD)       += -libverbs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2)   += -lsze2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
+_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lcrypto
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
@@ -143,6 +144,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lrte_pmd_mpipe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING)       += -lrte_pmd_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
 
diff --git a/scripts/test-build.sh b/scripts/test-build.sh
index 5cadc08..8436e41 100755
--- a/scripts/test-build.sh
+++ b/scripts/test-build.sh
@@ -115,6 +115,7 @@ config () # <directory> <target> <options>
 		test "$DPDK_DEP_ZLIB" != y || \
 		sed -ri          's,(BNX2X_PMD=)n,\1y,' $1/.config
 		sed -ri            's,(NFP_PMD=)n,\1y,' $1/.config
+		sed -ri          's,(QEDE_PMD=)n,\1y,' $1/.config
 		test "$DPDK_DEP_PCAP" != y || \
 		sed -ri               's,(PCAP=)n,\1y,' $1/.config
 		test -z "$AESNI_MULTI_BUFFER_LIB_PATH" || \
-- 
1.7.10.3
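
With this patch the PMD is enabled by default (CONFIG_RTE_LIBRTE_QEDE_PMD=y),
so common targets need no extra step; the test-build.sh hunk only flips it
back on for configurations that start from a stripped-down .config. A sketch
of the manual equivalent, assuming the usual T=/O= build flow of this DPDK
version (target name and build directory are examples):

    make config T=x86_64-native-linuxapp-gcc O=build
    sed -ri 's,(CONFIG_RTE_LIBRTE_QEDE_PMD=)n,\1y,' build/.config
    make O=build

Because rte.app.mk now adds -lz for this PMD, zlib development files must
be present on the build host for static linking to succeed.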


* Re: [dpdk-dev] [PATCH v2 02/10] qede: add documentation
  2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 02/10] qede: add documentation Rasesh Mody
@ 2016-03-10 13:49   ` Thomas Monjalon
  2016-03-10 17:17     ` Harish Patil
  0 siblings, 1 reply; 13+ messages in thread
From: Thomas Monjalon @ 2016-03-10 13:49 UTC (permalink / raw)
  To: Rasesh Mody, sony.chacko; +Cc: dev

2016-03-10 05:45, Rasesh Mody:
>  doc/guides/nics/index.rst |    1 +
>  doc/guides/nics/qede.rst  |  340 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 341 insertions(+)

It would be nice to see a new column in the matrix of overview.rst.


* Re: [dpdk-dev] [PATCH v2 02/10] qede: add documentation
  2016-03-10 13:49   ` Thomas Monjalon
@ 2016-03-10 17:17     ` Harish Patil
  2016-03-10 22:40       ` Rasesh Mody
  0 siblings, 1 reply; 13+ messages in thread
From: Harish Patil @ 2016-03-10 17:17 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Sony Chacko

>
>2016-03-10 05:45, Rasesh Mody:
>>  doc/guides/nics/index.rst |    1 +
>>  doc/guides/nics/qede.rst  |  340
>>+++++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 341 insertions(+)
>
>It would be nice to see a new column in the matrix of overview.rst.
>

Hi Thomas,
Yes, we had updated overview.rst with two new columns but missed
including that file when submitting the patches.
We will send a follow-on patch.

Thanks,
Harish



* Re: [dpdk-dev] [PATCH v2 02/10] qede: add documentation
  2016-03-10 17:17     ` Harish Patil
@ 2016-03-10 22:40       ` Rasesh Mody
  0 siblings, 0 replies; 13+ messages in thread
From: Rasesh Mody @ 2016-03-10 22:40 UTC (permalink / raw)
  To: Harish Patil, Thomas Monjalon; +Cc: dev, Sony Chacko

Hi Thomas,

> From: Harish Patil
> Sent: Thursday, March 10, 2016 9:18 AM
> >
> >2016-03-10 05:45, Rasesh Mody:
> >>  doc/guides/nics/index.rst |    1 +
> >>  doc/guides/nics/qede.rst  |  340
> >>+++++++++++++++++++++++++++++++++++++++++++++
> >>  2 files changed, 341 insertions(+)
> >
> >It would be nice to see a new column in the matrix of overview.rst.
> >
> 
> Hi Thomas,
> Yes, we had updated overview.rst with two new columns, but missed this file
> while submitting patches.
> Will send a follow-on patch.

A separate patch has been sent that adds the new columns (qede and
qedevf) to the matrix in overview.rst.

Thanks!
Rasesh



Thread overview: 13+ messages
-- links below jump to the message on this page --
2016-03-10 13:45 [dpdk-dev] [PATCH v2 00/10] qede: Add qede PMD Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 01/10] qede: add maintainers Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 02/10] qede: add documentation Rasesh Mody
2016-03-10 13:49   ` Thomas Monjalon
2016-03-10 17:17     ` Harish Patil
2016-03-10 22:40       ` Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 03/10] qede: Add license file Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 05/10] qede: Add core driver Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 06/10] qede: Add L2 support Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 07/10] qede: Add SRIOV support Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 08/10] qede: Add attention support Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 09/10] qede: Add DCBX support Rasesh Mody
2016-03-10 13:45 ` [dpdk-dev] [PATCH v2 10/10] qede: enable PMD build Rasesh Mody
