DPDK patches and discussions
* [RFC PATCH 00/29] cover letter for net/qdma PMD
@ 2022-07-06  7:51 Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 01/29] net/qdma: add net PMD template Aman Kumar
                   ` (29 more replies)
  0 siblings, 30 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch series provides a net PMD for VVDN's NR (5G) Hi-PHY solution over
the T1 telco card. These telco accelerator NIC cards are targeted at O-RAN DU
systems to offload inline NR Hi-PHY (split 7.2) operations. To the DU host,
the card typically appears as a basic NIC device. The device is based on
AMD/Xilinx's UltraScale MPSoC and RFSoC FPGAs, for which the inline Hi-PHY IP
is developed by VVDN Technologies Private Limited.
PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281

Hardware-specs:
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf

- This series is an RFC and targets DPDK v22.11.
- Currently, the PMD is supported only on x86_64 hosts.
- Build machine used: Fedora 36 with gcc 12.1.1
- The device communicates with the host over AMD/Xilinx's QDMA subsystem
  for the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
- The QDMA access library is part of the PMD [PATCH 06]
- The DPDK documentation (doc/guides/nics/*) for this device is WIP and
  will be included in the next version of the patchset.
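
Since the card appears to the host as a basic NIC device, every probed
function is expected to show up as a regular ethdev port. Below is a
minimal, illustrative sketch (not part of this series) of verifying
that from an application:

	#include <stdio.h>
	#include <rte_eal.h>
	#include <rte_ethdev.h>

	int main(int argc, char **argv)
	{
		uint16_t port;

		if (rte_eal_init(argc, argv) < 0)
			return -1;

		/* each probed T1/QDMA function appears as an ethdev port */
		RTE_ETH_FOREACH_DEV(port) {
			struct rte_eth_dev_info info;

			if (rte_eth_dev_info_get(port, &info) == 0)
				printf("port %u: driver %s\n",
					port, info.driver_name);
		}
		return 0;
	}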

Aman Kumar (29):
  net/qdma: add net PMD template
  maintainers: add maintainer for net/qdma PMD
  net/meson.build: add support to compile net qdma
  net/qdma: add logging support
  net/qdma: add device init and uninit functions
  net/qdma: add qdma access library
  net/qdma: add supported qdma version
  net/qdma: qdma hardware initialization
  net/qdma: define device modes and data structure
  net/qdma: add net PMD ops template
  net/qdma: add configure close and reset ethdev ops
  net/qdma: add routine for Rx queue initialization
  net/qdma: add callback support for Rx queue count
  net/qdma: add routine for Tx queue initialization
  net/qdma: add queue cleanup PMD ops
  net/qdma: add start and stop apis
  net/qdma: add Tx burst API
  net/qdma: add Tx queue reclaim routine
  net/qdma: add callback function for Tx desc status
  net/qdma: add Rx burst API
  net/qdma: add mailbox communication library
  net/qdma: mbox API adaptation in Rx/Tx init
  net/qdma: add support for VF interfaces
  net/qdma: add Rx/Tx queue setup routine for VF devices
  net/qdma: add basic PMD ops for VF
  net/qdma: add datapath burst API for VF
  net/qdma: add device specific APIs for export
  net/qdma: add additional debug APIs
  net/qdma: add stats PMD ops for PF and VF

 MAINTAINERS                                   |    4 +
 drivers/net/meson.build                       |    1 +
 drivers/net/qdma/meson.build                  |   44 +
 drivers/net/qdma/qdma.h                       |  354 +
 .../eqdma_soft_access/eqdma_soft_access.c     | 5832 ++++++++++++
 .../eqdma_soft_access/eqdma_soft_access.h     |  294 +
 .../eqdma_soft_access/eqdma_soft_reg.h        | 1211 +++
 .../eqdma_soft_access/eqdma_soft_reg_dump.c   | 3908 ++++++++
 .../net/qdma/qdma_access/qdma_access_common.c | 1271 +++
 .../net/qdma/qdma_access/qdma_access_common.h |  888 ++
 .../net/qdma/qdma_access/qdma_access_errors.h |   60 +
 .../net/qdma/qdma_access/qdma_access_export.h |  243 +
 .../qdma/qdma_access/qdma_access_version.h    |   24 +
 drivers/net/qdma/qdma_access/qdma_list.c      |   51 +
 drivers/net/qdma/qdma_access/qdma_list.h      |  109 +
 .../net/qdma/qdma_access/qdma_mbox_protocol.c | 2107 +++++
 .../net/qdma/qdma_access/qdma_mbox_protocol.h |  681 ++
 drivers/net/qdma/qdma_access/qdma_platform.c  |  224 +
 drivers/net/qdma/qdma_access/qdma_platform.h  |  156 +
 .../net/qdma/qdma_access/qdma_platform_env.h  |   32 +
 drivers/net/qdma/qdma_access/qdma_reg_dump.h  |   77 +
 .../net/qdma/qdma_access/qdma_resource_mgmt.c |  787 ++
 .../net/qdma/qdma_access/qdma_resource_mgmt.h |  201 +
 .../qdma_s80_hard_access.c                    | 5851 ++++++++++++
 .../qdma_s80_hard_access.h                    |  266 +
 .../qdma_s80_hard_access/qdma_s80_hard_reg.h  | 2031 +++++
 .../qdma_s80_hard_reg_dump.c                  | 7999 +++++++++++++++++
 .../qdma_soft_access/qdma_soft_access.c       | 6106 +++++++++++++
 .../qdma_soft_access/qdma_soft_access.h       |  280 +
 .../qdma_soft_access/qdma_soft_reg.h          |  570 ++
 drivers/net/qdma/qdma_common.c                |  531 ++
 drivers/net/qdma/qdma_devops.c                | 2009 +++++
 drivers/net/qdma/qdma_devops.h                |  526 ++
 drivers/net/qdma/qdma_ethdev.c                |  722 ++
 drivers/net/qdma/qdma_log.h                   |   16 +
 drivers/net/qdma/qdma_mbox.c                  |  400 +
 drivers/net/qdma/qdma_mbox.h                  |   47 +
 drivers/net/qdma/qdma_rxtx.c                  | 1538 ++++
 drivers/net/qdma/qdma_rxtx.h                  |   36 +
 drivers/net/qdma/qdma_user.c                  |  263 +
 drivers/net/qdma/qdma_user.h                  |  225 +
 drivers/net/qdma/qdma_version.h               |   23 +
 drivers/net/qdma/qdma_vf_ethdev.c             | 1033 +++
 drivers/net/qdma/qdma_xdebug.c                | 1072 +++
 drivers/net/qdma/rte_pmd_qdma.c               | 1728 ++++
 drivers/net/qdma/rte_pmd_qdma.h               |  689 ++
 drivers/net/qdma/version.map                  |   38 +
 47 files changed, 52558 insertions(+)
 create mode 100644 drivers/net/qdma/meson.build
 create mode 100644 drivers/net/qdma/qdma.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_errors.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_export.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_version.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform_env.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_reg_dump.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_common.c
 create mode 100644 drivers/net/qdma/qdma_devops.c
 create mode 100644 drivers/net/qdma/qdma_devops.h
 create mode 100644 drivers/net/qdma/qdma_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_log.h
 create mode 100644 drivers/net/qdma/qdma_mbox.c
 create mode 100644 drivers/net/qdma/qdma_mbox.h
 create mode 100644 drivers/net/qdma/qdma_rxtx.c
 create mode 100644 drivers/net/qdma/qdma_rxtx.h
 create mode 100644 drivers/net/qdma/qdma_user.c
 create mode 100644 drivers/net/qdma/qdma_user.h
 create mode 100644 drivers/net/qdma/qdma_version.h
 create mode 100644 drivers/net/qdma/qdma_vf_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_xdebug.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.h
 create mode 100644 drivers/net/qdma/version.map

-- 
2.36.1



* [RFC PATCH 01/29] net/qdma: add net PMD template
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 02/29] maintainers: add maintainer for net/qdma PMD Aman Kumar
                   ` (28 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add probe and remove function templates for the qdma PMD.
Define the supported PCI device table.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build   |  18 ++++++
 drivers/net/qdma/qdma_ethdev.c | 107 +++++++++++++++++++++++++++++++++
 drivers/net/qdma/version.map   |   3 +
 3 files changed, 128 insertions(+)
 create mode 100644 drivers/net/qdma/meson.build
 create mode 100644 drivers/net/qdma/qdma_ethdev.c
 create mode 100644 drivers/net/qdma/version.map

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
new file mode 100644
index 0000000000..fe9d2d48d7
--- /dev/null
+++ b/drivers/net/qdma/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021-2022 Xilinx, Inc. All rights reserved.
+# Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+endif
+if (not dpdk_conf.has('RTE_ARCH_X86_64'))
+    build = false
+    reason = 'only supported on x86_64'
+endif
+
+includes += include_directories('.')
+
+sources = files(
+        'qdma_ethdev.c',
+)
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
new file mode 100644
index 0000000000..35d7c88658
--- /dev/null
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <ethdev_pci.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static struct rte_pci_id qdma_pci_id_tbl[] = {
+#define RTE_PCI_DEV_ID_DECL(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
+#ifndef PCI_VENDOR_ID_VVDN
+#define PCI_VENDOR_ID_VVDN 0x1f44
+#endif
+
+	/** Gen 3 PF */
+	/** PCIe lane width x8 */
+	RTE_PCI_DEV_ID_DECL(PCI_VENDOR_ID_VVDN, 0x0201)	/** PF */
+
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+/**
+ * DPDK callback to register a PCI device.
+ *
+ * This function creates an Ethernet device for each port of a given
+ * PCI device.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+static int qdma_eth_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+
+	/* sanity checks */
+	if (dev == NULL)
+		return -EINVAL;
+	if (dev->data == NULL)
+		return -EINVAL;
+	if (dev->data->dev_private == NULL)
+		return -EINVAL;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* for secondary processes, we don't initialise any further as primary
+	 * has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to deregister PCI device.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
+{
+	/* sanity checks */
+	if (dev == NULL)
+		return -EINVAL;
+	/* only uninitialize in the primary process */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	return 0;
+}
+
+static int eth_qdma_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+				struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev, 0,
+						qdma_eth_dev_init);
+}
+
+/* Detach an ethdev interface */
+static int eth_qdma_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, qdma_eth_dev_uninit);
+}
+
+static struct rte_pci_driver rte_qdma_pmd = {
+	.id_table = qdma_pci_id_tbl,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = eth_qdma_pci_probe,
+	.remove = eth_qdma_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_qdma, rte_qdma_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_qdma, qdma_pci_id_tbl);
diff --git a/drivers/net/qdma/version.map b/drivers/net/qdma/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/net/qdma/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.36.1



* [RFC PATCH 02/29] maintainers: add maintainer for net/qdma PMD
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 01/29] net/qdma: add net PMD template Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 03/29] net/meson.build: add support to compile net qdma Aman Kumar
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add myself as maintainer for the net/qdma PMD.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 MAINTAINERS | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index bfeeb7d1b4..983899b29f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -906,6 +906,10 @@ F: drivers/net/bnx2x/
 F: doc/guides/nics/bnx2x.rst
 F: doc/guides/nics/features/bnx2x*.ini
 
+VVDN NR telco NIC PMD based on QDMA
+M: Aman Kumar <aman.kumar@vvdntech.in>
+F: drivers/net/qdma/
+
 Marvell QLogic qede PMD
 M: Rasesh Mody <rmody@marvell.com>
 M: Devendra Singh Rawat <dsinghrawat@marvell.com>
-- 
2.36.1



* [RFC PATCH 03/29] net/meson.build: add support to compile net qdma
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 01/29] net/qdma: add net PMD template Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 02/29] maintainers: add maintainer for net/qdma PMD Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 04/29] net/qdma: add logging support Aman Kumar
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Meson will by default attempt to compile the qdma net PMD.
The conditions under which qdma is compiled are added
in qdma's local meson.build.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/meson.build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index e35652fe63..ddf2a7e40d 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -48,6 +48,7 @@ drivers = [
         'octeontx_ep',
         'pcap',
         'pfe',
+        'qdma',
         'qede',
         'ring',
         'sfc',
-- 
2.36.1



* [RFC PATCH 04/29] net/qdma: add logging support
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (2 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 03/29] net/meson.build: add support to compile net qdma Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06 15:27   ` Stephen Hemminger
  2022-07-06  7:51 ` [RFC PATCH 05/29] net/qdma: add device init and uninit functions Aman Kumar
                   ` (25 subsequent siblings)
  29 siblings, 1 reply; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Define a logging macro for use across PMD files.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
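For reference, a usage sketch of the macro added in this patch; the
call site and 'qid' variable are illustrative:

	#include "qdma_log.h"

	PMD_DRV_LOG(INFO, "started queue %u", qid);
	/* expands to:
	 * rte_log(RTE_LOG_INFO, qdma_logtype_pmd,
	 *         "%s(): started queue %u\n", __func__, qid);
	 */
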
 drivers/net/qdma/qdma_ethdev.c |  1 +
 drivers/net/qdma/qdma_log.h    | 16 ++++++++++++++++
 2 files changed, 17 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_log.h

diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index 35d7c88658..8dbc7c4ac1 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -105,3 +105,4 @@ static struct rte_pci_driver rte_qdma_pmd = {
 
 RTE_PMD_REGISTER_PCI(net_qdma, rte_qdma_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_qdma, qdma_pci_id_tbl);
+RTE_LOG_REGISTER_DEFAULT(qdma_logtype_pmd, NOTICE);
diff --git a/drivers/net/qdma/qdma_log.h b/drivers/net/qdma/qdma_log.h
new file mode 100644
index 0000000000..e65b0a5d8c
--- /dev/null
+++ b/drivers/net/qdma/qdma_log.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#ifndef __QDMA_LOG_H__
+#define __QDMA_LOG_H__
+
+#include <rte_log.h>
+
+extern int qdma_logtype_pmd;
+#define PMD_DRV_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, qdma_logtype_pmd, "%s(): " \
+		fmt "\n", __func__, ## args)
+
+#endif /* ifndef __QDMA_LOG_H__ */
-- 
2.36.1



* [RFC PATCH 05/29] net/qdma: add device init and uninit functions
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (3 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 04/29] net/qdma: add logging support Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06 15:35   ` Stephen Hemminger
  2022-07-06  7:51 ` [RFC PATCH 06/29] net/qdma: add qdma access library Aman Kumar
                   ` (24 subsequent siblings)
  29 siblings, 1 reply; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Upon device initialization, initialize MAC and other
private data. Handle cleanup on uninit.
Define basic device/queue data structures.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
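For reference, a sketch of how an application passes the devargs parsed
in this patch; the PCI address is an example value and only keys
handled by qdma_check_kvargs() are shown:

	/* EAL bootstrap with qdma devargs (RTE_DIM is in rte_common.h) */
	char *eal_argv[] = {
		"app", "-a",
		"0000:01:00.0,desc_prefetch=1,cmpt_desc_len=16,"
		"trigger_mode=1,config_bar=0,c2h_byp_mode=0,h2c_byp_mode=0",
	};
	int ret = rte_eal_init(RTE_DIM(eal_argv), eal_argv);
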
 drivers/net/qdma/meson.build   |   1 +
 drivers/net/qdma/qdma.h        | 225 ++++++++++++++++++++++++
 drivers/net/qdma/qdma_common.c | 236 +++++++++++++++++++++++++
 drivers/net/qdma/qdma_ethdev.c | 310 ++++++++++++++++++++++++++++++++-
 4 files changed, 768 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/qdma/qdma.h
 create mode 100644 drivers/net/qdma/qdma_common.c

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index fe9d2d48d7..f0df5ef0d9 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -15,4 +15,5 @@ includes += include_directories('.')
 
 sources = files(
         'qdma_ethdev.c',
+        'qdma_common.c',
 )
diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
new file mode 100644
index 0000000000..4bc61d2a08
--- /dev/null
+++ b/drivers/net/qdma/qdma.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_H__
+#define __QDMA_H__
+
+#include <stdbool.h>
+#include <rte_dev.h>
+#include <rte_ethdev.h>
+#include <ethdev_driver.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_cycles.h>
+#include <rte_byteorder.h>
+#include <rte_memzone.h>
+#include <linux/pci.h>
+#include "qdma_log.h"
+
+#define QDMA_NUM_BARS          (6)
+#define DEFAULT_PF_CONFIG_BAR  (0)
+#define BAR_ID_INVALID         (-1)
+
+#define QDMA_FUNC_ID_INVALID    0xFFFF
+
+#define DEFAULT_TIMER_CNT_TRIG_MODE_TIMER	(5)
+
+enum dma_data_direction {
+	DMA_BIDIRECTIONAL = 0,
+	DMA_TO_DEVICE = 1,
+	DMA_FROM_DEVICE = 2,
+	DMA_NONE = 3,
+};
+
+enum reset_state_t {
+	RESET_STATE_IDLE,
+	RESET_STATE_RECV_PF_RESET_REQ,
+	RESET_STATE_RECV_PF_RESET_DONE,
+	RESET_STATE_INVALID
+};
+
+/* MM Write-back status structure */
+struct __rte_packed wb_status
+{
+	volatile uint16_t	pidx; /* in C2H WB */
+	volatile uint16_t	cidx; /* Consumer-index */
+	uint32_t	rsvd2; /* Reserved. */
+};
+
+struct qdma_pkt_stats {
+	uint64_t pkts;
+	uint64_t bytes;
+};
+
+/*
+ * Structure associated with each CMPT queue.
+ */
+struct qdma_cmpt_queue {
+	struct qdma_ul_cmpt_ring *cmpt_ring;
+	struct wb_status    *wb_status;
+	struct rte_eth_dev	*dev;
+
+	uint16_t	cmpt_desc_len;
+	uint16_t	nb_cmpt_desc;
+	uint32_t	queue_id; /* CMPT queue index. */
+
+	uint8_t		status:1;
+	uint8_t		st_mode:1; /* dma-mode: MM or ST */
+	uint8_t		dis_overflow_check:1;
+	uint8_t		func_id;
+	uint16_t	port_id; /* Device port identifier. */
+	int8_t		ringszidx;
+	int8_t		threshidx;
+	int8_t		timeridx;
+	int8_t		triggermode;
+	/* completion descriptor memzone */
+	const struct rte_memzone *cmpt_mz;
+};
+
+/**
+ * Structure associated with each RX queue.
+ */
+struct qdma_rx_queue {
+	struct rte_mempool	*mb_pool; /* mbuf pool to populate RX ring. */
+	void			*rx_ring; /* RX ring virtual address */
+	union qdma_ul_st_cmpt_ring	*cmpt_ring;
+	struct wb_status	*wb_status;
+	struct rte_mbuf		**sw_ring; /* address of RX software ring. */
+	struct rte_eth_dev	*dev;
+
+	uint16_t		rx_tail;
+	uint16_t		cmpt_desc_len;
+	uint16_t		rx_buff_size;
+	uint16_t		nb_rx_desc; /* number of RX descriptors. */
+	uint16_t		nb_rx_cmpt_desc;
+	uint32_t		queue_id; /* RX queue index. */
+	uint64_t		mbuf_initializer; /* value to init mbufs */
+
+	struct qdma_pkt_stats	stats;
+
+	uint16_t		port_id; /* Device port identifier. */
+	uint8_t			status:1;
+	uint8_t			err:1;
+	uint8_t			st_mode:1; /* dma-mode: MM or ST */
+	uint8_t			dump_immediate_data:1;
+	uint8_t			rx_deferred_start:1;
+	uint8_t			en_prefetch:1;
+	uint8_t			en_bypass:1;
+	uint8_t			en_bypass_prefetch:1;
+	uint8_t			dis_overflow_check:1;
+
+	uint8_t			func_id; /* Function id. */
+	uint32_t		ep_addr;
+
+	int8_t			ringszidx;
+	int8_t			cmpt_ringszidx;
+	int8_t			buffszidx;
+	int8_t			threshidx;
+	int8_t			timeridx;
+	int8_t			triggermode;
+
+	const struct rte_memzone *rx_mz;
+	/* C2H stream mode, completion descriptor result */
+	const struct rte_memzone *rx_cmpt_mz;
+};
+
+/**
+ * Structure associated with each TX queue.
+ */
+struct qdma_tx_queue {
+	void				*tx_ring; /* TX ring virtual address */
+	struct wb_status		*wb_status;
+	struct rte_mbuf			**sw_ring;/* SW ring virtual address */
+	struct rte_eth_dev		*dev;
+	uint16_t			tx_fl_tail;
+	uint16_t			tx_desc_pend;
+	uint16_t			nb_tx_desc; /* No of TX descriptors. */
+	rte_spinlock_t			pidx_update_lock;
+	uint64_t			offloads; /* Tx offloads */
+
+	uint8_t				st_mode:1;/* dma-mode: MM or ST */
+	uint8_t				tx_deferred_start:1;
+	uint8_t				en_bypass:1;
+	uint8_t				status:1;
+	uint16_t			port_id; /* Device port identifier. */
+	uint8_t				func_id; /* Function id. */
+	int8_t				ringszidx;
+
+	struct qdma_pkt_stats		stats;
+
+	uint64_t			ep_addr;
+	uint32_t			queue_id; /* TX queue index. */
+	uint32_t			num_queues; /* Number of TX queues. */
+	const struct rte_memzone	*tx_mz;
+};
+
+struct qdma_vf_info {
+	uint16_t	func_id;
+};
+
+struct queue_info {
+	uint32_t	queue_mode:1;
+	uint32_t	rx_bypass_mode:2;
+	uint32_t	tx_bypass_mode:1;
+	uint32_t	cmpt_desc_sz:7;
+	uint8_t		immediate_data_state:1;
+	uint8_t		dis_cmpt_ovf_chk:1;
+	uint8_t		en_prefetch:1;
+	uint8_t		timer_count;
+	int8_t		trigger_mode;
+};
+
+struct qdma_pci_dev {
+	int config_bar_idx;
+	int user_bar_idx;
+	int bypass_bar_idx;
+	void *bar_addr[QDMA_NUM_BARS]; /* memory mapped I/O addr for BARs */
+
+	/* Driver Attributes */
+	uint32_t qsets_en;  /* no. of queue pairs enabled */
+	uint32_t queue_base;
+	uint8_t func_id;  /* Function id */
+
+	/* DMA identifier used by the resource manager
+	 * for the DMA instances used by this driver
+	 */
+	uint32_t dma_device_index;
+
+	uint8_t cmpt_desc_len;
+	uint8_t c2h_bypass_mode;
+	uint8_t h2c_bypass_mode;
+	uint8_t trigger_mode;
+	uint8_t timer_count;
+
+	uint8_t dev_configured:1;
+	uint8_t is_vf:1;
+	uint8_t is_master:1;
+	uint8_t en_desc_prefetch:1;
+
+	/* Reset state */
+	uint8_t reset_in_progress;
+	enum reset_state_t reset_state;
+
+	/* Hardware version info */
+	uint32_t vivado_rel:4;
+	uint32_t rtl_version:4;
+	uint32_t device_type:4;
+	uint32_t ip_type:4;
+
+	struct queue_info *q_info;
+	uint8_t init_q_range;
+
+	struct qdma_vf_info *vfinfo;
+	uint8_t vf_online_count;
+
+	int16_t tx_qid_statid_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	int16_t rx_qid_statid_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+};
+
+int qdma_identify_bars(struct rte_eth_dev *dev);
+
+int qdma_check_kvargs(struct rte_devargs *devargs,
+			struct qdma_pci_dev *qdma_dev);
+
+#endif /* ifndef __QDMA_H__ */
diff --git a/drivers/net/qdma/qdma_common.c b/drivers/net/qdma/qdma_common.c
new file mode 100644
index 0000000000..c0c5162f0f
--- /dev/null
+++ b/drivers/net/qdma/qdma_common.c
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_common.h>
+#include <ethdev_pci.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include "qdma.h"
+
+#include <fcntl.h>
+#include <unistd.h>
+
+static int pfetch_check_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+	uint8_t desc_prefetch;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs desc_prefetch is: %s\n", value);
+	desc_prefetch = (uint8_t)strtoul(value, &end, 10);
+	if (desc_prefetch > 1) {
+		PMD_DRV_LOG(INFO, "QDMA devargs prefetch should be 1 or 0,"
+						  " setting to 1.\n");
+	}
+	qdma_dev->en_desc_prefetch = desc_prefetch ? 1 : 0;
+	return 0;
+}
+
+static int cmpt_desc_len_check_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs cmpt_desc_len is: %s\n", value);
+	qdma_dev->cmpt_desc_len =  (uint8_t)strtoul(value, &end, 10);
+	if (qdma_dev->cmpt_desc_len != 8 &&
+		qdma_dev->cmpt_desc_len != 16 &&
+		qdma_dev->cmpt_desc_len != 32 &&
+		qdma_dev->cmpt_desc_len != 64) {
+		PMD_DRV_LOG(INFO, "QDMA devargs incorrect cmpt_desc_len = %d "
+						  "specified\n",
+						  qdma_dev->cmpt_desc_len);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int trigger_mode_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs trigger mode: %s\n", value);
+	qdma_dev->trigger_mode =  (uint8_t)strtoul(value, &end, 10);
+
+	return 0;
+}
+
+static int config_bar_idx_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs trigger mode: %s\n", value);
+	qdma_dev->config_bar_idx =  (int)strtoul(value, &end, 10);
+
+	if (qdma_dev->config_bar_idx >= QDMA_NUM_BARS ||
+			qdma_dev->config_bar_idx < 0) {
+		PMD_DRV_LOG(INFO, "QDMA devargs config bar idx invalid: %d\n",
+				qdma_dev->config_bar_idx);
+		return -1;
+	}
+	return 0;
+}
+
+static int c2h_byp_mode_check_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs c2h_byp_mode is: %s\n", value);
+	qdma_dev->c2h_bypass_mode =  (uint8_t)strtoul(value, &end, 10);
+
+	return 0;
+}
+
+static int h2c_byp_mode_check_handler(__rte_unused const char *key,
+					const char *value,  void *opaque)
+{
+	struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
+	char *end = NULL;
+
+	PMD_DRV_LOG(INFO, "QDMA devargs h2c_byp_mode is: %s\n", value);
+	qdma_dev->h2c_bypass_mode =  (uint8_t)strtoul(value, &end, 10);
+
+	if (qdma_dev->h2c_bypass_mode > 1) {
+		PMD_DRV_LOG(INFO, "QDMA devargs incorrect"
+				" h2c_byp_mode =%d specified\n",
+					qdma_dev->h2c_bypass_mode);
+		return -1;
+	}
+
+	return 0;
+}
+
+/* Process all the devargs */
+int qdma_check_kvargs(struct rte_devargs *devargs,
+						struct qdma_pci_dev *qdma_dev)
+{
+	struct rte_kvargs *kvlist;
+	const char *pfetch_key = "desc_prefetch";
+	const char *cmpt_desc_len_key = "cmpt_desc_len";
+	const char *trigger_mode_key = "trigger_mode";
+	const char *config_bar_key = "config_bar";
+	const char *c2h_byp_mode_key = "c2h_byp_mode";
+	const char *h2c_byp_mode_key = "h2c_byp_mode";
+	int ret = 0;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	/* process the desc_prefetch */
+	if (rte_kvargs_count(kvlist, pfetch_key)) {
+		ret = rte_kvargs_process(kvlist, pfetch_key,
+						pfetch_check_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	/* process the cmpt_desc_len */
+	if (rte_kvargs_count(kvlist, cmpt_desc_len_key)) {
+		ret = rte_kvargs_process(kvlist, cmpt_desc_len_key,
+					 cmpt_desc_len_check_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	/* process the trigger_mode */
+	if (rte_kvargs_count(kvlist, trigger_mode_key)) {
+		ret = rte_kvargs_process(kvlist, trigger_mode_key,
+						trigger_mode_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	/* process the config bar */
+	if (rte_kvargs_count(kvlist, config_bar_key)) {
+		ret = rte_kvargs_process(kvlist, config_bar_key,
+					   config_bar_idx_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	/* process c2h_byp_mode */
+	if (rte_kvargs_count(kvlist, c2h_byp_mode_key)) {
+		ret = rte_kvargs_process(kvlist, c2h_byp_mode_key,
+					  c2h_byp_mode_check_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	/* process h2c_byp_mode */
+	if (rte_kvargs_count(kvlist, h2c_byp_mode_key)) {
+		ret = rte_kvargs_process(kvlist, h2c_byp_mode_key,
+					  h2c_byp_mode_check_handler, qdma_dev);
+		if (ret) {
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+	}
+
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+int qdma_identify_bars(struct rte_eth_dev *dev)
+{
+	int bar_len, i;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct qdma_pci_dev *dma_priv;
+
+	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
+
+	/* Config bar */
+	bar_len = pci_dev->mem_resource[dma_priv->config_bar_idx].len;
+	if (!bar_len) {
+		PMD_DRV_LOG(INFO, "QDMA config BAR index :%d is not enabled",
+					dma_priv->config_bar_idx);
+		return -1;
+	}
+
+	/* Find AXI Bridge Master bar(bypass bar) */
+	for (i = 0; i < QDMA_NUM_BARS; i++) {
+		bar_len = pci_dev->mem_resource[i].len;
+		if (!bar_len) /* Bar not enabled ? */
+			continue;
+		if (dma_priv->user_bar_idx != i &&
+				dma_priv->config_bar_idx != i) {
+			dma_priv->bypass_bar_idx = i;
+			break;
+		}
+	}
+
+	PMD_DRV_LOG(INFO, "QDMA config bar idx :%d\n",
+			dma_priv->config_bar_idx);
+	PMD_DRV_LOG(INFO, "QDMA AXI Master Lite bar idx :%d\n",
+			dma_priv->user_bar_idx);
+	PMD_DRV_LOG(INFO, "QDMA AXI Bridge Master bar idx :%d\n",
+			dma_priv->bypass_bar_idx);
+
+	return 0;
+}
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index 8dbc7c4ac1..c2ed6a52bb 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -3,11 +3,29 @@
  * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
  */
 
+#include <stdint.h>
+#include <stdbool.h>
+#include <sys/mman.h>
+#include <sys/fcntl.h>
+#include <dirent.h>
+#include <unistd.h>
+#include <string.h>
+#include <rte_memzone.h>
+#include <rte_string_fns.h>
+#include <rte_malloc.h>
 #include <ethdev_pci.h>
 #include <rte_dev.h>
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+
+#include "qdma.h"
+
+#define PCI_CONFIG_BRIDGE_DEVICE	(6)
+#define PCI_CONFIG_CLASS_CODE_SHIFT	(16)
+#define MAX_PCIE_CAPABILITY		(48)
 
 /*
  * The set of PCI devices this driver supports
@@ -25,6 +43,181 @@ static struct rte_pci_id qdma_pci_id_tbl[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+/* parse a sysfs file containing one integer value */
+static int parse_sysfs_value(const char *filename, uint32_t *val)
+{
+	FILE *f;
+	char buf[BUFSIZ];
+	char *end = NULL;
+
+	f = fopen(filename, "r");
+	if (f == NULL) {
+		PMD_DRV_LOG(ERR, "%s(): Failed to open sysfs file %s\n",
+				__func__, filename);
+		return -1;
+	}
+
+	if (fgets(buf, sizeof(buf), f) == NULL) {
+		PMD_DRV_LOG(ERR, "%s(): Failed to read sysfs value %s\n",
+			__func__, filename);
+		fclose(f);
+		return -1;
+	}
+	*val = (uint32_t)strtoul(buf, &end, 0);
+	if ((buf[0] == '\0') || end == NULL || (*end != '\n')) {
+		PMD_DRV_LOG(ERR, "%s(): Failed to parse sysfs value %s\n",
+				__func__, filename);
+		fclose(f);
+		return -1;
+	}
+	fclose(f);
+	return 0;
+}
+
+/* Split up a pci address into its constituent parts. */
+static int parse_pci_addr_format(const char *buf,
+		int bufsize, struct rte_pci_addr *addr)
+{
+	/* first split on ':' */
+	union splitaddr {
+		struct {
+			char *domain;
+			char *bus;
+			char *devid;
+			char *function;
+		};
+		/* last element-separator is "." not ":" */
+		char *str[PCI_FMT_NVAL];
+	} splitaddr;
+
+	char *buf_copy = strndup(buf, bufsize);
+	if (buf_copy == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to get pci address duplicate copy\n");
+		return -1;
+	}
+
+	if (rte_strsplit(buf_copy, bufsize, splitaddr.str, PCI_FMT_NVAL, ':')
+			!= PCI_FMT_NVAL - 1) {
+		PMD_DRV_LOG(ERR, "Failed to split pci address string\n");
+		goto error;
+	}
+
+	/* final split is on '.' between devid and function */
+	splitaddr.function = strchr(splitaddr.devid, '.');
+	if (splitaddr.function == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to split pci devid and function\n");
+		goto error;
+	}
+	*splitaddr.function++ = '\0';
+
+	/* now convert to int values */
+	addr->domain = strtoul(splitaddr.domain, NULL, 16);
+	addr->bus = strtoul(splitaddr.bus, NULL, 16);
+	addr->devid = strtoul(splitaddr.devid, NULL, 16);
+	addr->function = strtoul(splitaddr.function, NULL, 10);
+
+	free(buf_copy); /* free the copy made with strndup */
+	return 0;
+
+error:
+	free(buf_copy);
+	return -1;
+}
+
+/* Get max pci bus number from the corresponding pci bridge device */
+static int get_max_pci_bus_num(uint8_t start_bus, uint8_t *end_bus)
+{
+	char dirname[PATH_MAX];
+	char filename[PATH_MAX];
+	char cfgname[PATH_MAX];
+	struct rte_pci_addr addr;
+	struct dirent *dp;
+	uint32_t pci_class_code;
+	uint8_t sec_bus_num, sub_bus_num;
+	DIR *dir;
+	int ret, fd;
+
+	/* Initialize end bus number to zero */
+	*end_bus = 0;
+
+	/* Open pci devices directory */
+	dir = opendir(rte_pci_get_sysfs_path());
+	if (dir == NULL) {
+		PMD_DRV_LOG(ERR, "%s(): opendir failed\n",
+			__func__);
+		return -1;
+	}
+
+	while ((dp = readdir(dir)) != NULL) {
+		if (dp->d_name[0] == '.')
+			continue;
+
+		/* Split pci address to get bus, devid and function numbers */
+		if (parse_pci_addr_format(dp->d_name,
+				sizeof(dp->d_name), &addr) != 0)
+			continue;
+
+		snprintf(dirname, sizeof(dirname), "%s/%s",
+				rte_pci_get_sysfs_path(), dp->d_name);
+
+		/* get class code */
+		snprintf(filename, sizeof(filename), "%s/class", dirname);
+		if (parse_sysfs_value(filename, &pci_class_code) < 0) {
+			PMD_DRV_LOG(ERR, "Failed to get pci class code\n");
+			goto error;
+		}
+
+		/* Get max pci number from pci bridge device */
+		if ((((pci_class_code >> PCI_CONFIG_CLASS_CODE_SHIFT) & 0xFF) ==
+				PCI_CONFIG_BRIDGE_DEVICE)) {
+			snprintf(cfgname, sizeof(cfgname),
+					"%s/config", dirname);
+			fd = open(cfgname, O_RDWR);
+			if (fd < 0) {
+				PMD_DRV_LOG(ERR, "Failed to open %s\n",
+					cfgname);
+				goto error;
+			}
+
+			/* get secondary bus number */
+			ret = pread(fd, &sec_bus_num, sizeof(uint8_t),
+						PCI_SECONDARY_BUS);
+			if (ret == -1) {
+				PMD_DRV_LOG(ERR, "Failed to read secondary bus number\n");
+				close(fd);
+				goto error;
+			}
+
+			/* get subordinate bus number */
+			ret = pread(fd, &sub_bus_num, sizeof(uint8_t),
+						PCI_SUBORDINATE_BUS);
+			if (ret == -1) {
+				PMD_DRV_LOG(ERR, "Failed to read subordinate bus number\n");
+				close(fd);
+				goto error;
+			}
+
+			/* Get max bus number by checking if given bus number
+			 * falls in between secondary and subordinate bus
+			 * numbers of this pci bridge device.
+			 */
+			if (start_bus >= sec_bus_num &&
+			    start_bus <= sub_bus_num) {
+				*end_bus = sub_bus_num;
+				close(fd);
+				closedir(dir);
+				return 0;
+			}
+
+			close(fd);
+		}
+	}
+
+error:
+	closedir(dir);
+	return -1;
+}
+
 /**
  * DPDK callback to register a PCI device.
  *
@@ -39,7 +232,12 @@ static struct rte_pci_id qdma_pci_id_tbl[] = {
  */
 static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 {
+	struct qdma_pci_dev *dma_priv;
+	uint8_t *baseaddr;
+	int i, idx, ret;
 	struct rte_pci_device *pci_dev;
+	uint16_t num_vfs;
+	uint8_t max_pci_bus = 0;
 
 	/* sanity checks */
 	if (dev == NULL)
@@ -59,6 +257,88 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
+	/* allocate space for a single Ethernet MAC address */
+	dev->data->mac_addrs = rte_zmalloc("qdma", RTE_ETHER_ADDR_LEN * 1, 0);
+	if (dev->data->mac_addrs == NULL)
+		return -ENOMEM;
+
+	/* Copy a dummy Ethernet MAC address for the QDMA device.
+	 * This will change on a real NIC device.
+	 * TODO: Read MAC from EEPROM
+	 */
+	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
+		dev->data->mac_addrs[0].addr_bytes[i] = 0x15 + i;
+
+	/* Init system & device */
+	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
+	dma_priv->is_vf = 0;
+	dma_priv->is_master = 0;
+	dma_priv->vf_online_count = 0;
+	dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_TIMER;
+
+	dma_priv->en_desc_prefetch = 0; /* Keep prefetch default to 0 */
+	dma_priv->cmpt_desc_len = 0;
+	dma_priv->c2h_bypass_mode = 0;
+	dma_priv->h2c_bypass_mode = 0;
+
+	dma_priv->config_bar_idx = DEFAULT_PF_CONFIG_BAR;
+	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
+	dma_priv->user_bar_idx = BAR_ID_INVALID;
+
+	/* Check and handle device devargs */
+	if (qdma_check_kvargs(dev->device->devargs, dma_priv)) {
+		PMD_DRV_LOG(INFO, "devargs failed\n");
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	/* Store BAR address and length of Config BAR */
+	baseaddr = (uint8_t *)
+			pci_dev->mem_resource[dma_priv->config_bar_idx].addr;
+	dma_priv->bar_addr[dma_priv->config_bar_idx] = baseaddr;
+
+	idx = qdma_identify_bars(dev);
+	if (idx < 0) {
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	/* Store BAR address and length of AXI Master Lite BAR(user bar) */
+	if (dma_priv->user_bar_idx >= 0) {
+		baseaddr = (uint8_t *)
+			    pci_dev->mem_resource[dma_priv->user_bar_idx].addr;
+		dma_priv->bar_addr[dma_priv->user_bar_idx] = baseaddr;
+	}
+
+	PMD_DRV_LOG(INFO, "QDMA device driver probe:");
+
+	ret = get_max_pci_bus_num(pci_dev->addr.bus, &max_pci_bus);
+	if (ret != 0 && !max_pci_bus) {
+		PMD_DRV_LOG(ERR, "Failed to get max pci bus number\n");
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+	PMD_DRV_LOG(INFO, "PCI max bus number : 0x%x", max_pci_bus);
+
+	if (!dma_priv->reset_in_progress) {
+		num_vfs = pci_dev->max_vfs;
+		if (num_vfs) {
+			dma_priv->vfinfo = rte_zmalloc("vfinfo",
+				sizeof(struct qdma_vf_info) * num_vfs, 0);
+			if (dma_priv->vfinfo == NULL) {
+				PMD_DRV_LOG(ERR, "Cannot allocate memory for private VF info\n");
+				rte_free(dev->data->mac_addrs);
+				return -ENOMEM;
+			}
+
+			/* Mark all VFs with invalid function id mapping */
+			for (i = 0; i < num_vfs; i++)
+				dma_priv->vfinfo[i].func_id =
+					QDMA_FUNC_ID_INVALID;
+		}
+	}
+
+	dma_priv->reset_in_progress = 0;
+
 	return 0;
 }
 
@@ -73,20 +353,42 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
  */
 static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 {
-	/* sanity checks */
-	if (dev == NULL)
-		return -EINVAL;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
 	/* only uninitialize in the primary process */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	dev->data->nb_rx_queues = 0;
+	dev->data->nb_tx_queues = 0;
+
+	if (!qdma_dev->reset_in_progress &&
+			qdma_dev->vfinfo != NULL) {
+		rte_free(qdma_dev->vfinfo);
+		qdma_dev->vfinfo = NULL;
+	}
+
+	if (dev->data->mac_addrs != NULL) {
+		rte_free(dev->data->mac_addrs);
+		dev->data->mac_addrs = NULL;
+	}
+
+	if (qdma_dev->q_info != NULL) {
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+	}
+
 	return 0;
 }
 
 static int eth_qdma_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 				struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, 0,
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+						sizeof(struct qdma_pci_dev),
 						qdma_eth_dev_init);
 }
 
-- 
2.36.1



* [RFC PATCH 06/29] net/qdma: add qdma access library
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (4 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 05/29] net/qdma: add device init and uninit functions Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 07/29] net/qdma: add supported qdma version Aman Kumar
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add the QDMA hardware access library.
Modify qdma meson.build to compile the newly added files.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
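A note on the register field masks used throughout this library: they
follow the usual Linux-style GENMASK(h, l) convention, i.e. all bits
from l up to h inclusive are set. A minimal 32-bit sketch, assuming the
library's own definition (expected in qdma_platform_env.h) is
equivalent:

	#define GENMASK(h, l) \
		(((~0U) >> (31 - (h))) & ((~0U) << (l)))

	/* e.g. QDMA_GLBL2_PF0_BAR_MAP_MASK == GENMASK(5, 0) == 0x3F */
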
 drivers/net/qdma/meson.build                  |   14 +
 .../eqdma_soft_access/eqdma_soft_access.c     | 5832 ++++++++++++
 .../eqdma_soft_access/eqdma_soft_access.h     |  294 +
 .../eqdma_soft_access/eqdma_soft_reg.h        | 1211 +++
 .../eqdma_soft_access/eqdma_soft_reg_dump.c   | 3908 ++++++++
 .../net/qdma/qdma_access/qdma_access_common.c | 1271 +++
 .../net/qdma/qdma_access/qdma_access_common.h |  888 ++
 .../net/qdma/qdma_access/qdma_access_errors.h |   60 +
 .../net/qdma/qdma_access/qdma_access_export.h |  243 +
 .../qdma/qdma_access/qdma_access_version.h    |   24 +
 drivers/net/qdma/qdma_access/qdma_list.c      |   51 +
 drivers/net/qdma/qdma_access/qdma_list.h      |  109 +
 .../net/qdma/qdma_access/qdma_mbox_protocol.c | 2107 +++++
 .../net/qdma/qdma_access/qdma_mbox_protocol.h |  681 ++
 drivers/net/qdma/qdma_access/qdma_platform.c  |  224 +
 drivers/net/qdma/qdma_access/qdma_platform.h  |  156 +
 .../net/qdma/qdma_access/qdma_platform_env.h  |   32 +
 drivers/net/qdma/qdma_access/qdma_reg_dump.h  |   77 +
 .../net/qdma/qdma_access/qdma_resource_mgmt.c |  787 ++
 .../net/qdma/qdma_access/qdma_resource_mgmt.h |  201 +
 .../qdma_s80_hard_access.c                    | 5851 ++++++++++++
 .../qdma_s80_hard_access.h                    |  266 +
 .../qdma_s80_hard_access/qdma_s80_hard_reg.h  | 2031 +++++
 .../qdma_s80_hard_reg_dump.c                  | 7999 +++++++++++++++++
 .../qdma_soft_access/qdma_soft_access.c       | 6106 +++++++++++++
 .../qdma_soft_access/qdma_soft_access.h       |  280 +
 .../qdma_soft_access/qdma_soft_reg.h          |  570 ++
 27 files changed, 41273 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_errors.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_export.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_version.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform_env.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_reg_dump.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index f0df5ef0d9..99076e1ebf 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -12,8 +12,22 @@ if (not dpdk_conf.has('RTE_ARCH_X86_64'))
 endif
 
 includes += include_directories('.')
+includes += include_directories('qdma_access')
+includes += include_directories('qdma_access/qdma_soft_access')
+includes += include_directories('qdma_access/eqdma_soft_access')
+includes += include_directories('qdma_access/qdma_s80_hard_access')
 
 sources = files(
         'qdma_ethdev.c',
         'qdma_common.c',
+        'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
+        'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
+        'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
+        'qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c',
+        'qdma_access/qdma_soft_access/qdma_soft_access.c',
+        'qdma_access/qdma_list.c',
+        'qdma_access/qdma_resource_mgmt.c',
+        'qdma_access/qdma_mbox_protocol.c',
+        'qdma_access/qdma_access_common.c',
+        'qdma_access/qdma_platform.c',
 )
diff --git a/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
new file mode 100644
index 0000000000..38e0a7488d
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
@@ -0,0 +1,5832 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "eqdma_soft_access.h"
+#include "eqdma_soft_reg.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "eqdma_soft_access.tmh"
+#endif
+
+/** EQDMA Context array size */
+#define EQDMA_SW_CONTEXT_NUM_WORDS           8
+#define EQDMA_HW_CONTEXT_NUM_WORDS           2
+#define EQDMA_PFETCH_CONTEXT_NUM_WORDS       2
+#define EQDMA_CR_CONTEXT_NUM_WORDS           1
+#define EQDMA_CMPT_CONTEXT_NUM_WORDS         6
+#define EQDMA_IND_INTR_CONTEXT_NUM_WORDS     4
+
+#define EQDMA_VF_USER_BAR_ID                 2
+
+#define EQDMA_REG_GROUP_1_START_ADDR	0x000
+#define EQDMA_REG_GROUP_2_START_ADDR	0x804
+#define EQDMA_REG_GROUP_3_START_ADDR	0xB00
+#define EQDMA_REG_GROUP_4_START_ADDR	0x5014
+
+#define EQDMA_TOTAL_LEAF_ERROR_AGGREGATORS 9
+#define EQDMA_GLBL_TRQ_ERR_ALL_MASK	0XB3
+#define EQDMA_GLBL_DSC_ERR_ALL_MASK	0X1F9037E
+#define EQDMA_C2H_ERR_ALL_MASK		0X3F6DF
+#define EQDMA_C2H_FATAL_ERR_ALL_MASK	0X1FDF1B
+#define EQDMA_H2C_ERR_ALL_MASK		0X3F
+#define EQDMA_SBE_ERR_ALL_MASK		0XFFFFFFFF
+#define EQDMA_DBE_ERR_ALL_MASK		0XFFFFFFFF
+
+/* H2C Throttle settings */
+#define EQDMA_H2C_THROT_DATA_THRESH       0x5000
+#define EQDMA_THROT_EN_DATA               1
+#define EQDMA_THROT_EN_REQ                0
+#define EQDMA_H2C_THROT_REQ_THRESH        0xC0
+
+
+/** Auxiliary Bitmasks for fields spanning multiple words */
+#define EQDMA_SW_CTXT_PASID_GET_H_MASK              GENMASK(21, 12)
+#define EQDMA_SW_CTXT_PASID_GET_L_MASK              GENMASK(11, 0)
+#define EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_H_MASK    GENMASK_ULL(63, 53)
+#define EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_M_MASK    GENMASK_ULL(52, 21)
+#define EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_L_MASK    GENMASK_ULL(20, 0)
+#define EQDMA_CMPL_CTXT_PASID_GET_H_MASK            GENMASK(21, 9)
+#define EQDMA_CMPL_CTXT_PASID_GET_L_MASK            GENMASK(8, 0)
+#define EQDMA_INTR_CTXT_PASID_GET_H_MASK            GENMASK(21, 9)
+#define EQDMA_INTR_CTXT_PASID_GET_L_MASK            GENMASK(8, 0)
+
+
+#define EQDMA_OFFSET_GLBL2_PF_BARLITE_EXT	0x10C
+
+#define QDMA_OFFSET_GLBL2_PF_BARLITE_INT	0x104
+#define QDMA_GLBL2_PF3_BAR_MAP_MASK		GENMASK(23, 18)
+#define QDMA_GLBL2_PF2_BAR_MAP_MASK		GENMASK(17, 12)
+#define QDMA_GLBL2_PF1_BAR_MAP_MASK		GENMASK(11, 6)
+#define QDMA_GLBL2_PF0_BAR_MAP_MASK		GENMASK(5, 0)
+
+#define EQDMA_GLBL2_DBG_MODE_EN_MASK		BIT(4)
+#define EQDMA_GLBL2_DESC_ENG_MODE_MASK		GENMASK(3, 2)
+#define EQDMA_GLBL2_FLR_PRESENT_MASK		BIT(1)
+#define EQDMA_GLBL2_MAILBOX_EN_MASK		BIT(0)
+
+static void eqdma_hw_st_h2c_err_process(void *dev_hndl);
+static void eqdma_hw_st_c2h_err_process(void *dev_hndl);
+static void eqdma_hw_desc_err_process(void *dev_hndl);
+static void eqdma_hw_trq_err_process(void *dev_hndl);
+static void eqdma_hw_ram_sbe_err_process(void *dev_hndl);
+static void eqdma_hw_ram_dbe_err_process(void *dev_hndl);
+
+static struct eqdma_hw_err_info eqdma_err_info[EQDMA_ERRS_ALL] = {
+	/* Descriptor errors */
+	{
+		EQDMA_DSC_ERR_POISON,
+		"Poison error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_POISON_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_UR_CA,
+		"Unsupported request or completer aborted error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_UR_CA_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_BCNT,
+		"Unexpected Byte count in completion error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_BCNT_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_PARAM,
+		"Parameter mismatch error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_PARAM_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_ADDR,
+		"Address mismatch error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_ADDR_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_TAG,
+		"Unexpected tag error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_TAG_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_FLR,
+		"FLR error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_FLR_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_TIMEOUT,
+		"Timed out error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_TIMEOUT_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_DAT_POISON,
+		"Poison data error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DAT_POISON_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_FLR_CANCEL,
+		"Descriptor fetch cancelled due to FLR error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_FLR_CANCEL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_DMA,
+		"DMA engine error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DMA_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_DSC,
+		"Invalid PIDX update error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DSC_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_RQ_CANCEL,
+		"Descriptor fetch cancelled due to disable register status error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_RQ_CANCEL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_DBE,
+		"UNC_ERR_RAM_DBE error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DBE_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_SBE,
+		"UNC_ERR_RAM_SBE error",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_SBE_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+	{
+		EQDMA_DSC_ERR_ALL,
+		"All Descriptor errors",
+		EQDMA_GLBL_DSC_ERR_MSK_ADDR,
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		EQDMA_GLBL_DSC_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&eqdma_hw_desc_err_process
+	},
+
+	/* TRQ errors */
+	{
+		EQDMA_TRQ_ERR_CSR_UNMAPPED,
+		"Access targeted unmapped register space via CSR pathway error",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_CSR_UNMAPPED_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_VF_ACCESS,
+		"VF attempted to access Global register space or Function map",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_TCP_CSR_TIMEOUT,
+		"Timeout on request to dma internal csr register",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_TCP_CSR_TIMEOUT_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_QSPC_UNMAPPED,
+		"Access targeted unmapped register via queue space pathway",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_QSPC_UNMAPPED_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_QID_RANGE,
+		"Qid range error",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_QID_RANGE_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_TCP_QSPC_TIMEOUT,
+		"Timeout on request to dma internal queue space register",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_TCP_QSPC_TIMEOUT_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+	{
+		EQDMA_TRQ_ERR_ALL,
+		"All TRQ errors",
+		EQDMA_GLBL_TRQ_ERR_MSK_ADDR,
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		EQDMA_GLBL_TRQ_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&eqdma_hw_trq_err_process
+	},
+
+	/* C2H Errors */
+	{
+		EQDMA_ST_C2H_ERR_MTY_MISMATCH,
+		"MTY mismatch error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_MTY_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_LEN_MISMATCH,
+		"Packet length mismatch error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_LEN_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_SH_CMPT_DSC,
+		"A Shared CMPT queue has encountered a descriptor error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_SH_CMPT_DSC_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_QID_MISMATCH,
+		"Qid mismatch error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_QID_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_DESC_RSP_ERR,
+		"Descriptor error bit set",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_DESC_RSP_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+		"Data parity error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_MSI_INT_FAIL,
+		"MSI got a fail response error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_MSI_INT_FAIL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_ERR_DESC_CNT,
+		"Descriptor count error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_ERR_DESC_CNT_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+		"Port id in packet and pfetch ctxt mismatch error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_CMPT_INV_Q_ERR,
+		"Writeback on invalid queue error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_INV_Q_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_CMPT_QFULL_ERR,
+		"Completion queue gets full error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_QFULL_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_CMPT_CIDX_ERR,
+		"Bad CIDX update by the software error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_CIDX_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_CMPT_PRTY_ERR,
+		"C2H completion Parity error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_PRTY_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_AVL_RING_DSC,
+		"Available ring fetch returns descriptor with error",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_AVL_RING_DSC_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_HDR_ECC_UNC,
+		"Multi-bit ECC error on C2H packet header",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_HDR_ECC_UNC_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_HDR_ECC_COR,
+		"Single-bit ECC error on C2H packet header",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_HDR_ECC_COR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_C2H_ERR_ALL,
+		"All C2H errors",
+		EQDMA_C2H_ERR_MASK_ADDR,
+		EQDMA_C2H_ERR_STAT_ADDR,
+		EQDMA_C2H_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+
+	/* C2H fatal errors */
+	{
+		EQDMA_ST_FATAL_ERR_MTY_MISMATCH,
+		"Fatal MTY mismatch error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_LEN_MISMATCH,
+		"Fatal Len mismatch error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_QID_MISMATCH,
+		"Fatal Qid mismatch error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_CMPT_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_CMPT_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_CMPT_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_WPL_DATA_PAR,
+		"Fatal WPL data parity error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_AVL_RING_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_AVL_RING_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_HDR_ECC_UNC,
+		"Fatal multi-bit ECC error on C2H packet header",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_HDR_ECC_UNC_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+	{
+		EQDMA_ST_FATAL_ERR_ALL,
+		"All fatal errors",
+		EQDMA_C2H_FATAL_ERR_MASK_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		EQDMA_C2H_FATAL_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&eqdma_hw_st_c2h_err_process
+	},
+
+	/* H2C St errors */
+	{
+		EQDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+		"Zero length descriptor error",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_ZERO_LEN_DS_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_SDI_MRKR_REQ_MOP,
+		"A non-EOP descriptor received",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_NO_DMA_DSC,
+		"No DMA descriptor received error",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_NO_DMA_DS_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_SBE,
+		"Single bit error detected on H2C-ST data error",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_SBE_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_DBE,
+		"Double bit error detected on H2C-ST data error",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_DBE_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_PAR,
+		"Internal data parity error",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_PAR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+	{
+		EQDMA_ST_H2C_ERR_ALL,
+		"All H2C errors",
+		EQDMA_H2C_ERR_MASK_ADDR,
+		EQDMA_H2C_ERR_STAT_ADDR,
+		EQDMA_H2C_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&eqdma_hw_st_h2c_err_process
+	},
+
+	/* SBE errors */
+	{
+		EQDMA_SBE_1_ERR_RC_RRQ_EVEN_RAM,
+		"RC RRQ Even RAM single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		RAM_SBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_1_ERR_TAG_ODD_RAM,
+		"Tag Odd Ram single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		RAM_SBE_STS_1_A_TAG_ODD_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_1_ERR_TAG_EVEN_RAM,
+		"Tag Even Ram single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		RAM_SBE_STS_1_A_TAG_EVEN_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_1_ERR_PFCH_CTXT_CAM_RAM_0,
+		"Pfch Ctxt CAM RAM 0 single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_1_ERR_PFCH_CTXT_CAM_RAM_1,
+		"Pfch Ctxt CAM RAM 1 single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_1_ERR_ALL,
+		"All SBE Errors.",
+		EQDMA_RAM_SBE_MSK_1_A_ADDR,
+		EQDMA_RAM_SBE_STS_1_A_ADDR,
+		EQDMA_SBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_H2C0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_H2C1_DAT,
+		"H2C MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_H2C1_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_H2C2_DAT,
+		"H2C MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_H2C2_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_H2C3_DAT,
+		"H2C MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_H2C3_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_C2H0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_C2H1_DAT,
+		"C2H MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_C2H1_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_C2H2_DAT,
+		"C2H MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_C2H2_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_C2H3_DAT,
+		"C2H MM data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_C2H3_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_FUNC_MAP,
+		"Function map RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_FUNC_MAP_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_HW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CRD_RCV_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_SW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DSC_CPLI,
+		"Descriptor engine fetch completion information RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CPLI_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CPLD_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_MI_TL_SLV_FIFO_RAM,
+		"TL Slave FIFO RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_QID_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_WRB_COAL_DATA_RAM,
+		"Writeback Coalescing RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_INT_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_WRB_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PFCH_LL_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_PEND_FIFO_RAM,
+		"Pend FIFO RAM single bit ECC error",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PEND_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_RC_RRQ_ODD_RAM,
+		"RC RRQ Odd RAM single bit ECC error.",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_RC_RRQ_ODD_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+	{
+		EQDMA_SBE_ERR_ALL,
+		"All SBE errors",
+		EQDMA_RAM_SBE_MSK_A_ADDR,
+		EQDMA_RAM_SBE_STS_A_ADDR,
+		EQDMA_SBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&eqdma_hw_ram_sbe_err_process
+	},
+
+	/* DBE errors */
+	{
+		EQDMA_DBE_1_ERR_RC_RRQ_EVEN_RAM,
+		"RC RRQ Even RAM double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		RAM_DBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_1_ERR_TAG_ODD_RAM,
+		"Tag Odd Ram double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		RAM_DBE_STS_1_A_TAG_ODD_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_1_ERR_TAG_EVEN_RAM,
+		"Tag Even Ram double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		RAM_DBE_STS_1_A_TAG_EVEN_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_1_ERR_PFCH_CTXT_CAM_RAM_0,
+		"Pfch Ctxt CAM RAM 0 double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_1_ERR_PFCH_CTXT_CAM_RAM_1,
+		"Pfch Ctxt CAM RAM 1 double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_1_ERR_ALL,
+		"All DBE errors",
+		EQDMA_RAM_DBE_MSK_1_A_ADDR,
+		EQDMA_RAM_DBE_STS_1_A_ADDR,
+		EQDMA_DBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_H2C0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_H2C1_DAT,
+		"H2C MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_H2C1_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_H2C2_DAT,
+		"H2C MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_H2C2_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_H2C3_DAT,
+		"H2C MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_H2C3_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_C2H0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_C2H1_DAT,
+		"C2H MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_C2H1_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_C2H2_DAT,
+		"C2H MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_C2H2_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_C2H3_DAT,
+		"C2H MM data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_C2H3_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_FUNC_MAP,
+		"Function map RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_FUNC_MAP_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_HW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CRD_RCV_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_SW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DSC_CPLI,
+		"Descriptor engine fetch completion information RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CPLI_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CPLD_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_MI_TL_SLV_FIFO_RAM,
+		"TL Slave FIFO RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_QID_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_WRB_COAL_DATA_RAM,
+		"Writeback Coalescing RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_INT_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_WRB_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PFCH_LL_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_PEND_FIFO_RAM,
+		"Pend FIFO RAM double bit ECC error",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PEND_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_RC_RRQ_ODD_RAM,
+		"RC RRQ Odd RAM double bit ECC error.",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_RC_RRQ_ODD_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	},
+	{
+		EQDMA_DBE_ERR_ALL,
+		"All DBE errors",
+		EQDMA_RAM_DBE_MSK_A_ADDR,
+		EQDMA_RAM_DBE_STS_A_ADDR,
+		EQDMA_DBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&eqdma_hw_ram_dbe_err_process
+	}
+};
+
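+/*
+ * Per-block "ALL" aggregate codes from the table above, one per leaf error
+ * block. Enabling one of these programs the block's full error mask in a
+ * single operation, which is presumably how global error reporting is
+ * switched on for every leaf error at once.
+ */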
+static int32_t all_eqdma_hw_errs[EQDMA_TOTAL_LEAF_ERROR_AGGREGATORS] = {
+	EQDMA_DSC_ERR_ALL,
+	EQDMA_TRQ_ERR_ALL,
+	EQDMA_ST_C2H_ERR_ALL,
+	EQDMA_ST_FATAL_ERR_ALL,
+	EQDMA_ST_H2C_ERR_ALL,
+	EQDMA_SBE_1_ERR_ALL,
+	EQDMA_SBE_ERR_ALL,
+	EQDMA_DBE_1_ERR_ALL,
+	EQDMA_DBE_ERR_ALL
+};
+
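+/*
+ * Human-readable field tables used by the context dump helpers further
+ * below. The entry order in each table must match the fill order in the
+ * corresponding eqdma_fill_*_ctxt() helper, which assigns values
+ * positionally.
+ */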
+static struct qctx_entry eqdma_sw_ctxt_entries[] = {
+	{"PIDX", 0},
+	{"IRQ Arm", 0},
+	{"Function Id", 0},
+	{"Queue Enable", 0},
+	{"Fetch Credit Enable", 0},
+	{"Write back/Intr Check", 0},
+	{"Write back/Intr Interval", 0},
+	{"Address Translation", 0},
+	{"Fetch Max", 0},
+	{"Ring Size", 0},
+	{"Descriptor Size", 0},
+	{"Bypass Enable", 0},
+	{"MM Channel", 0},
+	{"Writeback Enable", 0},
+	{"Interrupt Enable", 0},
+	{"Port Id", 0},
+	{"Interrupt No Last", 0},
+	{"Error", 0},
+	{"Writeback Error Sent", 0},
+	{"IRQ Request", 0},
+	{"Marker Disable", 0},
+	{"Is Memory Mapped", 0},
+	{"Descriptor Ring Base Addr (Low)", 0},
+	{"Descriptor Ring Base Addr (High)", 0},
+	{"Interrupt Vector/Ring Index", 0},
+	{"Interrupt Aggregation", 0},
+	{"Disable Interrupt with VF", 0},
+	{"Pack descriptor output interface", 0},
+	{"Irq Bypass", 0},
+};
+
+static struct qctx_entry eqdma_hw_ctxt_entries[] = {
+	{"CIDX", 0},
+	{"Credits Consumed", 0},
+	{"Descriptors Pending", 0},
+	{"Queue Invalid No Desc Pending", 0},
+	{"Eviction Pending", 0},
+	{"Fetch Pending", 0},
+};
+
+static struct qctx_entry eqdma_credit_ctxt_entries[] = {
+	{"Credit", 0},
+};
+
+static struct qctx_entry eqdma_cmpt_ctxt_entries[] = {
+	{"Enable Status Desc Update", 0},
+	{"Enable Interrupt", 0},
+	{"Trigger Mode", 0},
+	{"Function Id", 0},
+	{"Counter Index", 0},
+	{"Timer Index", 0},
+	{"Interrupt State", 0},
+	{"Color", 0},
+	{"Ring Size", 0},
+	{"Base Addr High (L)[37:6]", 0},
+	{"Base Addr High (H)[63:38]", 0},
+	{"Descriptor Size", 0},
+	{"PIDX", 0},
+	{"CIDX", 0},
+	{"Valid", 0},
+	{"Error", 0},
+	{"Trigger Pending", 0},
+	{"Timer Running", 0},
+	{"Full Update", 0},
+	{"Over Flow Check Disable", 0},
+	{"Address Translation", 0},
+	{"Interrupt Vector/Ring Index", 0},
+	{"Interrupt Aggregation", 0},
+	{"Disable Interrupt with VF", 0},
+	{"C2H Direction", 0},
+	{"Base Addr Low[5:2]", 0},
+	{"Shared Completion Queue", 0},
+};
+
+static struct qctx_entry eqdma_c2h_pftch_ctxt_entries[] = {
+	{"Bypass", 0},
+	{"Buffer Size Index", 0},
+	{"Port Id", 0},
+	{"Variable Descriptor", 0},
+	{"Number of descriptors prefetched", 0},
+	{"Error", 0},
+	{"Prefetch Enable", 0},
+	{"In Prefetch", 0},
+	{"Software Credit", 0},
+	{"Valid", 0},
+};
+
+static struct qctx_entry eqdma_ind_intr_ctxt_entries[] = {
+	{"valid", 0},
+	{"vec", 0},
+	{"int_st", 0},
+	{"color", 0},
+	{"baddr_4k (Low)", 0},
+	{"baddr_4k (High)", 0},
+	{"page_size", 0},
+	{"pidx", 0},
+	{"at", 0},
+	{"Function Id", 0},
+};
+
+static int eqdma_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int eqdma_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int eqdma_indirect_reg_read(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data);
+static int eqdma_indirect_reg_write(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt);
+
+uint32_t eqdma_get_config_num_regs(void)
+{
+	return eqdma_config_num_regs_get();
+}
+
+struct xreg_info *eqdma_get_config_regs(void)
+{
+	return eqdma_config_regs_get();
+}
+
+uint32_t eqdma_reg_dump_buf_len(void)
+{
+	uint32_t length = (eqdma_config_num_regs_get() + 1)
+			* REG_DUMP_SIZE_PER_LINE;
+	return length;
+}
+
+int eqdma_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen)
+{
+	int len = 0;
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT) {
+		len += (((sizeof(eqdma_cmpt_ctxt_entries) /
+			sizeof(eqdma_cmpt_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	} else {
+		len += (((sizeof(eqdma_sw_ctxt_entries) /
+				sizeof(eqdma_sw_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(eqdma_hw_ctxt_entries) /
+			sizeof(eqdma_hw_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(eqdma_credit_ctxt_entries) /
+			sizeof(eqdma_credit_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			len += (((sizeof(eqdma_cmpt_ctxt_entries) /
+				sizeof(eqdma_cmpt_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+
+			len += (((sizeof(eqdma_c2h_pftch_ctxt_entries) /
+				sizeof(eqdma_c2h_pftch_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+		}
+	}
+
+	*buflen = len;
+	return 0;
+}
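+
+/*
+ * A caller is expected to size its dump buffer with the helper above before
+ * requesting a context dump. A minimal sketch (illustrative only; assumes
+ * the qdma_calloc() platform allocator from qdma_platform.h):
+ *
+ *	uint32_t buflen = 0;
+ *	char *buf;
+ *
+ *	eqdma_context_buf_len(1, QDMA_DEV_Q_TYPE_C2H, &buflen);
+ *	buf = (char *)qdma_calloc(1, buflen);
+ */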
+
+static uint32_t eqdma_intr_context_buf_len(void)
+{
+	uint32_t len = 0;
+
+	len += (((sizeof(eqdma_ind_intr_ctxt_entries) /
+			sizeof(eqdma_ind_intr_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	return len;
+}
+
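+/*
+ * Indirect context programming model: queue contexts are not memory mapped
+ * per queue. Each helper below writes a command word (queue id, opcode and
+ * context selector) into EQDMA_IND_CTXT_CMD_ADDR and polls the BUSY bit
+ * until the hardware completes the operation; reads and writes additionally
+ * move the context image through the EQDMA_IND_CTXT_DATA registers.
+ */
+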
+/*
+ * eqdma_indirect_reg_invalidate() - helper function to invalidate indirect
+ *					context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the poll timeout, QDMA_SUCCESS otherwise
+ */
+static int eqdma_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_INV;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * eqdma_indirect_reg_clear() - helper function to clear indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the poll timeout, QDMA_SUCCESS otherwise
+ */
+static int eqdma_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_CLR;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * eqdma_indirect_reg_read() - helper function to read indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the poll timeout, QDMA_SUCCESS otherwise
+ */
+static int eqdma_indirect_reg_read(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data)
+{
+	uint32_t index = 0, reg_addr = EQDMA_IND_CTXT_DATA_ADDR;
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_RD;
+	cmd.bits.sel = sel;
+
+	qdma_reg_write(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	for (index = 0; index < cnt; index++, reg_addr += sizeof(uint32_t))
+		data[index] = qdma_reg_read(dev_hndl, reg_addr);
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * eqdma_indirect_reg_write() - helper function to write indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the poll timeout, QDMA_SUCCESS otherwise
+ */
+static int eqdma_indirect_reg_write(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt)
+{
+	uint32_t index, reg_addr;
+	struct qdma_indirect_ctxt_regs regs;
+	uint32_t *wr_data = (uint32_t *)&regs;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* write the context data */
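+	/* An all-ones mask selects every bit of the context image for
+	 * update; clearing mask bits would, presumably, preserve the
+	 * corresponding context fields across the write.
+	 */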
+	for (index = 0; index < QDMA_IND_CTXT_DATA_NUM_REGS; index++) {
+		if (index < cnt)
+			regs.qdma_ind_ctxt_data[index] = data[index];
+		else
+			regs.qdma_ind_ctxt_data[index] = 0;
+		regs.qdma_ind_ctxt_mask[index] = 0xFFFFFFFF;
+	}
+
+	regs.cmd.word = 0;
+	regs.cmd.bits.qid = hw_qid;
+	regs.cmd.bits.op = QDMA_CTXT_CMD_WR;
+	regs.cmd.bits.sel = sel;
+	reg_addr = EQDMA_IND_CTXT_DATA_ADDR;
+
+	for (index = 0; index < ((2 * QDMA_IND_CTXT_DATA_NUM_REGS) + 1);
+		 index++, reg_addr += sizeof(uint32_t))
+		qdma_reg_write(dev_hndl, reg_addr, wr_data[index]);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, EQDMA_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
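+
+/*
+ * Illustrative use of the read helper above (a sketch, not driver code):
+ * fetch the raw C2H software context of a queue, assuming the context image
+ * spans QDMA_SW_CONTEXT_NUM_WORDS data words:
+ *
+ *	uint32_t raw[QDMA_SW_CONTEXT_NUM_WORDS];
+ *
+ *	eqdma_indirect_reg_read(dev_hndl, QDMA_CTXT_SEL_SW_C2H,
+ *			qid, QDMA_SW_CONTEXT_NUM_WORDS, raw);
+ */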
+
+/*
+ * eqdma_fill_sw_ctxt() - Helper function to fill sw context into structure
+ *
+ */
+static void eqdma_fill_sw_ctxt(struct qdma_descq_sw_ctxt *sw_ctxt)
+{
+	int i = 0;
+
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->pidx;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->irq_arm;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->fnc_id;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->qen;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->frcd_en;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->wbi_chk;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->wbi_intvl_en;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->at;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->fetch_max;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->rngsz_idx;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->desc_sz;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->bypass;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->mm_chn;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->wbk_en;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->irq_en;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->port_id;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->irq_no_last;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->err;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->err_wb_sent;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->irq_req;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->mrkr_dis;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->is_mm;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->ring_bs_addr & 0xFFFFFFFF;
+	eqdma_sw_ctxt_entries[i++].value =
+		(sw_ctxt->ring_bs_addr >> 32) & 0xFFFFFFFF;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->vec;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->intr_aggr;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->dis_intr_on_vf;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->pack_byp_out;
+	eqdma_sw_ctxt_entries[i++].value = sw_ctxt->irq_byp;
+}
+
+/*
+ * eqdma_fill_cmpt_ctxt() - Helper function to fill completion context
+ *                         into structure
+ *
+ */
+static void eqdma_fill_cmpt_ctxt(struct qdma_descq_cmpt_ctxt *cmpt_ctxt)
+{
+	int i = 0;
+
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->en_stat_desc;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->en_int;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->trig_mode;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->fnc_id;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->counter_idx;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->timer_idx;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->in_st;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->color;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->ringsz_idx;
+	eqdma_cmpt_ctxt_entries[i++].value =
+		(uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_HIGH_L_MASK,
+				    cmpt_ctxt->bs_addr);
+	eqdma_cmpt_ctxt_entries[i++].value =
+		(uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_HIGH_H_MASK,
+				    cmpt_ctxt->bs_addr);
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->desc_sz;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->pidx;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->cidx;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->valid;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->err;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->user_trig_pend;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->timer_running;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->full_upd;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->ovf_chk_dis;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->at;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->vec;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->int_aggr;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->dis_intr_on_vf;
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->dir_c2h;
+	eqdma_cmpt_ctxt_entries[i++].value =
+		(uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_LOW_MASK,
+				    cmpt_ctxt->bs_addr);
+	eqdma_cmpt_ctxt_entries[i++].value = cmpt_ctxt->sh_cmpt;
+}
+
+/*
+ * eqdma_fill_hw_ctxt() - Helper function to fill HW context into structure
+ *
+ */
+static void eqdma_fill_hw_ctxt(struct qdma_descq_hw_ctxt *hw_ctxt)
+{
+	int i = 0;
+
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->cidx;
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->crd_use;
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->dsc_pend;
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->idl_stp_b;
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->evt_pnd;
+	eqdma_hw_ctxt_entries[i++].value = hw_ctxt->fetch_pnd;
+}
+
+/*
+ * eqdma_fill_credit_ctxt() - Helper function to fill Credit context
+ *                           into structure
+ *
+ */
+static void eqdma_fill_credit_ctxt(struct qdma_descq_credit_ctxt *cr_ctxt)
+{
+	eqdma_credit_ctxt_entries[0].value = cr_ctxt->credit;
+}
+
+/*
+ * eqdma_fill_pfetch_ctxt() - Helper function to fill Prefetch context
+ *                           into structure
+ *
+ */
+static void eqdma_fill_pfetch_ctxt(struct qdma_descq_prefetch_ctxt
+		*pfetch_ctxt)
+{
+	int i = 0;
+
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->bypass;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->bufsz_idx;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->port_id;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->var_desc;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->num_pftch;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->err;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->pfch_en;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->pfch;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->sw_crdt;
+	eqdma_c2h_pftch_ctxt_entries[i++].value = pfetch_ctxt->valid;
+}
+
+/*
+ * eqdma_fill_intr_ctxt() - Helper function to fill interrupt context
+ *                           into structure
+ *
+ */
+static void eqdma_fill_intr_ctxt(struct qdma_indirect_intr_ctxt *intr_ctxt)
+{
+	int i = 0;
+
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->valid;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->vec;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->int_st;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->color;
+	eqdma_ind_intr_ctxt_entries[i++].value =
+			intr_ctxt->baddr_4k & 0xFFFFFFFF;
+	eqdma_ind_intr_ctxt_entries[i++].value =
+			(intr_ctxt->baddr_4k >> 32) & 0xFFFFFFFF;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->page_size;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->pidx;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->at;
+	eqdma_ind_intr_ctxt_entries[i++].value = intr_ctxt->func_id;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_set_default_global_csr() - function to set the global CSR registers
+ * to default values. The values can be modified later using the set/get CSR
+ * functions
+ *
+ * @dev_hndl:	device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_set_default_global_csr(void *dev_hndl)
+{
+	/* Default values */
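+	/* Each table below carries 16 entries; queue contexts reference them
+	 * by index (rngsz_idx, timer_idx, counter_idx, bufsz_idx) rather
+	 * than by value, so these arrays act as the global CSR profile
+	 * shared by all queues.
+	 */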
+	uint32_t cfg_val = 0, reg_val = 0;
+	uint32_t rng_sz[QDMA_NUM_RING_SIZES] = {2049, 65, 129, 193, 257, 385,
+		513, 769, 1025, 1537, 3073, 4097, 6145, 8193, 12289, 16385};
+	uint32_t tmr_cnt[QDMA_NUM_C2H_TIMERS] = {1, 2, 4, 5, 8, 10, 15, 20, 25,
+		30, 50, 75, 100, 125, 150, 200};
+	uint32_t cnt_th[QDMA_NUM_C2H_COUNTERS] = {2, 4, 8, 16, 24, 32, 48, 64,
+		80, 96, 112, 128, 144, 160, 176, 192};
+	uint32_t buf_sz[QDMA_NUM_C2H_BUFFER_SIZES] = {4096, 256, 512, 1024,
+		2048, 3968, 4096, 4096, 4096, 4096, 4096, 4096, 8192, 9018,
+		16384, 65535};
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	/* Configuring CSR registers */
+	/* Global ring sizes */
+	qdma_write_csr_values(dev_hndl, EQDMA_GLBL_RNG_SZ_1_ADDR, 0,
+			QDMA_NUM_RING_SIZES, rng_sz);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		/* Counter thresholds */
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_CNT_TH_ADDR, 0,
+				QDMA_NUM_C2H_COUNTERS, cnt_th);
+
+		/* Timer Counters */
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_TIMER_CNT_ADDR, 0,
+				QDMA_NUM_C2H_TIMERS, tmr_cnt);
+
+		/* Writeback Interval */
+		reg_val =
+			FIELD_SET(GLBL_DSC_CFG_MAXFETCH_MASK,
+					DEFAULT_MAX_DSC_FETCH) |
+			FIELD_SET(GLBL_DSC_CFG_WB_ACC_INT_MASK,
+					DEFAULT_WRB_INT);
+		qdma_reg_write(dev_hndl, EQDMA_GLBL_DSC_CFG_ADDR, reg_val);
+	}
+
+	if (dev_cap.st_en) {
+		/* Buffer Sizes */
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_BUF_SZ_ADDR, 0,
+				QDMA_NUM_C2H_BUFFER_SIZES, buf_sz);
+
+		/* Prefetch Configuration */
+
+		cfg_val = qdma_reg_read(dev_hndl,
+				EQDMA_C2H_PFCH_CACHE_DEPTH_ADDR);
+
+		reg_val =
+			FIELD_SET(C2H_PFCH_CFG_FL_TH_MASK,
+					DEFAULT_PFCH_STOP_THRESH);
+		qdma_reg_write(dev_hndl, EQDMA_C2H_PFCH_CFG_ADDR, reg_val);
+
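+		/* The prefetch queue-count limit below is half the cache
+		 * depth read back above, with the eviction threshold set two
+		 * entries under that limit.
+		 */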
+		reg_val = FIELD_SET(C2H_PFCH_CFG_1_QCNT_MASK, (cfg_val >> 1)) |
+				  FIELD_SET(C2H_PFCH_CFG_1_EVT_QCNT_TH_MASK,
+						((cfg_val >> 1) - 2));
+		qdma_reg_write(dev_hndl, EQDMA_C2H_PFCH_CFG_1_ADDR, reg_val);
+
+		reg_val = FIELD_SET(C2H_PFCH_CFG_2_NUM_MASK,
+					DEFAULT_PFCH_NUM_ENTRIES_PER_Q);
+
+		qdma_reg_write(dev_hndl, EQDMA_C2H_PFCH_CFG_2_ADDR, reg_val);
+
+		/* C2H interrupt timer tick */
+		qdma_reg_write(dev_hndl, EQDMA_C2H_INT_TIMER_TICK_ADDR,
+				DEFAULT_C2H_INTR_TIMER_TICK);
+
+		/* C2h Completion Coalesce Configuration */
+		cfg_val = qdma_reg_read(dev_hndl,
+				EQDMA_C2H_WRB_COAL_BUF_DEPTH_ADDR);
+		reg_val =
+			FIELD_SET(C2H_WRB_COAL_CFG_TICK_CNT_MASK,
+					DEFAULT_CMPT_COAL_TIMER_CNT) |
+			FIELD_SET(C2H_WRB_COAL_CFG_TICK_VAL_MASK,
+					DEFAULT_CMPT_COAL_TIMER_TICK) |
+			FIELD_SET(C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK, cfg_val);
+		qdma_reg_write(dev_hndl, EQDMA_C2H_WRB_COAL_CFG_ADDR, reg_val);
+
+		/* H2C throttle Configuration */
+
+		reg_val =
+			FIELD_SET(H2C_REQ_THROT_PCIE_DATA_THRESH_MASK,
+					EQDMA_H2C_THROT_DATA_THRESH) |
+			FIELD_SET(H2C_REQ_THROT_PCIE_EN_DATA_MASK,
+					EQDMA_THROT_EN_DATA) |
+			FIELD_SET(H2C_REQ_THROT_PCIE_MASK,
+					EQDMA_H2C_THROT_REQ_THRESH) |
+			FIELD_SET(H2C_REQ_THROT_PCIE_EN_REQ_MASK,
+					EQDMA_THROT_EN_REQ);
+		qdma_reg_write(dev_hndl, EQDMA_H2C_REQ_THROT_PCIE_ADDR,
+			reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * dump_eqdma_context() - Helper function to dump queue context into string
+ *
+ * return len - length of the string copied into buffer
+ */
+static int dump_eqdma_context(struct qdma_descq_context *queue_context,
+		uint8_t st,	enum qdma_dev_q_type q_type,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	if (queue_context == NULL) {
+		qdma_log_error("%s: queue_context is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT) {
+		eqdma_fill_cmpt_ctxt(&queue_context->cmpt_ctxt);
+	} else if (q_type == QDMA_DEV_Q_TYPE_H2C) {
+		eqdma_fill_sw_ctxt(&queue_context->sw_ctxt);
+		eqdma_fill_hw_ctxt(&queue_context->hw_ctxt);
+		eqdma_fill_credit_ctxt(&queue_context->cr_ctxt);
+	} else if (q_type == QDMA_DEV_Q_TYPE_C2H) {
+		eqdma_fill_sw_ctxt(&queue_context->sw_ctxt);
+		eqdma_fill_hw_ctxt(&queue_context->hw_ctxt);
+		eqdma_fill_credit_ctxt(&queue_context->cr_ctxt);
+		if (st) {
+			eqdma_fill_pfetch_ctxt(&queue_context->pfetch_ctxt);
+			eqdma_fill_cmpt_ctxt(&queue_context->cmpt_ctxt);
+		}
+	}
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+			rv = QDMA_SNPRINTF_S(banner + i,
+				(DEBGFS_LINE_SZ - i),
+				sizeof("-"), "-");
+			if (rv < 0 || rv > (int)sizeof("-")) {
+				qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+		}
+
+		/* SW context dump */
+		n = sizeof(eqdma_sw_ctxt_entries) /
+				sizeof((eqdma_sw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+			    ((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+						     DEBGFS_LINE_SZ, "\n%s",
+						     banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "SW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				eqdma_sw_ctxt_entries[i].name,
+				eqdma_sw_ctxt_entries[i].value,
+				eqdma_sw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* HW context dump */
+		n = sizeof(eqdma_hw_ctxt_entries) /
+				sizeof((eqdma_hw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+			    ((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "HW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				eqdma_hw_ctxt_entries[i].name,
+				eqdma_hw_ctxt_entries[i].value,
+				eqdma_hw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* Credit context dump */
+		n = sizeof(eqdma_credit_ctxt_entries) /
+			sizeof((eqdma_credit_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Credit Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				eqdma_credit_ctxt_entries[i].name,
+				eqdma_credit_ctxt_entries[i].value,
+				eqdma_credit_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT ||
+			(st && q_type == QDMA_DEV_Q_TYPE_C2H)) {
+		/* Completion context dump */
+		n = sizeof(eqdma_cmpt_ctxt_entries) /
+				sizeof((eqdma_cmpt_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+			    ((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Completion Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				eqdma_cmpt_ctxt_entries[i].name,
+				eqdma_cmpt_ctxt_entries[i].value,
+				eqdma_cmpt_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+		/* Prefetch context dump */
+		n = sizeof(eqdma_c2h_pftch_ctxt_entries) /
+			sizeof(eqdma_c2h_pftch_ctxt_entries[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Prefetch Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				eqdma_c2h_pftch_ctxt_entries[i].name,
+				eqdma_c2h_pftch_ctxt_entries[i].value,
+				eqdma_c2h_pftch_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
+
+/*
+ * dump_eqdma_intr_context() - Helper function to dump interrupt context into
+ * string
+ *
+ * Return:	length of the string copied into buffer
+ */
+static int dump_eqdma_intr_context(struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	eqdma_fill_intr_ctxt(intr_ctx);
+
+	for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+		rv = QDMA_SNPRINTF_S(banner + i,
+			(DEBGFS_LINE_SZ - i),
+			sizeof("-"), "-");
+		if (rv < 0 || rv > (int)sizeof("-")) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+	}
+
+	/* Interrupt context dump */
+	n = sizeof(eqdma_ind_intr_ctxt_entries) /
+			sizeof((eqdma_ind_intr_ctxt_entries)[0]);
+	for (i = 0; i < n; i++) {
+		if (len >= buf_sz || ((len + DEBGFS_LINE_SZ) >= buf_sz))
+			goto INSUF_BUF_EXIT;
+
+		if (i == 0) {
+			if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+				goto INSUF_BUF_EXIT;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%50s %d",
+				"Interrupt Context for ring#", ring_index);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s\n", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len), DEBGFS_LINE_SZ,
+			"%-47s %#-10x %u\n",
+			eqdma_ind_intr_ctxt_entries[i].name,
+			eqdma_ind_intr_ctxt_entries[i].value,
+			eqdma_ind_intr_ctxt_entries[i].value);
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+		len += rv;
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_get_version() - Function to get the eqdma version
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @version_info:	pointer to hold the version info
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_get_version(void *dev_hndl, uint8_t is_vf,
+		struct qdma_hw_version_info *version_info)
+{
+	uint32_t reg_val = 0;
+	uint32_t reg_addr = (is_vf) ? EQDMA_OFFSET_VF_VERSION :
+			EQDMA_GLBL2_MISC_CAP_ADDR;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, reg_addr);
+
+	qdma_fetch_version_details(is_vf, reg_val, version_info);
+
+	return QDMA_SUCCESS;
+}
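+
+/*
+ * Illustrative usage (not part of this patch): reading the HW version on
+ * a PF. 'dev_hndl' is a placeholder for the handle set up during device
+ * init.
+ *
+ *	struct qdma_hw_version_info ver;
+ *
+ *	if (eqdma_get_version(dev_hndl, 0, &ver) == QDMA_SUCCESS) {
+ *		// ver now holds the decoded version details read from
+ *		// EQDMA_GLBL2_MISC_CAP_ADDR (EQDMA_OFFSET_VF_VERSION
+ *		// for a VF).
+ *	}
+ */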
+
+/*****************************************************************************/
+/**
+ * eqdma_sw_context_write() - create sw context and program it
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the SW context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_sw_context_write(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 const struct qdma_descq_sw_ctxt *ctxt)
+{
+	uint32_t sw_ctxt[EQDMA_SW_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	uint32_t pasid_l, pasid_h;
+	uint32_t virtio_desc_base_l, virtio_desc_base_m, virtio_desc_base_h;
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle=%p sw_ctxt=%p NULL, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	pasid_l =
+		FIELD_GET(EQDMA_SW_CTXT_PASID_GET_L_MASK, ctxt->pasid);
+	pasid_h =
+		FIELD_GET(EQDMA_SW_CTXT_PASID_GET_H_MASK, ctxt->pasid);
+
+	virtio_desc_base_l = (uint32_t)FIELD_GET
+		(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_L_MASK,
+		ctxt->virtio_dsc_base);
+	virtio_desc_base_m = (uint32_t)FIELD_GET
+		(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_M_MASK,
+		ctxt->virtio_dsc_base);
+	virtio_desc_base_h = (uint32_t)FIELD_GET
+		(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_H_MASK,
+		ctxt->virtio_dsc_base);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W0_PIDX_MASK, ctxt->pidx) |
+		FIELD_SET(SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK, ctxt->irq_arm) |
+		FIELD_SET(SW_IND_CTXT_DATA_W0_FNC_MASK, ctxt->fnc_id);
+
+	qdma_log_debug("%s: pidx=%x, irq_arm=%x, fnc_id=%x\n",
+			 __func__, ctxt->pidx, ctxt->irq_arm, ctxt->fnc_id);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W1_QEN_MASK, ctxt->qen) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_FCRD_EN_MASK, ctxt->frcd_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBI_CHK_MASK, ctxt->wbi_chk) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK,
+				  ctxt->wbi_intvl_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_AT_MASK, ctxt->at) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_FETCH_MAX_MASK, ctxt->fetch_max) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_RNG_SZ_MASK, ctxt->rngsz_idx) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_DSC_SZ_MASK, ctxt->desc_sz) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_BYPASS_MASK, ctxt->bypass) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_MM_CHN_MASK, ctxt->mm_chn) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBK_EN_MASK, ctxt->wbk_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_EN_MASK, ctxt->irq_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK,
+			ctxt->irq_no_last) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_ERR_MASK, ctxt->err) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK,
+			ctxt->err_wb_sent) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK, ctxt->irq_req) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK, ctxt->mrkr_dis) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IS_MM_MASK, ctxt->is_mm);
+
+	qdma_log_debug("%s: qen=%x, frcd_en=%x, wbi_chk=%x, wbi_intvl_en=%x\n",
+			 __func__, ctxt->qen, ctxt->frcd_en, ctxt->wbi_chk,
+			ctxt->wbi_intvl_en);
+
+	qdma_log_debug("%s: at=%x, fetch_max=%x, rngsz_idx=%x, desc_sz=%x\n",
+			__func__, ctxt->at, ctxt->fetch_max, ctxt->rngsz_idx,
+			ctxt->desc_sz);
+
+	qdma_log_debug("%s: bypass=%x, mm_chn=%x, wbk_en=%x, irq_en=%x\n",
+			__func__, ctxt->bypass, ctxt->mm_chn, ctxt->wbk_en,
+			ctxt->irq_en);
+
+	qdma_log_debug("%s: port_id=%x, irq_no_last=%x,err=%x",
+			__func__, ctxt->port_id, ctxt->irq_no_last, ctxt->err);
+	qdma_log_debug(", err_wb_sent=%x\n", ctxt->err_wb_sent);
+
+	qdma_log_debug("%s: irq_req=%x, mrkr_dis=%x, is_mm=%x\n",
+			__func__, ctxt->irq_req, ctxt->mrkr_dis, ctxt->is_mm);
+
+	sw_ctxt[num_words_count++] = ctxt->ring_bs_addr & 0xffffffff;
+	sw_ctxt[num_words_count++] = (ctxt->ring_bs_addr >> 32) & 0xffffffff;
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W4_VEC_MASK, ctxt->vec) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_INT_AGGR_MASK, ctxt->intr_aggr) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK,
+				ctxt->dis_intr_on_vf) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_VIRTIO_EN_MASK,
+				ctxt->virtio_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_PACK_BYP_OUT_MASK,
+				ctxt->pack_byp_out) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_IRQ_BYP_MASK, ctxt->irq_byp) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_HOST_ID_MASK, ctxt->host_id) |
+		FIELD_SET(SW_IND_CTXT_DATA_W4_PASID_L_MASK, pasid_l);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W5_PASID_H_MASK, pasid_h) |
+		FIELD_SET(SW_IND_CTXT_DATA_W5_PASID_EN_MASK, ctxt->pasid_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W5_VIRTIO_DSC_BASE_L_MASK,
+				virtio_desc_base_l);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W6_VIRTIO_DSC_BASE_M_MASK,
+				virtio_desc_base_m);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W7_VIRTIO_DSC_BASE_H_MASK,
+				virtio_desc_base_h);
+
+	qdma_log_debug("%s: vec=%x, intr_aggr=%x\n",
+			__func__, ctxt->vec, ctxt->intr_aggr);
+
+	return eqdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			sw_ctxt, num_words_count);
+}
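+
+/*
+ * Note: the context words above are packed with FIELD_SET(), which
+ * shifts a value into the bit positions of a register mask; FIELD_GET()
+ * performs the inverse. A minimal sketch with a hypothetical mask:
+ *
+ *	#define EXAMPLE_MASK	0x00000F00	// bits [11:8]
+ *
+ *	uint32_t word = FIELD_SET(EXAMPLE_MASK, 0x5);	// word == 0x500
+ *	uint32_t val = FIELD_GET(EXAMPLE_MASK, word);	// val == 0x5
+ */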
+
+/*****************************************************************************/
+/**
+ * eqdma_sw_context_read() - read sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_sw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_sw_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t sw_ctxt[EQDMA_SW_CONTEXT_NUM_WORDS] = {0};
+	uint32_t pasid_l, pasid_h;
+	uint32_t virtio_desc_base_l, virtio_desc_base_m, virtio_desc_base_h;
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle=%p sw_ctxt=%p NULL, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			EQDMA_SW_CONTEXT_NUM_WORDS, sw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->pidx = FIELD_GET(SW_IND_CTXT_DATA_W0_PIDX_MASK, sw_ctxt[0]);
+	ctxt->irq_arm =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK,
+				sw_ctxt[0]));
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W0_FNC_MASK,
+				sw_ctxt[0]));
+
+	qdma_log_debug("%s: pidx=%x, irq_arm=%x, fnc_id=%x",
+			 __func__, ctxt->pidx, ctxt->irq_arm, ctxt->fnc_id);
+
+	ctxt->qen = FIELD_GET(SW_IND_CTXT_DATA_W1_QEN_MASK, sw_ctxt[1]);
+	ctxt->frcd_en = FIELD_GET(SW_IND_CTXT_DATA_W1_FCRD_EN_MASK, sw_ctxt[1]);
+	ctxt->wbi_chk = FIELD_GET(SW_IND_CTXT_DATA_W1_WBI_CHK_MASK, sw_ctxt[1]);
+	ctxt->wbi_intvl_en =
+		FIELD_GET(SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK, sw_ctxt[1]);
+	ctxt->at = FIELD_GET(SW_IND_CTXT_DATA_W1_AT_MASK, sw_ctxt[1]);
+	ctxt->fetch_max =
+		(uint8_t)FIELD_GET(SW_IND_CTXT_DATA_W1_FETCH_MAX_MASK,
+				sw_ctxt[1]);
+	ctxt->rngsz_idx =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_RNG_SZ_MASK,
+				sw_ctxt[1]));
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_DSC_SZ_MASK,
+				sw_ctxt[1]));
+	ctxt->bypass =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_BYPASS_MASK,
+				sw_ctxt[1]));
+	ctxt->mm_chn =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_MM_CHN_MASK,
+				sw_ctxt[1]));
+	ctxt->wbk_en =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_WBK_EN_MASK,
+				sw_ctxt[1]));
+	ctxt->irq_en =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_EN_MASK,
+				sw_ctxt[1]));
+	ctxt->port_id =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_PORT_ID_MASK,
+				sw_ctxt[1]));
+	ctxt->irq_no_last =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK,
+			sw_ctxt[1]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_ERR_MASK, sw_ctxt[1]));
+	ctxt->err_wb_sent =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK,
+			sw_ctxt[1]));
+	ctxt->irq_req =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK,
+				sw_ctxt[1]));
+	ctxt->mrkr_dis =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK,
+				sw_ctxt[1]));
+	ctxt->is_mm =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IS_MM_MASK,
+				sw_ctxt[1]));
+
+	qdma_log_debug("%s: qen=%x, frcd_en=%x, wbi_chk=%x, wbi_intvl_en=%x\n",
+			 __func__, ctxt->qen, ctxt->frcd_en, ctxt->wbi_chk,
+			ctxt->wbi_intvl_en);
+	qdma_log_debug("%s: at=%x, fetch_max=%x, rngsz_idx=%x, desc_sz=%x\n",
+			__func__, ctxt->at, ctxt->fetch_max, ctxt->rngsz_idx,
+			ctxt->desc_sz);
+	qdma_log_debug("%s: bypass=%x, mm_chn=%x, wbk_en=%x, irq_en=%x\n",
+			__func__, ctxt->bypass, ctxt->mm_chn, ctxt->wbk_en,
+			ctxt->irq_en);
+	qdma_log_debug("%s: port_id=%x, irq_no_last=%x,",
+			__func__, ctxt->port_id, ctxt->irq_no_last);
+	qdma_log_debug(" err=%x, err_wb_sent=%x\n",
+			ctxt->err, ctxt->err_wb_sent);
+	qdma_log_debug("%s: irq_req=%x, mrkr_dis=%x, is_mm=%x\n",
+			__func__, ctxt->irq_req, ctxt->mrkr_dis, ctxt->is_mm);
+
+	ctxt->ring_bs_addr = ((uint64_t)sw_ctxt[3] << 32) | (sw_ctxt[2]);
+
+	ctxt->vec = FIELD_GET(SW_IND_CTXT_DATA_W4_VEC_MASK, sw_ctxt[4]);
+	ctxt->intr_aggr = (uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_INT_AGGR_MASK,
+			sw_ctxt[4]));
+	ctxt->dis_intr_on_vf =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK,
+				sw_ctxt[4]));
+	ctxt->virtio_en =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_VIRTIO_EN_MASK,
+				sw_ctxt[4]));
+	ctxt->pack_byp_out =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_PACK_BYP_OUT_MASK,
+				sw_ctxt[4]));
+	ctxt->irq_byp =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_IRQ_BYP_MASK,
+				sw_ctxt[4]));
+	ctxt->host_id =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W4_HOST_ID_MASK,
+				sw_ctxt[4]));
+	pasid_l = FIELD_GET(SW_IND_CTXT_DATA_W4_PASID_L_MASK, sw_ctxt[4]);
+
+	pasid_h = FIELD_GET(SW_IND_CTXT_DATA_W5_PASID_H_MASK, sw_ctxt[5]);
+	ctxt->pasid_en = (uint8_t)FIELD_GET(SW_IND_CTXT_DATA_W5_PASID_EN_MASK,
+			sw_ctxt[5]);
+	virtio_desc_base_l =
+		FIELD_GET(SW_IND_CTXT_DATA_W5_VIRTIO_DSC_BASE_L_MASK,
+				sw_ctxt[5]);
+	virtio_desc_base_m =
+		FIELD_GET(SW_IND_CTXT_DATA_W6_VIRTIO_DSC_BASE_M_MASK,
+				sw_ctxt[6]);
+
+	virtio_desc_base_h =
+		FIELD_GET(SW_IND_CTXT_DATA_W7_VIRTIO_DSC_BASE_H_MASK,
+				sw_ctxt[7]);
+
+	ctxt->pasid =
+			FIELD_SET(EQDMA_SW_CTXT_PASID_GET_L_MASK, pasid_l) |
+			FIELD_SET(EQDMA_SW_CTXT_PASID_GET_H_MASK, pasid_h);
+
+	ctxt->virtio_dsc_base =
+			FIELD_SET(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_L_MASK,
+					(uint64_t)virtio_desc_base_l) |
+			FIELD_SET(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_M_MASK,
+					(uint64_t)virtio_desc_base_m) |
+			FIELD_SET(EQDMA_SW_CTXT_VIRTIO_DSC_BASE_GET_H_MASK,
+					(uint64_t)virtio_desc_base_h);
+
+	qdma_log_debug("%s: vec=%x, intr_aggr=%x\n",
+			__func__, ctxt->vec, ctxt->intr_aggr);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_sw_context_clear() - clear sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_sw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_sw_context_invalidate() - invalidate sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_sw_context_invalidate(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_sw_ctx_conf() - configure SW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_sw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_sw_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = eqdma_sw_context_write(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_sw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_sw_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
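+
+/*
+ * Illustrative call sequence (field values are hypothetical and error
+ * handling is elided): program a C2H SW context, then read it back.
+ *
+ *	struct qdma_descq_sw_ctxt ctxt = {0};
+ *
+ *	ctxt.qen = 1;
+ *	ctxt.wbk_en = 1;
+ *	ctxt.rngsz_idx = ring_size_idx;		// global ring size index
+ *	ctxt.ring_bs_addr = ring_iova;		// descriptor ring base
+ *	eqdma_sw_ctx_conf(dev_hndl, 1, qid, &ctxt, QDMA_HW_ACCESS_WRITE);
+ *	eqdma_sw_ctx_conf(dev_hndl, 1, qid, &ctxt, QDMA_HW_ACCESS_READ);
+ */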
+
+/*****************************************************************************/
+/**
+ * eqdma_pfetch_context_write() - create prefetch context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the prefetch context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_pfetch_context_write(void *dev_hndl, uint16_t hw_qid,
+		const struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	uint32_t pfetch_ctxt[EQDMA_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+	uint16_t num_words_count = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or pfetch ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	sw_crdt_l =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK, ctxt->sw_crdt);
+	sw_crdt_h =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, ctxt->sw_crdt);
+
+	qdma_log_debug("%s: sw_crdt_l=%u, sw_crdt_h=%u, hw_qid=%u\n",
+			 __func__, sw_crdt_l, sw_crdt_h, hw_qid);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_BYPASS_MASK, ctxt->bypass) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_BUF_SZ_IDX_MASK,
+				ctxt->bufsz_idx) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_NUM_PFCH_MASK,
+				ctxt->num_pftch) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_VAR_DESC_MASK,
+				ctxt->var_desc) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_ERR_MASK, ctxt->err) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK, ctxt->pfch_en) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PFCH_MASK, ctxt->pfch) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK, sw_crdt_l);
+
+	qdma_log_debug("%s: bypass=%x, bufsz_idx=%x, port_id=%x\n",
+			__func__, ctxt->bypass, ctxt->bufsz_idx, ctxt->port_id);
+	qdma_log_debug("%s: err=%x, pfch_en=%x, pfch=%x, ctxt->valid=%x\n",
+			__func__, ctxt->err, ctxt->pfch_en, ctxt->pfch,
+			ctxt->valid);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK, sw_crdt_h) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W1_VALID_MASK, ctxt->valid);
+
+	return eqdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			pfetch_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_pfetch_context_read() - read prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_pfetch_context_read(void *dev_hndl, uint16_t hw_qid,
+		struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t pfetch_ctxt[EQDMA_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or pfetch ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			EQDMA_PFETCH_CONTEXT_NUM_WORDS, pfetch_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->bypass =
+		FIELD_GET(PREFETCH_CTXT_DATA_W0_BYPASS_MASK, pfetch_ctxt[0]);
+	ctxt->bufsz_idx =
+		FIELD_GET(PREFETCH_CTXT_DATA_W0_BUF_SZ_IDX_MASK,
+			pfetch_ctxt[0]);
+	ctxt->num_pftch = (uint16_t)FIELD_GET
+			(PREFETCH_CTXT_DATA_W0_NUM_PFCH_MASK, pfetch_ctxt[0]);
+	ctxt->port_id =
+		FIELD_GET(PREFETCH_CTXT_DATA_W0_PORT_ID_MASK, pfetch_ctxt[0]);
+	ctxt->var_desc = (uint8_t)
+		FIELD_GET(PREFETCH_CTXT_DATA_W0_VAR_DESC_MASK,
+				pfetch_ctxt[0]);
+	ctxt->err =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_ERR_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch_en =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_PFCH_MASK,
+				pfetch_ctxt[0]));
+	sw_crdt_l =
+		FIELD_GET(PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK, pfetch_ctxt[0]);
+
+	sw_crdt_h =
+		FIELD_GET(PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK, pfetch_ctxt[1]);
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W1_VALID_MASK,
+			pfetch_ctxt[1]));
+
+	ctxt->sw_crdt =
+		FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK, sw_crdt_l) |
+		FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, sw_crdt_h);
+
+	qdma_log_debug("%s: sw_crdt_l=%u, sw_crdt_h=%u, hw_qid=%u\n",
+			 __func__, sw_crdt_l, sw_crdt_h, hw_qid);
+	qdma_log_debug("%s: bypass=%x, bufsz_idx=%x, port_id=%x\n",
+			__func__, ctxt->bypass, ctxt->bufsz_idx, ctxt->port_id);
+	qdma_log_debug("%s: err=%x, pfch_en=%x, pfch=%x, ctxt->valid=%x\n",
+			__func__, ctxt->err, ctxt->pfch_en, ctxt->pfch,
+			ctxt->valid);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_pfetch_context_clear() - clear prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_pfetch_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_pfetch_context_invalidate() - invalidate prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_pfetch_context_invalidate(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_pfetch_ctx_conf() - configure prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+				struct qdma_descq_prefetch_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_pfetch_context_read(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = eqdma_pfetch_context_write(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_pfetch_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_pfetch_context_invalidate(dev_hndl, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_cmpt_context_write() - create completion context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the cmpt context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_cmpt_context_write(void *dev_hndl, uint16_t hw_qid,
+			   const struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	uint32_t cmpt_ctxt[EQDMA_CMPT_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	uint32_t baddr4_high_l, baddr4_high_h,
+			baddr4_low, pidx_l, pidx_h, pasid_l, pasid_h;
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or cmpt ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (ctxt->trig_mode > QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR) {
+		qdma_log_error("%s: trig_mode(%d) > (%d) is invalid, err:%d\n",
+					__func__,
+					ctxt->trig_mode,
+					QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR,
+					-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr4_high_l = (uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_HIGH_L_MASK,
+			ctxt->bs_addr);
+	baddr4_high_h = (uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_HIGH_H_MASK,
+			ctxt->bs_addr);
+	baddr4_low = (uint32_t)FIELD_GET(EQDMA_COMPL_CTXT_BADDR_LOW_MASK,
+			ctxt->bs_addr);
+
+	pidx_l = FIELD_GET(QDMA_COMPL_CTXT_PIDX_GET_L_MASK, ctxt->pidx);
+	pidx_h = FIELD_GET(QDMA_COMPL_CTXT_PIDX_GET_H_MASK, ctxt->pidx);
+
+	pasid_l =
+		FIELD_GET(EQDMA_CMPL_CTXT_PASID_GET_L_MASK, ctxt->pasid);
+	pasid_h =
+		FIELD_GET(EQDMA_CMPL_CTXT_PASID_GET_H_MASK, ctxt->pasid);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK,
+				ctxt->en_stat_desc) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_EN_INT_MASK, ctxt->en_int) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_TRIG_MODE_MASK, ctxt->trig_mode) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_FNC_ID_MASK, ctxt->fnc_id) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_CNTER_IX_MASK,
+				ctxt->counter_idx) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_TIMER_IX_MASK, ctxt->timer_idx) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_INT_ST_MASK, ctxt->in_st) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_COLOR_MASK, ctxt->color) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_QSIZE_IX_MASK, ctxt->ringsz_idx);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W1_BADDR4_HIGH_L_MASK, baddr4_high_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W2_BADDR4_HIGH_H_MASK, baddr4_high_h) |
+		FIELD_SET(CMPL_CTXT_DATA_W2_DESC_SIZE_MASK, ctxt->desc_sz) |
+		FIELD_SET(CMPL_CTXT_DATA_W2_PIDX_L_MASK, pidx_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W3_PIDX_H_MASK, pidx_h) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_CIDX_MASK, ctxt->cidx) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_VALID_MASK, ctxt->valid) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_ERR_MASK, ctxt->err) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK,
+				ctxt->user_trig_pend);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W4_TIMER_RUNNING_MASK,
+				ctxt->timer_running) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_FULL_UPD_MASK, ctxt->full_upd) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_OVF_CHK_DIS_MASK,
+				ctxt->ovf_chk_dis) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_AT_MASK, ctxt->at) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_VEC_MASK, ctxt->vec) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_INT_AGGR_MASK, ctxt->int_aggr) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK,
+				ctxt->dis_intr_on_vf) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_VIO_MASK, ctxt->vio) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_DIR_C2H_MASK, ctxt->dir_c2h) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_HOST_ID_MASK, ctxt->host_id) |
+		FIELD_SET(CMPL_CTXT_DATA_W4_PASID_L_MASK, pasid_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W5_PASID_H_MASK, pasid_h) |
+		FIELD_SET(CMPL_CTXT_DATA_W5_PASID_EN_MASK,
+				ctxt->pasid_en) |
+		FIELD_SET(CMPL_CTXT_DATA_W5_BADDR4_LOW_MASK,
+				baddr4_low) |
+		FIELD_SET(CMPL_CTXT_DATA_W5_VIO_EOP_MASK, ctxt->vio_eop) |
+		FIELD_SET(CMPL_CTXT_DATA_W5_SH_CMPT_MASK, ctxt->sh_cmpt);
+
+	return eqdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			cmpt_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_cmpt_context_read() - read completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_cmpt_context_read(void *dev_hndl, uint16_t hw_qid,
+			   struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t cmpt_ctxt[EQDMA_CMPT_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+	uint32_t baddr4_high_l, baddr4_high_h, baddr4_low,
+			pidx_l, pidx_h, pasid_l, pasid_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or cmpt ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			EQDMA_CMPT_CONTEXT_NUM_WORDS, cmpt_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->en_stat_desc =
+		FIELD_GET(CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK, cmpt_ctxt[0]);
+	ctxt->en_int = FIELD_GET(CMPL_CTXT_DATA_W0_EN_INT_MASK, cmpt_ctxt[0]);
+	ctxt->trig_mode =
+		FIELD_GET(CMPL_CTXT_DATA_W0_TRIG_MODE_MASK, cmpt_ctxt[0]);
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_FNC_ID_MASK,
+			cmpt_ctxt[0]));
+	ctxt->counter_idx =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_CNTER_IX_MASK,
+			cmpt_ctxt[0]));
+	ctxt->timer_idx =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_TIMER_IX_MASK,
+			cmpt_ctxt[0]));
+	ctxt->in_st =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_INT_ST_MASK,
+			cmpt_ctxt[0]));
+	ctxt->color =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_COLOR_MASK,
+			cmpt_ctxt[0]));
+	ctxt->ringsz_idx =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_QSIZE_IX_MASK,
+			cmpt_ctxt[0]));
+
+	baddr4_high_l = FIELD_GET(CMPL_CTXT_DATA_W1_BADDR4_HIGH_L_MASK,
+			cmpt_ctxt[1]);
+
+	baddr4_high_h = FIELD_GET(CMPL_CTXT_DATA_W2_BADDR4_HIGH_H_MASK,
+			cmpt_ctxt[2]);
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W2_DESC_SIZE_MASK,
+			cmpt_ctxt[2]));
+	pidx_l = FIELD_GET(CMPL_CTXT_DATA_W2_PIDX_L_MASK, cmpt_ctxt[2]);
+
+	pidx_h = FIELD_GET(CMPL_CTXT_DATA_W3_PIDX_H_MASK, cmpt_ctxt[3]);
+	ctxt->cidx =
+		(uint16_t)(FIELD_GET(CMPL_CTXT_DATA_W3_CIDX_MASK,
+			cmpt_ctxt[3]));
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_VALID_MASK,
+			cmpt_ctxt[3]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_ERR_MASK,
+			cmpt_ctxt[3]));
+	ctxt->user_trig_pend = (uint8_t)
+		(FIELD_GET(CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK,
+			cmpt_ctxt[3]));
+
+	ctxt->timer_running =
+		FIELD_GET(CMPL_CTXT_DATA_W4_TIMER_RUNNING_MASK, cmpt_ctxt[4]);
+	ctxt->full_upd =
+		FIELD_GET(CMPL_CTXT_DATA_W4_FULL_UPD_MASK, cmpt_ctxt[4]);
+	ctxt->ovf_chk_dis =
+		FIELD_GET(CMPL_CTXT_DATA_W4_OVF_CHK_DIS_MASK, cmpt_ctxt[4]);
+	ctxt->at = FIELD_GET(CMPL_CTXT_DATA_W4_AT_MASK, cmpt_ctxt[4]);
+	ctxt->vec = FIELD_GET(CMPL_CTXT_DATA_W4_VEC_MASK, cmpt_ctxt[4]);
+	ctxt->int_aggr = (uint8_t)
+		(FIELD_GET(CMPL_CTXT_DATA_W4_INT_AGGR_MASK, cmpt_ctxt[4]));
+	ctxt->dis_intr_on_vf = (uint8_t)
+		FIELD_GET(CMPL_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK,
+				cmpt_ctxt[4]);
+	ctxt->vio = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W4_VIO_MASK,
+			cmpt_ctxt[4]);
+	ctxt->dir_c2h = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W4_DIR_C2H_MASK,
+			cmpt_ctxt[4]);
+	ctxt->host_id = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W4_HOST_ID_MASK,
+			cmpt_ctxt[4]);
+	pasid_l = FIELD_GET(CMPL_CTXT_DATA_W4_PASID_L_MASK, cmpt_ctxt[4]);
+
+	pasid_h = (uint32_t)FIELD_GET(CMPL_CTXT_DATA_W5_PASID_H_MASK,
+			cmpt_ctxt[5]);
+	ctxt->pasid_en = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W5_PASID_EN_MASK,
+			cmpt_ctxt[5]);
+	baddr4_low = (uint8_t)FIELD_GET
+			(CMPL_CTXT_DATA_W5_BADDR4_LOW_MASK, cmpt_ctxt[5]);
+	ctxt->vio_eop = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W5_VIO_EOP_MASK,
+			cmpt_ctxt[5]);
+	ctxt->sh_cmpt = (uint8_t)FIELD_GET(CMPL_CTXT_DATA_W5_SH_CMPT_MASK,
+			cmpt_ctxt[5]);
+
+	ctxt->bs_addr =
+		FIELD_SET(EQDMA_COMPL_CTXT_BADDR_HIGH_L_MASK,
+				(uint64_t)baddr4_high_l) |
+		FIELD_SET(EQDMA_COMPL_CTXT_BADDR_HIGH_H_MASK,
+				(uint64_t)baddr4_high_h) |
+		FIELD_SET(EQDMA_COMPL_CTXT_BADDR_LOW_MASK,
+				(uint64_t)baddr4_low);
+
+	ctxt->pasid =
+		FIELD_SET(EQDMA_CMPL_CTXT_PASID_GET_L_MASK, pasid_l) |
+		FIELD_SET(EQDMA_CMPL_CTXT_PASID_GET_H_MASK,
+				(uint64_t)pasid_h);
+
+	ctxt->pidx =
+		FIELD_SET(QDMA_COMPL_CTXT_PIDX_GET_L_MASK, pidx_l) |
+		FIELD_SET(QDMA_COMPL_CTXT_PIDX_GET_H_MASK, pidx_h);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_cmpt_context_clear() - clear completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_cmpt_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_cmpt_context_invalidate() - invalidate completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_cmpt_context_invalidate(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_cmpt_ctx_conf() - configure completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_cmpt_context_read(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = eqdma_cmpt_context_write(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_cmpt_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_cmpt_context_invalidate(dev_hndl, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
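+
+/*
+ * Illustrative completion context setup (values hypothetical). The write
+ * path rejects trig_mode values above QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR.
+ *
+ *	struct qdma_descq_cmpt_ctxt cmpt_ctxt = {0};
+ *
+ *	cmpt_ctxt.valid = 1;
+ *	cmpt_ctxt.en_stat_desc = 1;
+ *	cmpt_ctxt.trig_mode = 0;	// any mode <= ..._TMR_CNTR
+ *	cmpt_ctxt.bs_addr = cmpt_ring_iova;	// CMPT ring base
+ *	eqdma_cmpt_ctx_conf(dev_hndl, qid, &cmpt_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */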
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_context_read() - read hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_hw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid, struct qdma_descq_hw_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t hw_ctxt[EQDMA_HW_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or hw_ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			EQDMA_HW_CONTEXT_NUM_WORDS, hw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->cidx = FIELD_GET(HW_IND_CTXT_DATA_W0_CIDX_MASK, hw_ctxt[0]);
+	ctxt->crd_use =
+		(uint16_t)(FIELD_GET(HW_IND_CTXT_DATA_W0_CRD_USE_MASK,
+					hw_ctxt[0]));
+
+	ctxt->dsc_pend =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_DSC_PND_MASK,
+					hw_ctxt[1]));
+	ctxt->idl_stp_b =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_IDL_STP_B_MASK,
+			hw_ctxt[1]));
+	ctxt->evt_pnd =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_EVT_PND_MASK,
+			hw_ctxt[1]));
+	ctxt->fetch_pnd = (uint8_t)
+		(FIELD_GET(HW_IND_CTXT_DATA_W1_DSC_PND_MASK, hw_ctxt[1]));
+
+	qdma_log_debug("%s: cidx=%u, crd_use=%u, dsc_pend=%x\n",
+			__func__, ctxt->cidx, ctxt->crd_use, ctxt->dsc_pend);
+	qdma_log_debug("%s: idl_stp_b=%x, evt_pnd=%x, fetch_pnd=%x\n",
+			__func__, ctxt->idl_stp_b, ctxt->evt_pnd,
+			ctxt->fetch_pnd);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_context_clear() - clear hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_hw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_context_invalidate() - invalidate hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_hw_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_ctx_conf() - configure HW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	/** c2h must be 0 (H2C) or 1 (C2H);
+	 *  return an error for any other value
+	 */
+	if (c2h > 1) {
+		qdma_log_error("%s: c2h(%d) invalid, err:%d\n",
+						__func__,
+						c2h,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_hw_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_hw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_hw_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
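+
+/*
+ * The HW context is owned by the hardware, so only READ, CLEAR and
+ * INVALIDATE are accepted; QDMA_HW_ACCESS_WRITE returns
+ * -QDMA_ERR_INV_PARAM. Illustrative read:
+ *
+ *	struct qdma_descq_hw_ctxt hw_ctxt = {0};
+ *
+ *	eqdma_hw_ctx_conf(dev_hndl, 1, qid, &hw_ctxt, QDMA_HW_ACCESS_READ);
+ *	// hw_ctxt.cidx and hw_ctxt.crd_use now reflect hardware state.
+ */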
+
+/*****************************************************************************/
+/**
+ * eqdma_credit_context_read() - read credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_credit_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_credit_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t cr_ctxt[EQDMA_CR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p credit_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			EQDMA_CR_CONTEXT_NUM_WORDS, cr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->credit = FIELD_GET(CRED_CTXT_DATA_W0_CREDT_MASK, cr_ctxt[0]);
+
+	qdma_log_debug("%s: credit=%u\n", __func__, ctxt->credit);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_credit_context_clear() - clear credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_credit_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_credit_context_invalidate() - invalidate credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_credit_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_credit_ctx_conf() - configure credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_credit_ctx_conf(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid, struct qdma_descq_credit_ctxt *ctxt,
+		enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	/** c2h must be 0 (H2C) or 1 (C2H);
+	 *  return an error for any other value
+	 */
+	if (c2h > 1) {
+		qdma_log_error("%s: c2h(%d) invalid, err:%d\n",
+						__func__,
+						c2h,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_credit_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_credit_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_credit_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: Invalid access type=%d, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_indirect_intr_context_write() - create indirect interrupt context
+ *					and program it
+ *
+ * @dev_hndl:   device handle
+ * @ring_index: indirect interrupt ring index
+ * @ctxt:	pointer to the interrupt context data strucutre
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_indirect_intr_context_write(void *dev_hndl,
+		uint16_t ring_index, const struct qdma_indirect_intr_ctxt *ctxt)
+{
+	uint32_t intr_ctxt[EQDMA_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint32_t baddr_l, baddr_m, baddr_h, pasid_l, pasid_h;
+	uint16_t num_words_count = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr_l = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_L_MASK,
+			ctxt->baddr_4k);
+	baddr_m = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_M_MASK,
+			ctxt->baddr_4k);
+	baddr_h = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_H_MASK,
+			ctxt->baddr_4k);
+
+	pasid_l =
+		FIELD_GET(EQDMA_INTR_CTXT_PASID_GET_L_MASK, ctxt->pasid);
+	pasid_h =
+		FIELD_GET(EQDMA_INTR_CTXT_PASID_GET_H_MASK, ctxt->pasid);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W0_VALID_MASK, ctxt->valid) |
+		FIELD_SET(INTR_CTXT_DATA_W0_VEC_MASK, ctxt->vec) |
+		FIELD_SET(INTR_CTXT_DATA_W0_INT_ST_MASK, ctxt->int_st) |
+		FIELD_SET(INTR_CTXT_DATA_W0_COLOR_MASK, ctxt->color) |
+		FIELD_SET(INTR_CTXT_DATA_W0_BADDR_4K_L_MASK, baddr_l);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W1_BADDR_4K_M_MASK, baddr_m);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W2_BADDR_4K_H_MASK, baddr_h) |
+		FIELD_SET(INTR_CTXT_DATA_W2_PAGE_SIZE_MASK, ctxt->page_size) |
+		FIELD_SET(INTR_CTXT_DATA_W2_PIDX_MASK, ctxt->pidx) |
+		FIELD_SET(INTR_CTXT_DATA_W2_AT_MASK, ctxt->at) |
+		FIELD_SET(INTR_CTXT_DATA_W2_HOST_ID_MASK, ctxt->host_id) |
+		FIELD_SET(INTR_CTXT_DATA_W2_PASID_L_MASK, pasid_l);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W3_PASID_H_MASK, pasid_h) |
+		FIELD_SET(INTR_CTXT_DATA_W3_PASID_EN_MASK, ctxt->pasid_en) |
+		FIELD_SET(INTR_CTXT_DATA_W3_FUNC_MASK, ctxt->func_id);
+
+	return eqdma_indirect_reg_write(dev_hndl, sel, ring_index,
+			intr_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_indirect_intr_context_read() - read indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_indirect_intr_context_read(void *dev_hndl,
+		uint16_t ring_index, struct qdma_indirect_intr_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t intr_ctxt[EQDMA_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint64_t baddr_l, baddr_m, baddr_h, pasid_l, pasid_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_indirect_reg_read(dev_hndl, sel, ring_index,
+			EQDMA_IND_INTR_CONTEXT_NUM_WORDS, intr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->valid = FIELD_GET(INTR_CTXT_DATA_W0_VALID_MASK, intr_ctxt[0]);
+	ctxt->vec = FIELD_GET(INTR_CTXT_DATA_W0_VEC_MASK, intr_ctxt[0]);
+	ctxt->int_st =
+		(uint8_t)(FIELD_GET(INTR_CTXT_DATA_W0_INT_ST_MASK,
+			intr_ctxt[0]));
+	ctxt->color =
+		(uint8_t)(FIELD_GET(INTR_CTXT_DATA_W0_COLOR_MASK,
+			intr_ctxt[0]));
+	baddr_l = FIELD_GET(INTR_CTXT_DATA_W0_BADDR_4K_L_MASK, intr_ctxt[0]);
+
+	baddr_m = FIELD_GET(INTR_CTXT_DATA_W1_BADDR_4K_M_MASK, intr_ctxt[1]);
+
+	baddr_h = FIELD_GET(INTR_CTXT_DATA_W2_BADDR_4K_H_MASK, intr_ctxt[2]);
+	ctxt->page_size =
+		FIELD_GET(INTR_CTXT_DATA_W2_PAGE_SIZE_MASK, intr_ctxt[2]);
+	ctxt->pidx =
+		(uint16_t)(FIELD_GET(INTR_CTXT_DATA_W2_PIDX_MASK,
+			intr_ctxt[2]));
+	ctxt->at =
+		(uint8_t)(FIELD_GET(INTR_CTXT_DATA_W2_AT_MASK, intr_ctxt[2]));
+	ctxt->host_id = (uint8_t)(FIELD_GET(INTR_CTXT_DATA_W2_HOST_ID_MASK,
+			intr_ctxt[2]));
+	pasid_l = FIELD_GET(INTR_CTXT_DATA_W2_PASID_L_MASK,
+			intr_ctxt[2]);
+
+	pasid_h = FIELD_GET(INTR_CTXT_DATA_W3_PASID_H_MASK, intr_ctxt[3]);
+	ctxt->pasid_en = (uint8_t)FIELD_GET(INTR_CTXT_DATA_W3_PASID_EN_MASK,
+			intr_ctxt[3]);
+
+	ctxt->func_id = (uint16_t)FIELD_GET(INTR_CTXT_DATA_W3_FUNC_MASK,
+			intr_ctxt[3]);
+
+	ctxt->baddr_4k =
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_L_MASK, baddr_l) |
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_M_MASK, baddr_m) |
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_H_MASK, baddr_h);
+
+	ctxt->pasid =
+		FIELD_SET(EQDMA_INTR_CTXT_PASID_GET_L_MASK, pasid_l) |
+		FIELD_SET(EQDMA_INTR_CTXT_PASID_GET_H_MASK, pasid_h);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_indirect_intr_context_clear() - clear indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_indirect_intr_context_clear(void *dev_hndl,
+		uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_clear(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_indirect_intr_context_invalidate() - invalidate indirect interrupt
+ * context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_indirect_intr_context_invalidate(void *dev_hndl,
+					  uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return eqdma_indirect_reg_invalidate(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_indirect_intr_ctx_conf() - configure indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+				struct qdma_indirect_intr_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_indirect_intr_context_read(dev_hndl, ring_index,
+							ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = eqdma_indirect_intr_context_write(dev_hndl, ring_index,
+							ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = eqdma_indirect_intr_context_clear(dev_hndl,
+							ring_index);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = eqdma_indirect_intr_context_invalidate(dev_hndl,
+								ring_index);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
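+
+/*
+ * Illustrative interrupt ring context setup (values hypothetical; the
+ * exact encoding of baddr_4k follows the QDMA_INTR_CTXT_BADDR_* masks):
+ *
+ *	struct qdma_indirect_intr_ctxt intr_ctxt = {0};
+ *
+ *	intr_ctxt.valid = 1;
+ *	intr_ctxt.vec = msix_vector;		// MSI-X vector to raise
+ *	intr_ctxt.baddr_4k = ring_iova;		// 4K-aligned ring base
+ *	intr_ctxt.page_size = 0;		// smallest ring size index
+ *	eqdma_indirect_intr_ctx_conf(dev_hndl, ring_index, &intr_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */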
+
+/*****************************************************************************/
+/**
+ * eqdma_dump_config_regs() - Function to get qdma config register dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @buf:        pointer to buffer to be filled
+ * @buflen:     Length of the buffer
+ *
+ * Return:	length of the buffer filled on success and < 0 on failure
+ *****************************************************************************/
+int eqdma_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen)
+{
+	uint32_t i = 0, j = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs = eqdma_config_num_regs_get();
+	uint32_t len = 0, val = 0;
+	int rv = QDMA_SUCCESS;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (buflen < eqdma_reg_dump_buf_len()) {
+		qdma_log_error("%s: Buffer too small, err:%d\n",
+					__func__, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	if (is_vf) {
+		qdma_log_error("%s: Wrong API used for VF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	reg_info = eqdma_config_regs_get();
+
+	for (i = 0; i < num_regs; i++) {
+		int mask = get_capability_mask(dev_cap.mm_en,
+					       dev_cap.st_en,
+					       dev_cap.mm_cmpt_en,
+					       dev_cap.mailbox_en);
+
+		if ((mask & reg_info[i].mode) == 0)
+			continue;
+
+		/* If Debug Mode not enabled and the current register
+		 * is debug register, skip reading it.
+		 */
+		if (dev_cap.debug_mode == 0 &&
+				reg_info[i].is_debug_reg == 1)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s", reg_info[i].name);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+			val = qdma_reg_read(dev_hndl,
+					(reg_info[i].addr + (j * 4)));
+			rv = dump_reg(buf + len, buflen - len,
+					(reg_info[i].addr + (j * 4)),
+						name, val);
+			if (rv < 0) {
+				qdma_log_error
+				("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+}
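+
+/*
+ * Illustrative usage: the caller sizes the buffer from
+ * eqdma_reg_dump_buf_len() before dumping (allocator choice is the
+ * caller's; rte_zmalloc() shown as an example):
+ *
+ *	uint32_t buflen = eqdma_reg_dump_buf_len();
+ *	char *buf = rte_zmalloc(NULL, buflen, 0);
+ *
+ *	if (buf != NULL) {
+ *		int len = eqdma_dump_config_regs(dev_hndl, 0, buf, buflen);
+ *		if (len > 0)
+ *			qdma_log_debug("%s", buf);
+ *		rte_free(buf);
+ *	}
+ */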
+
+/*****************************************************************************/
+/**
+ * eqdma_dump_queue_context() - Function to get qdma queue context dump
+ * in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @st:			Queue Mode (ST or MM)
+ * @q_type:		Queue type (H2C/C2H/CMPT)
+ * @ctxt_data:	Queue Context
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length of data written to the buffer on success, < 0 on failure
+ *****************************************************************************/
+int eqdma_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!ctxt_data) {
+		qdma_log_error("%s: ctxt_data is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type >= QDMA_DEV_Q_TYPE_MAX) {
+		qdma_log_error("%s: invalid q_type, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_context_buf_len(st, q_type, &req_buflen);
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_eqdma_context(ctxt_data, st, q_type,
+				buf, buflen);
+
+	return rv;
+}
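+
+/* Usage sketch (illustrative, not part of this patch): the required buffer
+ * length depends on the queue mode and type, so callers query it first via
+ * eqdma_context_buf_len(). The ctxt_data/buf variables are placeholders.
+ *
+ *	uint32_t req_len = 0;
+ *
+ *	rv = eqdma_context_buf_len(1, QDMA_DEV_Q_TYPE_C2H, &req_len);
+ *	if (rv == QDMA_SUCCESS)
+ *		rv = eqdma_dump_queue_context(dev_hndl, 1,
+ *				QDMA_DEV_Q_TYPE_C2H, &ctxt_data,
+ *				buf, req_len);
+ */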
+
+/*****************************************************************************/
+/**
+ * eqdma_dump_intr_context() - Function to get qdma interrupt context dump
+ * in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @intr_ctx:	Interrupt Context
+ * @ring_index: Ring index
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length of data written to the buffer on success, < 0 on failure
+ *****************************************************************************/
+int eqdma_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!intr_ctx) {
+		qdma_log_error("%s: intr_ctx is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	req_buflen = eqdma_intr_context_buf_len();
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_eqdma_intr_context(intr_ctx, ring_index, buf, buflen);
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_dump_queue_context() - Function to read and dump the queue
+ * context in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @qid_hw:     queue id
+ * @st:			Queue Mode (ST or MM)
+ * @q_type:		Queue type (H2C/C2H/CMPT)
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length of data written to the buffer on success, < 0 on failure
+ *****************************************************************************/
+int eqdma_read_dump_queue_context(void *dev_hndl,
+		uint16_t qid_hw,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		char *buf, uint32_t buflen)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t req_buflen = 0;
+	struct qdma_descq_context context;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type >= QDMA_DEV_Q_TYPE_MAX) {
+		qdma_log_error("%s: Not supported for q_type, err = %d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = eqdma_context_buf_len(st, q_type, &req_buflen);
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	qdma_memset(&context, 0, sizeof(struct qdma_descq_context));
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		rv = eqdma_sw_ctx_conf(dev_hndl, (uint8_t)q_type, qid_hw,
+				&context.sw_ctxt, QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read sw context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = eqdma_hw_ctx_conf(dev_hndl, (uint8_t)q_type, qid_hw,
+				&context.hw_ctxt, QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read hw context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = eqdma_credit_ctx_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw, &context.cr_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read credit context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			rv = eqdma_pfetch_ctx_conf(dev_hndl,
+					qid_hw,
+					&context.pfetch_ctxt,
+					QDMA_HW_ACCESS_READ);
+			if (rv < 0) {
+				qdma_log_error
+			("%s: Failed to read pftech context, err = %d",
+						__func__, rv);
+				return rv;
+			}
+		}
+	}
+
+	if ((st && q_type == QDMA_DEV_Q_TYPE_C2H) ||
+			(!st && q_type == QDMA_DEV_Q_TYPE_CMPT)) {
+		rv = eqdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+						&context.cmpt_ctxt,
+						 QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read cmpt context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+	}
+
+	rv = dump_eqdma_context(&context, st, q_type,
+				buf, buflen);
+
+	return rv;
+}
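+
+/* Note (summary of the flow above, with placeholder qid/buffer): for an
+ * ST C2H queue this routine gathers the SW, HW, credit, prefetch and
+ * completion contexts before formatting; for an MM CMPT queue only the
+ * completion context applies.
+ *
+ *	rv = eqdma_read_dump_queue_context(dev_hndl, qid_hw, 1,
+ *				QDMA_DEV_Q_TYPE_C2H, buf, buflen);
+ */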
+
+/*****************************************************************************/
+/**
+ * eqdma_get_user_bar() - Function to get the AXI Master Lite(user bar) number
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @func_id:	function id of the PF
+ * @user_bar:	pointer to hold the AXI Master Lite bar number
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar)
+{
+	uint8_t bar_found = 0;
+	uint8_t bar_idx = 0;
+	uint32_t user_bar_id = 0;
+	uint32_t reg_addr = (is_vf) ?  EQDMA_OFFSET_VF_USER_BAR :
+			EQDMA_OFFSET_GLBL2_PF_BARLITE_EXT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!user_bar) {
+		qdma_log_error("%s: AXI Master Lite bar is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	user_bar_id = qdma_reg_read(dev_hndl, reg_addr);
+	user_bar_id = (user_bar_id >> (6 * func_id)) & 0x3F;
+
+	for (bar_idx = 0; bar_idx < QDMA_BAR_NUM; bar_idx++) {
+		if (user_bar_id & (1 << bar_idx)) {
+			*user_bar = bar_idx;
+			bar_found = 1;
+			break;
+		}
+	}
+	if (bar_found == 0) {
+		*user_bar = 0;
+		qdma_log_error("%s: Bar not found, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_BAR_NOT_FOUND);
+		return -QDMA_ERR_HWACC_BAR_NOT_FOUND;
+	}
+
+	return QDMA_SUCCESS;
+}
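+
+/* Note (illustrative): the BARLITE register packs a 6-bit one-hot bar map
+ * per function, hence the shift by (6 * func_id) and the 0x3F mask above.
+ * E.g. a raw per-function value of 0x4 selects BAR2:
+ *
+ *	bar_map = (reg >> (6 * func_id)) & 0x3F;   // 0b000100
+ *	// lowest set bit is bit 2 -> *user_bar = 2
+ */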
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_ram_sbe_err_process() - Function to dump SBE error debug information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_ram_sbe_err_process(void *dev_hndl)
+{
+	eqdma_dump_reg_info(dev_hndl, EQDMA_RAM_SBE_STS_A_ADDR,
+						1, NULL, 0);
+	eqdma_dump_reg_info(dev_hndl, EQDMA_RAM_SBE_STS_1_A_ADDR,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_ram_dbe_err_process() - Function to dump DBE error debug information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_ram_dbe_err_process(void *dev_hndl)
+{
+	eqdma_dump_reg_info(dev_hndl, EQDMA_RAM_DBE_STS_A_ADDR,
+						1, NULL, 0);
+	eqdma_dump_reg_info(dev_hndl, EQDMA_RAM_DBE_STS_1_A_ADDR,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_desc_err_process() - Function to dump Descriptor Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_desc_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t desc_err_reg_list[] = {
+		EQDMA_GLBL_DSC_ERR_STS_ADDR,
+		EQDMA_GLBL_DSC_ERR_LOG0_ADDR,
+		EQDMA_GLBL_DSC_ERR_LOG1_ADDR,
+		EQDMA_GLBL_DSC_DBG_DAT0_ADDR,
+		EQDMA_GLBL_DSC_DBG_DAT1_ADDR,
+		EQDMA_GLBL_DSC_ERR_LOG2_ADDR
+	};
+	int desc_err_num_regs = sizeof(desc_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < desc_err_num_regs; i++) {
+		eqdma_dump_reg_info(dev_hndl, desc_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_trq_err_process() - Function to dump Target Access Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_trq_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t trq_err_reg_list[] = {
+		EQDMA_GLBL_TRQ_ERR_STS_ADDR,
+		EQDMA_GLBL_TRQ_ERR_LOG_ADDR
+	};
+	int trq_err_reg_num_regs = sizeof(trq_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < trq_err_reg_num_regs; i++) {
+		eqdma_dump_reg_info(dev_hndl, trq_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_st_h2c_err_process() - Function to dump ST H2C Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_st_h2c_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_h2c_err_reg_list[] = {
+		EQDMA_H2C_ERR_STAT_ADDR,
+		EQDMA_H2C_FIRST_ERR_QID_ADDR,
+		EQDMA_H2C_DBG_REG0_ADDR,
+		EQDMA_H2C_DBG_REG1_ADDR,
+		EQDMA_H2C_DBG_REG2_ADDR,
+		EQDMA_H2C_DBG_REG3_ADDR,
+		EQDMA_H2C_DBG_REG4_ADDR
+	};
+	int st_h2c_err_num_regs = sizeof(st_h2c_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_h2c_err_num_regs; i++) {
+		eqdma_dump_reg_info(dev_hndl, st_h2c_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_st_c2h_err_process() - Function to dump ST C2H Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void eqdma_hw_st_c2h_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_c2h_err_reg_list[] = {
+		EQDMA_C2H_ERR_STAT_ADDR,
+		EQDMA_C2H_FATAL_ERR_STAT_ADDR,
+		EQDMA_C2H_FIRST_ERR_QID_ADDR,
+		EQDMA_C2H_STAT_S_AXIS_C2H_ACCEPTED_ADDR,
+		EQDMA_C2H_STAT_S_AXIS_WRB_ACCEPTED_ADDR,
+		EQDMA_C2H_STAT_DESC_RSP_PKT_ACCEPTED_ADDR,
+		EQDMA_C2H_STAT_AXIS_PKG_CMP_ADDR,
+		EQDMA_C2H_STAT_DBG_DMA_ENG_0_ADDR,
+		EQDMA_C2H_STAT_DBG_DMA_ENG_1_ADDR,
+		EQDMA_C2H_STAT_DBG_DMA_ENG_2_ADDR,
+		EQDMA_C2H_STAT_DBG_DMA_ENG_3_ADDR,
+		EQDMA_C2H_STAT_DESC_RSP_DROP_ACCEPTED_ADDR,
+		EQDMA_C2H_STAT_DESC_RSP_ERR_ACCEPTED_ADDR
+	};
+	int st_c2h_err_num_regs = sizeof(st_c2h_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_c2h_err_num_regs; i++) {
+		eqdma_dump_reg_info(dev_hndl, st_c2h_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_get_error_name() - Function to get the error in string format
+ *
+ * @err_idx: error index
+ *
+ * Return: error name string on success and NULL on failure
+ *****************************************************************************/
+const char *eqdma_hw_get_error_name(uint32_t err_idx)
+{
+	if (err_idx >= EQDMA_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, returning NULL\n",
+				__func__, (enum eqdma_error_idx)err_idx);
+		return NULL;
+	}
+
+	return eqdma_err_info[(enum eqdma_error_idx)err_idx].err_name;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_error_process() - Function to find the error that got
+ * triggered and call the handler qdma_hw_error_handler of that
+ * particular error.
+ *
+ * @dev_hndl: device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_hw_error_process(void *dev_hndl)
+{
+	uint32_t glbl_err_stat = 0, err_stat = 0;
+	uint32_t bit = 0, i = 0;
+	int32_t idx = 0;
+	struct qdma_dev_attributes dev_cap;
+	uint32_t hw_err_position[EQDMA_TOTAL_LEAF_ERROR_AGGREGATORS] = {
+		EQDMA_DSC_ERR_POISON,
+		EQDMA_TRQ_ERR_CSR_UNMAPPED,
+		EQDMA_ST_C2H_ERR_MTY_MISMATCH,
+		EQDMA_ST_FATAL_ERR_MTY_MISMATCH,
+		EQDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+		EQDMA_SBE_1_ERR_RC_RRQ_EVEN_RAM,
+		EQDMA_SBE_ERR_MI_H2C0_DAT,
+		EQDMA_DBE_1_ERR_RC_RRQ_EVEN_RAM,
+		EQDMA_DBE_ERR_MI_H2C0_DAT
+	};
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	glbl_err_stat = qdma_reg_read(dev_hndl, EQDMA_GLBL_ERR_STAT_ADDR);
+
+	if (!glbl_err_stat)
+		return QDMA_HW_ERR_NOT_DETECTED;
+
+	qdma_log_info("%s: Global Err Reg(0x%x) = 0x%x\n",
+				  __func__, EQDMA_GLBL_ERR_STAT_ADDR,
+				  glbl_err_stat);
+
+	for (i = 0; i < EQDMA_TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+		bit = hw_err_position[i];
+
+		if (!dev_cap.st_en && (bit == EQDMA_ST_C2H_ERR_MTY_MISMATCH ||
+				bit == EQDMA_ST_FATAL_ERR_MTY_MISMATCH ||
+				bit == EQDMA_ST_H2C_ERR_ZERO_LEN_DESC))
+			continue;
+
+		err_stat = qdma_reg_read(dev_hndl,
+				eqdma_err_info[bit].stat_reg_addr);
+		if (err_stat) {
+			qdma_log_info("addr = 0x%08x val = 0x%08x",
+					eqdma_err_info[bit].stat_reg_addr,
+					err_stat);
+
+			eqdma_err_info[bit].eqdma_hw_err_process(dev_hndl);
+			for (idx = bit; idx < all_eqdma_hw_errs[i]; idx++) {
+				/* call the platform specific handler */
+				if (err_stat &
+				eqdma_err_info[idx].leaf_err_mask)
+					qdma_log_error("%s detected %s\n",
+						__func__,
+						eqdma_hw_get_error_name(idx));
+			}
+			qdma_reg_write(dev_hndl,
+					eqdma_err_info[bit].stat_reg_addr,
+					err_stat);
+		}
+	}
+
+	/* Write 1 to the global status register to clear the bits */
+	qdma_reg_write(dev_hndl, EQDMA_GLBL_ERR_STAT_ADDR, glbl_err_stat);
+
+	return QDMA_SUCCESS;
+}
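+
+/* Usage sketch (illustrative, not part of this patch): this routine is
+ * meant to run from the error interrupt path; the status writes above
+ * clear the leaf and global bits, re-arming detection. A hypothetical
+ * handler:
+ *
+ *	static void err_intr_handler(void *dev_hndl)
+ *	{
+ *		if (eqdma_hw_error_process(dev_hndl) ==
+ *				QDMA_HW_ERR_NOT_DETECTED)
+ *			return;  // spurious or already-cleared interrupt
+ *	}
+ */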
+
+/*****************************************************************************/
+/**
+ * eqdma_hw_error_enable() - Function to enable all or a specific error
+ *
+ * @dev_hndl: device handle
+ * @err_idx: error index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_hw_error_enable(void *dev_hndl, uint32_t err_idx)
+{
+	uint32_t idx = 0, i = 0;
+	uint32_t reg_val = 0;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (err_idx > EQDMA_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, err:%d\n",
+				__func__, (enum eqdma_error_idx)err_idx,
+				-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (err_idx == EQDMA_ERRS_ALL) {
+		for (i = 0; i < EQDMA_TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+			idx = all_eqdma_hw_errs[i];
+
+			/* Don't access streaming registers in
+			 * MM only bitstreams
+			 */
+			if (!dev_cap.st_en) {
+				if (idx == EQDMA_ST_C2H_ERR_ALL ||
+					idx == EQDMA_ST_FATAL_ERR_ALL ||
+					idx == EQDMA_ST_H2C_ERR_ALL)
+					continue;
+			}
+
+			reg_val = eqdma_err_info[idx].leaf_err_mask;
+			qdma_reg_write(dev_hndl,
+				eqdma_err_info[idx].mask_reg_addr, reg_val);
+
+			reg_val = qdma_reg_read(dev_hndl,
+					EQDMA_GLBL_ERR_MASK_ADDR);
+			reg_val |= FIELD_SET
+				(eqdma_err_info[idx].global_err_mask, 1);
+			qdma_reg_write(dev_hndl, EQDMA_GLBL_ERR_MASK_ADDR,
+					reg_val);
+		}
+
+	} else {
+		/* Don't access streaming registers in MM only bitstreams.
+		 * EQDMA_ST_C2H_ERR_MTY_MISMATCH to EQDMA_ST_H2C_ERR_ALL
+		 * are all ST errors.
+		 */
+		if (!dev_cap.st_en) {
+			if (err_idx >= EQDMA_ST_C2H_ERR_MTY_MISMATCH &&
+					err_idx <= EQDMA_ST_H2C_ERR_ALL)
+				return QDMA_SUCCESS;
+		}
+
+		reg_val = qdma_reg_read(dev_hndl,
+				eqdma_err_info[err_idx].mask_reg_addr);
+		reg_val |= FIELD_SET(eqdma_err_info[err_idx].leaf_err_mask, 1);
+		qdma_reg_write(dev_hndl,
+				eqdma_err_info[err_idx].mask_reg_addr, reg_val);
+
+		reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL_ERR_MASK_ADDR);
+		reg_val |=
+			FIELD_SET(eqdma_err_info[err_idx].global_err_mask, 1);
+		qdma_reg_write(dev_hndl, EQDMA_GLBL_ERR_MASK_ADDR, reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
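+
+/* Usage sketch (illustrative, not part of this patch): passing
+ * EQDMA_ERRS_ALL walks every leaf aggregator, while a single index arms
+ * just that error; typical bring-up enables everything:
+ *
+ *	rv = eqdma_hw_error_enable(dev_hndl, EQDMA_ERRS_ALL);
+ */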
+
+/*****************************************************************************/
+/**
+ * eqdma_get_device_attributes() - Function to get the qdma device
+ * attributes
+ *
+ * @dev_hndl:	device handle
+ * @dev_info:	pointer to hold the device info
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info)
+{
+	uint8_t count = 0;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!dev_info) {
+		qdma_log_error("%s: dev_info is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	/* number of PFs */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL2_PF_BARLITE_INT);
+	if (FIELD_GET(QDMA_GLBL2_PF0_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF1_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF2_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF3_BAR_MAP_MASK, reg_val))
+		count++;
+	dev_info->num_pfs = count;
+
+	/* Number of Qs */
+	reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL2_CHANNEL_CAP_ADDR);
+	dev_info->num_qs =
+			FIELD_GET(GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK, reg_val);
+
+	/* Misc capabilities: FLR, mailbox, debug mode, desc engine mode */
+	reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL2_MISC_CAP_ADDR);
+	dev_info->mailbox_en = FIELD_GET(EQDMA_GLBL2_MAILBOX_EN_MASK,
+		reg_val);
+	dev_info->flr_present = FIELD_GET(EQDMA_GLBL2_FLR_PRESENT_MASK,
+		reg_val);
+	dev_info->mm_cmpt_en  = 0;
+	dev_info->debug_mode = FIELD_GET(EQDMA_GLBL2_DBG_MODE_EN_MASK,
+		reg_val);
+	dev_info->desc_eng_mode = FIELD_GET(EQDMA_GLBL2_DESC_ENG_MODE_MASK,
+		reg_val);
+
+	/* ST/MM enabled? */
+	reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL2_CHANNEL_MDMA_ADDR);
+	dev_info->st_en = (FIELD_GET(GLBL2_CHANNEL_MDMA_C2H_ST_MASK, reg_val) &&
+		FIELD_GET(GLBL2_CHANNEL_MDMA_H2C_ST_MASK, reg_val)) ? 1 : 0;
+	dev_info->mm_en = (FIELD_GET(GLBL2_CHANNEL_MDMA_C2H_ENG_MASK, reg_val) &&
+		FIELD_GET(GLBL2_CHANNEL_MDMA_H2C_ENG_MASK, reg_val)) ? 1 : 0;
+
+	/* num of mm channels */
+	/* TODO : Register not yet defined for this. Hard coding it to 1.*/
+	dev_info->mm_channel_max = 1;
+
+	dev_info->qid2vec_ctx = 0;
+	dev_info->cmpt_ovf_chk_dis = 1;
+	dev_info->mailbox_intr = 1;
+	dev_info->sw_desc_64b = 1;
+	dev_info->cmpt_desc_64b = 1;
+	dev_info->dynamic_bar = 1;
+	dev_info->legacy_intr = 1;
+	dev_info->cmpt_trig_count_timer = 1;
+
+	return QDMA_SUCCESS;
+}
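+
+/* Usage sketch (illustrative, not part of this patch): most helpers in
+ * this file gate register access on the capabilities discovered here, so
+ * callers follow the same pattern:
+ *
+ *	struct qdma_dev_attributes dev_cap;
+ *
+ *	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+ *	if (dev_cap.st_en)
+ *		;  // safe to touch ST-only CSRs (buffer sizes, etc.)
+ */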
+
+/*****************************************************************************/
+/**
+ * eqdma_init_ctxt_memory() - function to initialize the context memory
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int eqdma_init_ctxt_memory(void *dev_hndl)
+{
+#ifdef ENABLE_INIT_CTXT_MEMORY
+	uint32_t data[QDMA_REG_IND_CTXT_REG_COUNT];
+	uint16_t i = 0;
+	struct qdma_dev_attributes dev_info;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_memset(data, 0, sizeof(uint32_t) * QDMA_REG_IND_CTXT_REG_COUNT);
+	eqdma_get_device_attributes(dev_hndl, &dev_info);
+
+	for (; i < dev_info.num_qs; i++) {
+		int sel = QDMA_CTXT_SEL_SW_C2H;
+		int rv;
+
+		for (; sel <= QDMA_CTXT_SEL_PFTCH; sel++) {
+			/** if the st mode(h2c/c2h) not enabled
+			 *  in the design, then skip the PFTCH
+			 *  and CMPT context setup
+			 */
+			if (dev_info.st_en == 0 &&
+				(sel == QDMA_CTXT_SEL_PFTCH ||
+				sel == QDMA_CTXT_SEL_CMPT)) {
+				qdma_log_debug("%s: ST context is skipped:",
+					__func__);
+				qdma_log_debug("sel = %d\n", sel);
+				continue;
+			}
+
+			rv = eqdma_indirect_reg_clear(dev_hndl,
+					(enum ind_ctxt_cmd_sel)sel, i);
+			if (rv < 0)
+				return rv;
+		}
+	}
+
+	/* fmap */
+	for (i = 0; i < dev_info.num_pfs; i++)
+		eqdma_indirect_reg_clear(dev_hndl,
+				QDMA_CTXT_SEL_FMAP, i);
+
+#else
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+#endif
+	return QDMA_SUCCESS;
+}
+
+static int get_reg_entry(uint32_t reg_addr, int *reg_entry)
+{
+	uint32_t i = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs = eqdma_config_num_regs_get();
+
+	reg_info = eqdma_config_regs_get();
+
+	for (i = 0; (i < num_regs - 1); i++) {
+		if (reg_info[i].addr == reg_addr) {
+			*reg_entry = i;
+			break;
+		}
+	}
+
+	if (i >= num_regs - 1) {
+		qdma_log_error("%s: 0x%08x is missing register list, err:%d\n",
+					__func__,
+					reg_addr,
+					-QDMA_ERR_INV_PARAM);
+		*reg_entry = -1;
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_dump_config_reg_list() - Dump the registers
+ *
+ * @dev_hndl:		device handle
+ * @total_regs :	Max registers to read
+ * @reg_list :		array of reg addr and reg values
+ * @buf :		pointer to buffer to be filled
+ * @buflen :		Length of the buffer
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int eqdma_dump_config_reg_list(void *dev_hndl, uint32_t total_regs,
+		struct qdma_reg_data *reg_list, char *buf, uint32_t buflen)
+{
+	uint32_t j = 0, len = 0;
+	uint32_t reg_count = 0;
+	int reg_data_entry;
+	int rv = 0;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct xreg_info *reg_info = eqdma_config_regs_get();
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	for (reg_count = 0;
+			(reg_count < total_regs);) {
+		rv = get_reg_entry(reg_list[reg_count].reg_addr,
+					&reg_data_entry);
+		if (rv < 0) {
+			qdma_log_error("%s: register missing from list, err:%d\n",
+						   __func__,
+						   -QDMA_ERR_INV_PARAM);
+			return rv;
+		}
+
+		/* If Debug Mode is not enabled and the current register
+		 * is a debug register, skip dumping it. Step past all of
+		 * its repeated entries so the loop always advances.
+		 */
+		if (dev_cap.debug_mode == 0 &&
+				reg_info[reg_data_entry].is_debug_reg == 1) {
+			reg_count += reg_info[reg_data_entry].repeat;
+			continue;
+		}
+
+		for (j = 0; j < reg_info[reg_data_entry].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s_%d",
+					reg_info[reg_data_entry].name, j);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error
+					("%d:%s snprintf failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+			rv = dump_reg(buf + len, buflen - len,
+				(reg_info[reg_data_entry].addr + (j * 4)),
+					name,
+					reg_list[reg_count + j].reg_val);
+			if (rv < 0) {
+				qdma_log_error
+				("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+		reg_count += j;
+	}
+
+	return len;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_reg_list() - read the register values
+ *
+ * @dev_hndl:		device handle
+ * @is_vf:		Whether PF or VF
+ * @reg_rd_group:	register group to read
+ * @total_regs :	pointer to hold the number of registers read
+ * @reg_list :		array of reg addr and reg values
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int eqdma_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_group,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list)
+{
+	uint16_t reg_count = 0, i = 0, j = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs = eqdma_config_num_regs_get();
+	struct xreg_info *eqdma_config_regs = eqdma_config_regs_get();
+	struct qdma_dev_attributes dev_cap;
+	uint32_t reg_start_addr = 0;
+	int reg_index = 0;
+	int rv = 0;
+
+	if (!is_vf) {
+		qdma_log_error("%s: not supported for PF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_list) {
+		qdma_log_error("%s: reg_list is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	switch (reg_rd_group) {
+	case QDMA_REG_READ_GROUP_1:
+			reg_start_addr = EQDMA_REG_GROUP_1_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_2:
+			reg_start_addr = EQDMA_REG_GROUP_2_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_3:
+			reg_start_addr = EQDMA_REG_GROUP_3_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_4:
+			reg_start_addr = EQDMA_REG_GROUP_4_START_ADDR;
+			break;
+	default:
+		qdma_log_error("%s: Invalid slot received\n",
+			   __func__);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = get_reg_entry(reg_start_addr, &reg_index);
+	if (rv < 0) {
+		qdma_log_error("%s: register missing in list, err:%d\n",
+					   __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return rv;
+	}
+	reg_info = &eqdma_config_regs[reg_index];
+
+	for (i = 0, reg_count = 0;
+			((i < num_regs - 1 - reg_index) &&
+			(reg_count < QDMA_MAX_REGISTER_DUMP)); i++) {
+		int mask = get_capability_mask(dev_cap.mm_en, dev_cap.st_en,
+				dev_cap.mm_cmpt_en, dev_cap.mailbox_en);
+
+		if (((mask & reg_info[i].mode) == 0) ||
+			reg_info[i].read_type == QDMA_REG_READ_PF_ONLY)
+			continue;
+
+		/* If Debug Mode not enabled and the current register
+		 * is debug register, skip reading it.
+		 */
+		if (dev_cap.debug_mode == 0 &&
+				reg_info[i].is_debug_reg == 1)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat &&
+				(reg_count < QDMA_MAX_REGISTER_DUMP);
+				j++) {
+			reg_list[reg_count].reg_addr =
+					(reg_info[i].addr + (j * 4));
+			reg_list[reg_count].reg_val =
+				qdma_reg_read(dev_hndl,
+					reg_list[reg_count].reg_addr);
+			reg_count++;
+		}
+	}
+
+	*total_regs = reg_count;
+	return rv;
+}
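+
+/* Usage sketch (illustrative, not part of this patch): a VF snapshots one
+ * register group at a time, bounded by QDMA_MAX_REGISTER_DUMP entries.
+ * The reg_list array here is a placeholder.
+ *
+ *	struct qdma_reg_data reg_list[QDMA_MAX_REGISTER_DUMP];
+ *	uint16_t total = 0;
+ *
+ *	rv = eqdma_read_reg_list(dev_hndl, 1, QDMA_REG_READ_GROUP_1,
+ *				 &total, reg_list);
+ */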
+
+/*****************************************************************************/
+/**
+ * eqdma_write_global_ring_sizes() - function to set the global ring size array
+ *
+ * @dev_hndl:   device handle
+ * @index: Index from where the values needs to written
+ * @count: number of entries to be written
+ * @glbl_rng_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_write_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_write_csr_values(dev_hndl, EQDMA_GLBL_RNG_SZ_1_ADDR, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_global_ring_sizes() - function to get the global rng_sz array
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to read
+ * @count:	 number of entries to be read
+ * @glbl_rng_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_read_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_read_csr_values(dev_hndl, EQDMA_GLBL_RNG_SZ_1_ADDR, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_write_global_timer_count() - function to set the timer values
+ *
+ * @dev_hndl:   device handle
+ * @glbl_tmr_cnt: pointer to the array having the values to write
+ * @index:	 Index from where the values needs to written
+ * @count:	 number of entries to be written
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_write_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_TIMER_CNT_ADDR,
+				index, count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_global_timer_count() - function to get the timer values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to read
+ * @count:	 number of entries to be read
+ * @glbl_tmr_cnt: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_read_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl,
+				EQDMA_C2H_TIMER_CNT_ADDR, index,
+				count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_write_global_counter_threshold() - function to set the counter
+ *						threshold values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to written
+ * @count:	 number of entries to be written
+ * @glbl_cnt_th: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_write_global_counter_threshold(void *dev_hndl, uint8_t index,
+		uint8_t count, const uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_CNT_TH_ADDR, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_global_counter_threshold() - function to get the counter threshold
+ * values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to read
+ * @count:	 number of entries to be read
+ * @glbl_cnt_th: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_read_global_counter_threshold(void *dev_hndl, uint8_t index,
+		uint8_t count, uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl, EQDMA_C2H_CNT_TH_ADDR, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_write_global_buffer_sizes() - function to set the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to written
+ * @count:	 number of entries to be written
+ * @glbl_buf_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_write_global_buffer_sizes(void *dev_hndl, uint8_t index,
+		uint8_t count, const uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_write_csr_values(dev_hndl, EQDMA_C2H_BUF_SZ_ADDR, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_read_global_buffer_sizes() - function to get the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values needs to read
+ * @count:	 number of entries to be read
+ * @glbl_buf_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_read_global_buffer_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_read_csr_values(dev_hndl, EQDMA_C2H_BUF_SZ_ADDR, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST is not supported, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_global_csr_conf() - function to configure global csr
+ *
+ * @dev_hndl:	device handle
+ * @index:	Index from where the values needs to read
+ * @count:	number of entries to be read
+ * @csr_val:	uint32_t pointer to csr value
+ * @csr_type:	Type of the CSR (qdma_global_csr_type enum) to configure
+ * @access_type HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_global_csr_conf(void *dev_hndl, uint8_t index, uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (csr_type) {
+	case QDMA_CSR_RING_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = eqdma_read_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = eqdma_write_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_TIMER_CNT:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = eqdma_read_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = eqdma_write_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_CNT_TH:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			eqdma_read_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			eqdma_write_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_BUF_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			eqdma_read_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			eqdma_write_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	default:
+		qdma_log_error("%s: csr_type(%d) invalid, err:%d\n",
+						__func__,
+						csr_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
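+
+/* Usage sketch (illustrative, not part of this patch): the CSR helpers
+ * above are reached through this single dispatcher; e.g. reading back all
+ * 16 global ring sizes:
+ *
+ *	uint32_t rng_sz[QDMA_NUM_RING_SIZES];
+ *
+ *	rv = eqdma_global_csr_conf(dev_hndl, 0, QDMA_NUM_RING_SIZES,
+ *				   rng_sz, QDMA_CSR_RING_SZ,
+ *				   QDMA_HW_ACCESS_READ);
+ */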
+
+/*****************************************************************************/
+/**
+ * eqdma_global_writeback_interval_write() -  function to set the writeback
+ * interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_global_writeback_interval_write(void *dev_hndl,
+		enum qdma_wrb_interval wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (wb_int >= QDMA_NUM_WRB_INTERVALS) {
+		qdma_log_error("%s: wb_int=%d is invalid, err:%d\n",
+					   __func__, wb_int,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL_DSC_CFG_ADDR);
+		reg_val |= FIELD_SET(GLBL_DSC_CFG_WB_ACC_INT_MASK, wb_int);
+		qdma_reg_write(dev_hndl, EQDMA_GLBL_DSC_CFG_ADDR, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_global_writeback_interval_read() -  function to get the writeback
+ * interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int eqdma_global_writeback_interval_read(void *dev_hndl,
+		enum qdma_wrb_interval *wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!wb_int) {
+		qdma_log_error("%s: wb_int is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl, EQDMA_GLBL_DSC_CFG_ADDR);
+		*wb_int = (enum qdma_wrb_interval)FIELD_GET
+				(GLBL_DSC_CFG_WB_ACC_INT_MASK, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * eqdma_global_writeback_interval_conf() - function to configure
+ *					the writeback interval
+ *
+ * @dev_hndl:   device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ * @access_type HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = eqdma_global_writeback_interval_read(dev_hndl, wb_int);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = eqdma_global_writeback_interval_write(dev_hndl, *wb_int);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+	case QDMA_HW_ACCESS_INVALIDATE:
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
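+
+/* Usage sketch (illustrative, not part of this patch): the same conf
+ * entry point both reads and writes the interval; the write path consumes
+ * *wb_int, the read path fills it in. The interval value below is a
+ * placeholder from the qdma_wrb_interval enum.
+ *
+ *	enum qdma_wrb_interval wb = QDMA_WRB_INTERVAL_4;
+ *
+ *	rv = eqdma_global_writeback_interval_conf(dev_hndl, &wb,
+ *				QDMA_HW_ACCESS_WRITE);
+ */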
+
+/*****************************************************************************/
+/**
+ * eqdma_mm_channel_conf() - Function to enable/disable the MM channel
+ *
+ * @dev_hndl:	device handle
+ * @channel:	MM channel number
+ * @is_c2h:	Queue direction. Set 1 for C2H and 0 for H2C
+ * @enable:	Enable or disable MM channel
+ *
+ * Presently, we have only 1 MM channel
+ *
+ * Return:   0   - success and < 0 - failure
+ *****************************************************************************/
+int eqdma_mm_channel_conf(void *dev_hndl, uint8_t channel, uint8_t is_c2h,
+				uint8_t enable)
+{
+	uint32_t reg_addr = (is_c2h) ?  EQDMA_C2H_MM_CTL_ADDR :
+			EQDMA_H2C_MM_CTL_ADDR;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.mm_en)
+		qdma_reg_write(dev_hndl,
+				reg_addr + (channel * QDMA_MM_CONTROL_STEP),
+				enable);
+
+	return QDMA_SUCCESS;
+}
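+
+/* Usage sketch (illustrative, not part of this patch): with a single MM
+ * channel, enabling memory-mapped DMA means flipping both directions of
+ * channel 0:
+ *
+ *	eqdma_mm_channel_conf(dev_hndl, 0, 1, 1);   // C2H enable
+ *	eqdma_mm_channel_conf(dev_hndl, 0, 0, 1);   // H2C enable
+ */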
+
+int eqdma_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+		uint32_t num_regs, char *buf, uint32_t buflen)
+{
+	uint32_t total_num_regs = eqdma_config_num_regs_get();
+	struct xreg_info *config_regs  = eqdma_config_regs_get();
+	struct qdma_dev_attributes dev_cap;
+	const char *bitfield_name;
+	uint32_t i = 0, num_regs_idx = 0, k = 0, j = 0,
+			bitfield = 0, lsb = 0, msb = 31;
+	int rv = 0;
+	uint32_t reg_val;
+	uint32_t data_len = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	eqdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	for (i = 0; i < total_num_regs; i++) {
+		if (reg_addr == config_regs[i].addr) {
+			j = i;
+			break;
+		}
+	}
+
+	if (i == total_num_regs) {
+		qdma_log_error("%s: Register not found err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		if (buf)
+			QDMA_SNPRINTF_S(buf, buflen,
+					DEBGFS_LINE_SZ,
+					"Register not found 0x%x\n", reg_addr);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	num_regs_idx = (j + num_regs < total_num_regs) ?
+					(j + num_regs) : total_num_regs;
+
+	for (; j < num_regs_idx ; j++) {
+		reg_val = qdma_reg_read(dev_hndl,
+				config_regs[j].addr);
+
+		if (buf) {
+			rv = QDMA_SNPRINTF_S(buf, buflen,
+						DEBGFS_LINE_SZ,
+						"\n%-40s 0x%-7x %-#10x %-10d\n",
+						config_regs[j].name,
+						config_regs[j].addr,
+						reg_val, reg_val);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%s: Insufficient buffer, err:%d\n",
+					__func__, -QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			buf += rv;
+			data_len += rv;
+			buflen -= rv;
+		} else {
+			qdma_log_info("%-40s 0x%-7x %-#10x %-10d\n",
+						  config_regs[j].name,
+						  config_regs[j].addr,
+						  reg_val, reg_val);
+		}
+
+		for (k = 0;
+			 k < config_regs[j].num_bitfields; k++) {
+			/* If Debug Mode not enabled and the current register
+			 * is debug register, skip reading it.
+			 */
+			if (dev_cap.debug_mode == 0 &&
+					config_regs[j].is_debug_reg == 1)
+				continue;
+
+			bitfield =
+				config_regs[j].bitfields[k].field_mask;
+			bitfield_name =
+				config_regs[j].bitfields[k].field_name;
+			lsb = 0;
+			msb = 31;
+
+			while (!(BIT(lsb) & bitfield))
+				lsb++;
+
+			while (!(BIT(msb) & bitfield))
+				msb--;
+
+			if (msb != lsb) {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%2u,%2u]   %#-10x\n",
+							bitfield_name,
+							msb, lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error
+							("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info
+						("%-40s [%2u,%2u]   %#-10x\n",
+						bitfield_name,
+						msb, lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+
+			} else {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%5u]   %#-10x\n",
+							bitfield_name,
+							lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error
+							("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info
+						("%-40s [%5u]   %#-10x\n",
+						bitfield_name,
+						lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+			}
+		}
+	}
+
+	return data_len;
+}
diff --git a/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
new file mode 100644
index 0000000000..dc5d3de312
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
@@ -0,0 +1,294 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __EQDMA_SOFT_ACCESS_H_
+#define __EQDMA_SOFT_ACCESS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform.h"
+
+/**
+ * enum eqdma_error_idx - eqdma hardware errors
+ */
+enum eqdma_error_idx {
+	/* Descriptor errors */
+	EQDMA_DSC_ERR_POISON,
+	EQDMA_DSC_ERR_UR_CA,
+	EQDMA_DSC_ERR_BCNT,
+	EQDMA_DSC_ERR_PARAM,
+	EQDMA_DSC_ERR_ADDR,
+	EQDMA_DSC_ERR_TAG,
+	EQDMA_DSC_ERR_FLR,
+	EQDMA_DSC_ERR_TIMEOUT,
+	EQDMA_DSC_ERR_DAT_POISON,
+	EQDMA_DSC_ERR_FLR_CANCEL,
+	EQDMA_DSC_ERR_DMA,
+	EQDMA_DSC_ERR_DSC,
+	EQDMA_DSC_ERR_RQ_CANCEL,
+	EQDMA_DSC_ERR_DBE,
+	EQDMA_DSC_ERR_SBE,
+	EQDMA_DSC_ERR_ALL,
+
+	/* TRQ Errors */
+	EQDMA_TRQ_ERR_CSR_UNMAPPED,
+	EQDMA_TRQ_ERR_VF_ACCESS,
+	EQDMA_TRQ_ERR_TCP_CSR_TIMEOUT,
+	EQDMA_TRQ_ERR_QSPC_UNMAPPED,
+	EQDMA_TRQ_ERR_QID_RANGE,
+	EQDMA_TRQ_ERR_TCP_QSPC_TIMEOUT,
+	EQDMA_TRQ_ERR_ALL,
+
+	/* C2H Errors */
+	EQDMA_ST_C2H_ERR_MTY_MISMATCH,
+	EQDMA_ST_C2H_ERR_LEN_MISMATCH,
+	EQDMA_ST_C2H_ERR_SH_CMPT_DSC,
+	EQDMA_ST_C2H_ERR_QID_MISMATCH,
+	EQDMA_ST_C2H_ERR_DESC_RSP_ERR,
+	EQDMA_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+	EQDMA_ST_C2H_ERR_MSI_INT_FAIL,
+	EQDMA_ST_C2H_ERR_ERR_DESC_CNT,
+	EQDMA_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+	EQDMA_ST_C2H_ERR_CMPT_INV_Q_ERR,
+	EQDMA_ST_C2H_ERR_CMPT_QFULL_ERR,
+	EQDMA_ST_C2H_ERR_CMPT_CIDX_ERR,
+	EQDMA_ST_C2H_ERR_CMPT_PRTY_ERR,
+	EQDMA_ST_C2H_ERR_AVL_RING_DSC,
+	EQDMA_ST_C2H_ERR_HDR_ECC_UNC,
+	EQDMA_ST_C2H_ERR_HDR_ECC_COR,
+	EQDMA_ST_C2H_ERR_ALL,
+
+	/* Fatal Errors */
+	EQDMA_ST_FATAL_ERR_MTY_MISMATCH,
+	EQDMA_ST_FATAL_ERR_LEN_MISMATCH,
+	EQDMA_ST_FATAL_ERR_QID_MISMATCH,
+	EQDMA_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_CMPT_CTXT_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_CMPT_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_WPL_DATA_PAR,
+	EQDMA_ST_FATAL_ERR_AVL_RING_FIFO_RAM_RDBE,
+	EQDMA_ST_FATAL_ERR_HDR_ECC_UNC,
+	EQDMA_ST_FATAL_ERR_ALL,
+
+	/* H2C Errors */
+	EQDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+	EQDMA_ST_H2C_ERR_SDI_MRKR_REQ_MOP,
+	EQDMA_ST_H2C_ERR_NO_DMA_DSC,
+	EQDMA_ST_H2C_ERR_SBE,
+	EQDMA_ST_H2C_ERR_DBE,
+	EQDMA_ST_H2C_ERR_PAR,
+	EQDMA_ST_H2C_ERR_ALL,
+
+	/* Single bit errors (group 1) */
+	EQDMA_SBE_1_ERR_RC_RRQ_EVEN_RAM,
+	EQDMA_SBE_1_ERR_TAG_ODD_RAM,
+	EQDMA_SBE_1_ERR_TAG_EVEN_RAM,
+	EQDMA_SBE_1_ERR_PFCH_CTXT_CAM_RAM_0,
+	EQDMA_SBE_1_ERR_PFCH_CTXT_CAM_RAM_1,
+	EQDMA_SBE_1_ERR_ALL,
+
+	/* Single bit errors */
+	EQDMA_SBE_ERR_MI_H2C0_DAT,
+	EQDMA_SBE_ERR_MI_H2C1_DAT,
+	EQDMA_SBE_ERR_MI_H2C2_DAT,
+	EQDMA_SBE_ERR_MI_H2C3_DAT,
+	EQDMA_SBE_ERR_MI_C2H0_DAT,
+	EQDMA_SBE_ERR_MI_C2H1_DAT,
+	EQDMA_SBE_ERR_MI_C2H2_DAT,
+	EQDMA_SBE_ERR_MI_C2H3_DAT,
+	EQDMA_SBE_ERR_H2C_RD_BRG_DAT,
+	EQDMA_SBE_ERR_H2C_WR_BRG_DAT,
+	EQDMA_SBE_ERR_C2H_RD_BRG_DAT,
+	EQDMA_SBE_ERR_C2H_WR_BRG_DAT,
+	EQDMA_SBE_ERR_FUNC_MAP,
+	EQDMA_SBE_ERR_DSC_HW_CTXT,
+	EQDMA_SBE_ERR_DSC_CRD_RCV,
+	EQDMA_SBE_ERR_DSC_SW_CTXT,
+	EQDMA_SBE_ERR_DSC_CPLI,
+	EQDMA_SBE_ERR_DSC_CPLD,
+	EQDMA_SBE_ERR_MI_TL_SLV_FIFO_RAM,
+	EQDMA_SBE_ERR_TIMER_FIFO_RAM,
+	EQDMA_SBE_ERR_QID_FIFO_RAM,
+	EQDMA_SBE_ERR_WRB_COAL_DATA_RAM,
+	EQDMA_SBE_ERR_INT_CTXT_RAM,
+	EQDMA_SBE_ERR_DESC_REQ_FIFO_RAM,
+	EQDMA_SBE_ERR_PFCH_CTXT_RAM,
+	EQDMA_SBE_ERR_WRB_CTXT_RAM,
+	EQDMA_SBE_ERR_PFCH_LL_RAM,
+	EQDMA_SBE_ERR_PEND_FIFO_RAM,
+	EQDMA_SBE_ERR_RC_RRQ_ODD_RAM,
+	EQDMA_SBE_ERR_ALL,
+
+	/* Double bit errors (group 1) */
+	EQDMA_DBE_1_ERR_RC_RRQ_EVEN_RAM,
+	EQDMA_DBE_1_ERR_TAG_ODD_RAM,
+	EQDMA_DBE_1_ERR_TAG_EVEN_RAM,
+	EQDMA_DBE_1_ERR_PFCH_CTXT_CAM_RAM_0,
+	EQDMA_DBE_1_ERR_PFCH_CTXT_CAM_RAM_1,
+	EQDMA_DBE_1_ERR_ALL,
+
+	/* Double bit Errors */
+	EQDMA_DBE_ERR_MI_H2C0_DAT,
+	EQDMA_DBE_ERR_MI_H2C1_DAT,
+	EQDMA_DBE_ERR_MI_H2C2_DAT,
+	EQDMA_DBE_ERR_MI_H2C3_DAT,
+	EQDMA_DBE_ERR_MI_C2H0_DAT,
+	EQDMA_DBE_ERR_MI_C2H1_DAT,
+	EQDMA_DBE_ERR_MI_C2H2_DAT,
+	EQDMA_DBE_ERR_MI_C2H3_DAT,
+	EQDMA_DBE_ERR_H2C_RD_BRG_DAT,
+	EQDMA_DBE_ERR_H2C_WR_BRG_DAT,
+	EQDMA_DBE_ERR_C2H_RD_BRG_DAT,
+	EQDMA_DBE_ERR_C2H_WR_BRG_DAT,
+	EQDMA_DBE_ERR_FUNC_MAP,
+	EQDMA_DBE_ERR_DSC_HW_CTXT,
+	EQDMA_DBE_ERR_DSC_CRD_RCV,
+	EQDMA_DBE_ERR_DSC_SW_CTXT,
+	EQDMA_DBE_ERR_DSC_CPLI,
+	EQDMA_DBE_ERR_DSC_CPLD,
+	EQDMA_DBE_ERR_MI_TL_SLV_FIFO_RAM,
+	EQDMA_DBE_ERR_TIMER_FIFO_RAM,
+	EQDMA_DBE_ERR_QID_FIFO_RAM,
+	EQDMA_DBE_ERR_WRB_COAL_DATA_RAM,
+	EQDMA_DBE_ERR_INT_CTXT_RAM,
+	EQDMA_DBE_ERR_DESC_REQ_FIFO_RAM,
+	EQDMA_DBE_ERR_PFCH_CTXT_RAM,
+	EQDMA_DBE_ERR_WRB_CTXT_RAM,
+	EQDMA_DBE_ERR_PFCH_LL_RAM,
+	EQDMA_DBE_ERR_PEND_FIFO_RAM,
+	EQDMA_DBE_ERR_RC_RRQ_ODD_RAM,
+	EQDMA_DBE_ERR_ALL,
+
+	EQDMA_ERRS_ALL
+};
+
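+/*
+ * eqdma_hw_err_info - descriptor for one hardware error source: its
+ * index and printable name, the mask/status register pair that enables
+ * and latches it, the bit(s) it occupies in that leaf status register,
+ * the corresponding bit in the global error status register, and the
+ * handler invoked when the error is processed.
+ */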
+struct eqdma_hw_err_info {
+	enum eqdma_error_idx idx;
+	const char *err_name;
+	uint32_t mask_reg_addr;
+	uint32_t stat_reg_addr;
+	uint32_t leaf_err_mask;
+	uint32_t global_err_mask;
+	void (*eqdma_hw_err_process)(void *dev_hndl);
+};
+
+#define EQDMA_OFFSET_VF_VERSION           0x5014
+#define EQDMA_OFFSET_VF_USER_BAR          0x5018
+
+#define EQDMA_OFFSET_MBOX_BASE_PF         0x22400
+#define EQDMA_OFFSET_MBOX_BASE_VF         0x5000
+
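+/*
+ * The completion context stores the 64-bit ring base address split into
+ * three fields: bits [63:38], [37:6] and [5:2]. Bits [1:0] are not
+ * stored and are implied to be zero.
+ */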
+#define EQDMA_COMPL_CTXT_BADDR_HIGH_H_MASK             GENMASK_ULL(63, 38)
+#define EQDMA_COMPL_CTXT_BADDR_HIGH_L_MASK             GENMASK_ULL(37, 6)
+#define EQDMA_COMPL_CTXT_BADDR_LOW_MASK                GENMASK_ULL(5, 2)
+
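+/*
+ * EQDMA hardware access routines exported below; the generic QDMA
+ * access layer is expected to dispatch to these for devices with this
+ * IP revision.
+ */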
+int eqdma_init_ctxt_memory(void *dev_hndl);
+
+int eqdma_get_version(void *dev_hndl, uint8_t is_vf,
+		struct qdma_hw_version_info *version_info);
+
+int eqdma_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			struct qdma_descq_sw_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int eqdma_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int eqdma_credit_ctx_conf(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid, struct qdma_descq_credit_ctxt *ctxt,
+		enum qdma_hw_access_type access_type);
+
+int eqdma_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_prefetch_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int eqdma_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int eqdma_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+			struct qdma_indirect_intr_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int eqdma_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen);
+
+int eqdma_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen);
+
+int eqdma_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen);
+
+uint32_t eqdma_reg_dump_buf_len(void);
+
+int eqdma_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen);
+
+int eqdma_hw_error_process(void *dev_hndl);
+const char *eqdma_hw_get_error_name(uint32_t err_idx);
+int eqdma_hw_error_enable(void *dev_hndl, uint32_t err_idx);
+
+int eqdma_read_dump_queue_context(void *dev_hndl,
+		uint16_t qid_hw,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		char *buf, uint32_t buflen);
+
+int eqdma_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info);
+
+int eqdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar);
+
+int eqdma_dump_config_reg_list(void *dev_hndl,
+		uint32_t total_regs,
+		struct qdma_reg_data *reg_list,
+		char *buf, uint32_t buflen);
+
+int eqdma_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_group,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list);
+
+int eqdma_set_default_global_csr(void *dev_hndl);
+
+int eqdma_global_csr_conf(void *dev_hndl, uint8_t index, uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type);
+
+int eqdma_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type);
+
+int eqdma_mm_channel_conf(void *dev_hndl, uint8_t channel, uint8_t is_c2h,
+				uint8_t enable);
+
+int eqdma_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+			uint32_t num_regs, char *buf, uint32_t buflen);
+
+uint32_t eqdma_get_config_num_regs(void);
+
+struct xreg_info *eqdma_get_config_regs(void);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __EQDMA_SOFT_ACCESS_H_ */
diff --git a/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
new file mode 100644
index 0000000000..446079a8fc
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
@@ -0,0 +1,1211 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __EQDMA_SOFT_REG_H
+#define __EQDMA_SOFT_REG_H
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform.h"
+
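+/*
+ * Local bit-manipulation helpers. Any prior definitions (e.g. from
+ * platform headers) are undefined first so that the semantics used by
+ * the register field masks below are unambiguous.
+ */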
+#ifdef CHAR_BIT
+#undef CHAR_BIT
+#endif
+#define CHAR_BIT 8
+
+#ifdef BIT
+#undef BIT
+#endif
+#define BIT(n)                  (1u << (n))
+
+#ifdef BITS_PER_BYTE
+#undef BITS_PER_BYTE
+#endif
+#define BITS_PER_BYTE           CHAR_BIT
+
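+/*
+ * Note: BITS_PER_LONG is deliberately derived from uint32_t (i.e. 32),
+ * not from the host 'long', so that GENMASK() always produces 32-bit
+ * register masks regardless of the host data model.
+ */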
+#ifdef BITS_PER_LONG
+#undef BITS_PER_LONG
+#endif
+#define BITS_PER_LONG           (sizeof(uint32_t) * BITS_PER_BYTE)
+
+#ifdef BITS_PER_LONG_LONG
+#undef BITS_PER_LONG_LONG
+#endif
+#define BITS_PER_LONG_LONG      (sizeof(uint64_t) * BITS_PER_BYTE)
+
+#ifdef GENMASK
+#undef GENMASK
+#endif
+#define GENMASK(h, l) \
+	((0xFFFFFFFF << (l)) & (0xFFFFFFFF >> (BITS_PER_LONG - 1 - (h))))
+
+#ifdef GENMASK_ULL
+#undef GENMASK_ULL
+#endif
+#define GENMASK_ULL(h, l) \
+	((0xFFFFFFFFFFFFFFFFULL << (l)) & \
+			(0xFFFFFFFFFFFFFFFFULL >> (BITS_PER_LONG_LONG - 1 - (h))))
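+/*
+ * Examples: GENMASK(15, 0) == 0x0000FFFF, GENMASK(31, 20) == 0xFFF00000,
+ * GENMASK_ULL(63, 38) == 0xFFFFFFC000000000ULL.
+ */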
+
+#define DEBGFS_LINE_SZ			(81)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+uint32_t eqdma_config_num_regs_get(void);
+struct xreg_info *eqdma_config_regs_get(void);
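+
+/*
+ * Register map convention: each EQDMA_*_ADDR macro is a register byte
+ * offset within the QDMA register space, and the *_MASK macros that
+ * follow it name that register's bit fields, MSB first. Illustrative
+ * field read using the qdma_reg_read()/FIELD_GET() helpers from
+ * qdma_access_common.h:
+ *
+ *	uint32_t cap = qdma_reg_read(dev_hndl, EQDMA_GLBL2_CHANNEL_CAP_ADDR);
+ *	uint32_t max_qs = FIELD_GET(GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK, cap);
+ */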
+#define EQDMA_CFG_BLK_IDENTIFIER_ADDR                      0x00
+#define CFG_BLK_IDENTIFIER_MASK                           GENMASK(31, 20)
+#define CFG_BLK_IDENTIFIER_1_MASK                         GENMASK(19, 16)
+#define CFG_BLK_IDENTIFIER_RSVD_1_MASK                     GENMASK(15, 8)
+#define CFG_BLK_IDENTIFIER_VERSION_MASK                    GENMASK(7, 0)
+#define EQDMA_CFG_BLK_PCIE_MAX_PLD_SIZE_ADDR               0x08
+#define CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_1_MASK              GENMASK(31, 7)
+#define CFG_BLK_PCIE_MAX_PLD_SIZE_PROG_MASK                GENMASK(6, 4)
+#define CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_2_MASK              BIT(3)
+#define CFG_BLK_PCIE_MAX_PLD_SIZE_ISSUED_MASK              GENMASK(2, 0)
+#define EQDMA_CFG_BLK_PCIE_MAX_READ_REQ_SIZE_ADDR          0x0C
+#define CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_1_MASK         GENMASK(31, 7)
+#define CFG_BLK_PCIE_MAX_READ_REQ_SIZE_PROG_MASK           GENMASK(6, 4)
+#define CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_2_MASK         BIT(3)
+#define CFG_BLK_PCIE_MAX_READ_REQ_SIZE_ISSUED_MASK         GENMASK(2, 0)
+#define EQDMA_CFG_BLK_SYSTEM_ID_ADDR                       0x10
+#define CFG_BLK_SYSTEM_ID_RSVD_1_MASK                      GENMASK(31, 17)
+#define CFG_BLK_SYSTEM_ID_INST_TYPE_MASK                   BIT(16)
+#define CFG_BLK_SYSTEM_ID_MASK                            GENMASK(15, 0)
+#define EQDMA_CFG_BLK_MSIX_ENABLE_ADDR                     0x014
+#define CFG_BLK_MSIX_ENABLE_MASK                          GENMASK(31, 0)
+#define EQDMA_CFG_PCIE_DATA_WIDTH_ADDR                     0x18
+#define CFG_PCIE_DATA_WIDTH_RSVD_1_MASK                    GENMASK(31, 3)
+#define CFG_PCIE_DATA_WIDTH_DATAPATH_MASK                  GENMASK(2, 0)
+#define EQDMA_CFG_PCIE_CTL_ADDR                            0x1C
+#define CFG_PCIE_CTL_RSVD_1_MASK                           GENMASK(31, 18)
+#define CFG_PCIE_CTL_MGMT_AXIL_CTRL_MASK                   GENMASK(17, 16)
+#define CFG_PCIE_CTL_RSVD_2_MASK                           GENMASK(15, 2)
+#define CFG_PCIE_CTL_RRQ_DISABLE_MASK                      BIT(1)
+#define CFG_PCIE_CTL_RELAXED_ORDERING_MASK                 BIT(0)
+#define EQDMA_CFG_BLK_MSI_ENABLE_ADDR                      0x20
+#define CFG_BLK_MSI_ENABLE_MASK                           GENMASK(31, 0)
+#define EQDMA_CFG_AXI_USER_MAX_PLD_SIZE_ADDR               0x40
+#define CFG_AXI_USER_MAX_PLD_SIZE_RSVD_1_MASK              GENMASK(31, 7)
+#define CFG_AXI_USER_MAX_PLD_SIZE_ISSUED_MASK              GENMASK(6, 4)
+#define CFG_AXI_USER_MAX_PLD_SIZE_RSVD_2_MASK              BIT(3)
+#define CFG_AXI_USER_MAX_PLD_SIZE_PROG_MASK                GENMASK(2, 0)
+#define EQDMA_CFG_AXI_USER_MAX_READ_REQ_SIZE_ADDR          0x44
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_1_MASK         GENMASK(31, 7)
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED_MASK       GENMASK(6, 4)
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_2_MASK         BIT(3)
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG_MASK         GENMASK(2, 0)
+#define EQDMA_CFG_BLK_MISC_CTL_ADDR                        0x4C
+#define CFG_BLK_MISC_CTL_RSVD_1_MASK                       GENMASK(31, 24)
+#define CFG_BLK_MISC_CTL_10B_TAG_EN_MASK                   BIT(23)
+#define CFG_BLK_MISC_CTL_RSVD_2_MASK                       BIT(22)
+#define CFG_BLK_MISC_CTL_AXI_WBK_MASK                      BIT(21)
+#define CFG_BLK_MISC_CTL_AXI_DSC_MASK                      BIT(20)
+#define CFG_BLK_MISC_CTL_NUM_TAG_MASK                      GENMASK(19, 8)
+#define CFG_BLK_MISC_CTL_RSVD_3_MASK                       GENMASK(7, 5)
+#define CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER_MASK       GENMASK(4, 0)
+#define EQDMA_CFG_PL_CRED_CTL_ADDR                         0x68
+#define CFG_PL_CRED_CTL_RSVD_1_MASK                        GENMASK(31, 5)
+#define CFG_PL_CRED_CTL_SLAVE_CRD_RLS_MASK                 BIT(4)
+#define CFG_PL_CRED_CTL_RSVD_2_MASK                        GENMASK(3, 1)
+#define CFG_PL_CRED_CTL_MASTER_CRD_RST_MASK                BIT(0)
+#define EQDMA_CFG_BLK_SCRATCH_ADDR                         0x80
+#define CFG_BLK_SCRATCH_MASK                              GENMASK(31, 0)
+#define EQDMA_CFG_GIC_ADDR                                 0xA0
+#define CFG_GIC_RSVD_1_MASK                                GENMASK(31, 1)
+#define CFG_GIC_GIC_IRQ_MASK                               BIT(0)
+#define EQDMA_RAM_SBE_MSK_1_A_ADDR                         0xE0
+#define RAM_SBE_MSK_1_A_MASK                          GENMASK(31, 0)
+#define EQDMA_RAM_SBE_STS_1_A_ADDR                         0xE4
+#define RAM_SBE_STS_1_A_RSVD_MASK                          GENMASK(31, 5)
+#define RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK           BIT(4)
+#define RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK           BIT(3)
+#define RAM_SBE_STS_1_A_TAG_EVEN_RAM_MASK                  BIT(2)
+#define RAM_SBE_STS_1_A_TAG_ODD_RAM_MASK                   BIT(1)
+#define RAM_SBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK               BIT(0)
+#define EQDMA_RAM_DBE_MSK_1_A_ADDR                         0xE8
+#define RAM_DBE_MSK_1_A_MASK                          GENMASK(31, 0)
+#define EQDMA_RAM_DBE_STS_1_A_ADDR                         0xEC
+#define RAM_DBE_STS_1_A_RSVD_MASK                          GENMASK(31, 5)
+#define RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK           BIT(4)
+#define RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK           BIT(3)
+#define RAM_DBE_STS_1_A_TAG_EVEN_RAM_MASK                  BIT(2)
+#define RAM_DBE_STS_1_A_TAG_ODD_RAM_MASK                   BIT(1)
+#define RAM_DBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK               BIT(0)
+#define EQDMA_RAM_SBE_MSK_A_ADDR                           0xF0
+#define RAM_SBE_MSK_A_MASK                            GENMASK(31, 0)
+#define EQDMA_RAM_SBE_STS_A_ADDR                           0xF4
+#define RAM_SBE_STS_A_RC_RRQ_ODD_RAM_MASK                  BIT(31)
+#define RAM_SBE_STS_A_PEND_FIFO_RAM_MASK                   BIT(30)
+#define RAM_SBE_STS_A_PFCH_LL_RAM_MASK                     BIT(29)
+#define RAM_SBE_STS_A_WRB_CTXT_RAM_MASK                    BIT(28)
+#define RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK                   BIT(27)
+#define RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK               BIT(26)
+#define RAM_SBE_STS_A_INT_CTXT_RAM_MASK                    BIT(25)
+#define RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK               BIT(24)
+#define RAM_SBE_STS_A_QID_FIFO_RAM_MASK                    BIT(23)
+#define RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK                  GENMASK(22, 19)
+#define RAM_SBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK              BIT(18)
+#define RAM_SBE_STS_A_DSC_CPLD_MASK                        BIT(17)
+#define RAM_SBE_STS_A_DSC_CPLI_MASK                        BIT(16)
+#define RAM_SBE_STS_A_DSC_SW_CTXT_MASK                     BIT(15)
+#define RAM_SBE_STS_A_DSC_CRD_RCV_MASK                     BIT(14)
+#define RAM_SBE_STS_A_DSC_HW_CTXT_MASK                     BIT(13)
+#define RAM_SBE_STS_A_FUNC_MAP_MASK                        BIT(12)
+#define RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK                  BIT(11)
+#define RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK                  BIT(10)
+#define RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK                  BIT(9)
+#define RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK                  BIT(8)
+#define RAM_SBE_STS_A_MI_C2H3_DAT_MASK                     BIT(7)
+#define RAM_SBE_STS_A_MI_C2H2_DAT_MASK                     BIT(6)
+#define RAM_SBE_STS_A_MI_C2H1_DAT_MASK                     BIT(5)
+#define RAM_SBE_STS_A_MI_C2H0_DAT_MASK                     BIT(4)
+#define RAM_SBE_STS_A_MI_H2C3_DAT_MASK                     BIT(3)
+#define RAM_SBE_STS_A_MI_H2C2_DAT_MASK                     BIT(2)
+#define RAM_SBE_STS_A_MI_H2C1_DAT_MASK                     BIT(1)
+#define RAM_SBE_STS_A_MI_H2C0_DAT_MASK                     BIT(0)
+#define EQDMA_RAM_DBE_MSK_A_ADDR                           0xF8
+#define RAM_DBE_MSK_A_MASK                            GENMASK(31, 0)
+#define EQDMA_RAM_DBE_STS_A_ADDR                           0xFC
+#define RAM_DBE_STS_A_RC_RRQ_ODD_RAM_MASK                  BIT(31)
+#define RAM_DBE_STS_A_PEND_FIFO_RAM_MASK                   BIT(30)
+#define RAM_DBE_STS_A_PFCH_LL_RAM_MASK                     BIT(29)
+#define RAM_DBE_STS_A_WRB_CTXT_RAM_MASK                    BIT(28)
+#define RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK                   BIT(27)
+#define RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK               BIT(26)
+#define RAM_DBE_STS_A_INT_CTXT_RAM_MASK                    BIT(25)
+#define RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK               BIT(24)
+#define RAM_DBE_STS_A_QID_FIFO_RAM_MASK                    BIT(23)
+#define RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK                  GENMASK(22, 19)
+#define RAM_DBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK              BIT(18)
+#define RAM_DBE_STS_A_DSC_CPLD_MASK                        BIT(17)
+#define RAM_DBE_STS_A_DSC_CPLI_MASK                        BIT(16)
+#define RAM_DBE_STS_A_DSC_SW_CTXT_MASK                     BIT(15)
+#define RAM_DBE_STS_A_DSC_CRD_RCV_MASK                     BIT(14)
+#define RAM_DBE_STS_A_DSC_HW_CTXT_MASK                     BIT(13)
+#define RAM_DBE_STS_A_FUNC_MAP_MASK                        BIT(12)
+#define RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK                  BIT(11)
+#define RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK                  BIT(10)
+#define RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK                  BIT(9)
+#define RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK                  BIT(8)
+#define RAM_DBE_STS_A_MI_C2H3_DAT_MASK                     BIT(7)
+#define RAM_DBE_STS_A_MI_C2H2_DAT_MASK                     BIT(6)
+#define RAM_DBE_STS_A_MI_C2H1_DAT_MASK                     BIT(5)
+#define RAM_DBE_STS_A_MI_C2H0_DAT_MASK                     BIT(4)
+#define RAM_DBE_STS_A_MI_H2C3_DAT_MASK                     BIT(3)
+#define RAM_DBE_STS_A_MI_H2C2_DAT_MASK                     BIT(2)
+#define RAM_DBE_STS_A_MI_H2C1_DAT_MASK                     BIT(1)
+#define RAM_DBE_STS_A_MI_H2C0_DAT_MASK                     BIT(0)
+#define EQDMA_GLBL2_IDENTIFIER_ADDR                        0x100
+#define GLBL2_IDENTIFIER_MASK                             GENMASK(31, 8)
+#define GLBL2_IDENTIFIER_VERSION_MASK                      GENMASK(7, 0)
+#define EQDMA_GLBL2_CHANNEL_INST_ADDR                      0x114
+#define GLBL2_CHANNEL_INST_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_INST_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_INST_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_INST_RSVD_2_MASK                     GENMASK(15, 12)
+#define GLBL2_CHANNEL_INST_C2H_ENG_MASK                    GENMASK(11, 8)
+#define GLBL2_CHANNEL_INST_RSVD_3_MASK                     GENMASK(7, 4)
+#define GLBL2_CHANNEL_INST_H2C_ENG_MASK                    GENMASK(3, 0)
+#define EQDMA_GLBL2_CHANNEL_MDMA_ADDR                      0x118
+#define GLBL2_CHANNEL_MDMA_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_MDMA_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_MDMA_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_MDMA_RSVD_2_MASK                     GENMASK(15, 12)
+#define GLBL2_CHANNEL_MDMA_C2H_ENG_MASK                    GENMASK(11, 8)
+#define GLBL2_CHANNEL_MDMA_RSVD_3_MASK                     GENMASK(7, 4)
+#define GLBL2_CHANNEL_MDMA_H2C_ENG_MASK                    GENMASK(3, 0)
+#define EQDMA_GLBL2_CHANNEL_STRM_ADDR                      0x11C
+#define GLBL2_CHANNEL_STRM_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_STRM_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_STRM_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_STRM_RSVD_2_MASK                     GENMASK(15, 12)
+#define GLBL2_CHANNEL_STRM_C2H_ENG_MASK                    GENMASK(11, 8)
+#define GLBL2_CHANNEL_STRM_RSVD_3_MASK                     GENMASK(7, 4)
+#define GLBL2_CHANNEL_STRM_H2C_ENG_MASK                    GENMASK(3, 0)
+#define EQDMA_GLBL2_CHANNEL_CAP_ADDR                       0x120
+#define GLBL2_CHANNEL_CAP_RSVD_1_MASK                      GENMASK(31, 12)
+#define GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK                  GENMASK(11, 0)
+#define EQDMA_GLBL2_CHANNEL_PASID_CAP_ADDR                 0x128
+#define GLBL2_CHANNEL_PASID_CAP_RSVD_1_MASK                GENMASK(31, 2)
+#define GLBL2_CHANNEL_PASID_CAP_BRIDGEEN_MASK              BIT(1)
+#define GLBL2_CHANNEL_PASID_CAP_DMAEN_MASK                 BIT(0)
+#define EQDMA_GLBL2_SYSTEM_ID_ADDR                         0x130
+#define GLBL2_SYSTEM_ID_RSVD_1_MASK                        GENMASK(31, 16)
+#define GLBL2_SYSTEM_ID_MASK                              GENMASK(15, 0)
+#define EQDMA_GLBL2_MISC_CAP_ADDR                          0x134
+#define GLBL2_MISC_CAP_MASK                               GENMASK(31, 0)
+#define EQDMA_GLBL2_DBG_PCIE_RQ0_ADDR                      0x1B8
+#define GLBL2_PCIE_RQ0_NPH_AVL_MASK                    GENMASK(31, 20)
+#define GLBL2_PCIE_RQ0_RCB_AVL_MASK                    GENMASK(19, 9)
+#define GLBL2_PCIE_RQ0_SLV_RD_CREDS_MASK               GENMASK(8, 2)
+#define GLBL2_PCIE_RQ0_TAG_EP_MASK                     GENMASK(1, 0)
+#define EQDMA_GLBL2_DBG_PCIE_RQ1_ADDR                      0x1BC
+#define GLBL2_PCIE_RQ1_RSVD_1_MASK                     GENMASK(31, 21)
+#define GLBL2_PCIE_RQ1_TAG_FL_MASK                     GENMASK(20, 19)
+#define GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL_MASK        BIT(18)
+#define GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP_MASK        BIT(17)
+#define GLBL2_PCIE_RQ1_RQ_FIFO_EP_MASK                 BIT(16)
+#define GLBL2_PCIE_RQ1_RQ_FIFO_FL_MASK                 BIT(15)
+#define GLBL2_PCIE_RQ1_TLPSM_MASK                      GENMASK(14, 12)
+#define GLBL2_PCIE_RQ1_TLPSM512_MASK                   GENMASK(11, 9)
+#define GLBL2_PCIE_RQ1_RREQ_RCB_OK_MASK                BIT(8)
+#define GLBL2_PCIE_RQ1_RREQ0_SLV_MASK                  BIT(7)
+#define GLBL2_PCIE_RQ1_RREQ0_VLD_MASK                  BIT(6)
+#define GLBL2_PCIE_RQ1_RREQ0_RDY_MASK                  BIT(5)
+#define GLBL2_PCIE_RQ1_RREQ1_SLV_MASK                  BIT(4)
+#define GLBL2_PCIE_RQ1_RREQ1_VLD_MASK                  BIT(3)
+#define GLBL2_PCIE_RQ1_RREQ1_RDY_MASK                  BIT(2)
+#define GLBL2_PCIE_RQ1_WTLP_REQ_MASK                   BIT(1)
+#define GLBL2_PCIE_RQ1_WTLP_STRADDLE_MASK              BIT(0)
+#define EQDMA_GLBL2_DBG_AXIMM_WR0_ADDR                     0x1C0
+#define GLBL2_AXIMM_WR0_RSVD_1_MASK                    GENMASK(31, 27)
+#define GLBL2_AXIMM_WR0_WR_REQ_MASK                    BIT(26)
+#define GLBL2_AXIMM_WR0_WR_CHN_MASK                    GENMASK(25, 23)
+#define GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP_MASK         BIT(22)
+#define GLBL2_AXIMM_WR0_WPL_FIFO_EP_MASK               BIT(21)
+#define GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN_MASK            GENMASK(20, 18)
+#define GLBL2_AXIMM_WR0_WRREQ_CNT_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_WR0_BID_MASK                       GENMASK(11, 9)
+#define GLBL2_AXIMM_WR0_BVALID_MASK                    BIT(8)
+#define GLBL2_AXIMM_WR0_BREADY_MASK                    BIT(7)
+#define GLBL2_AXIMM_WR0_WVALID_MASK                    BIT(6)
+#define GLBL2_AXIMM_WR0_WREADY_MASK                    BIT(5)
+#define GLBL2_AXIMM_WR0_AWID_MASK                      GENMASK(4, 2)
+#define GLBL2_AXIMM_WR0_AWVALID_MASK                   BIT(1)
+#define GLBL2_AXIMM_WR0_AWREADY_MASK                   BIT(0)
+#define EQDMA_GLBL2_DBG_AXIMM_WR1_ADDR                     0x1C4
+#define GLBL2_AXIMM_WR1_RSVD_1_MASK                    GENMASK(31, 30)
+#define GLBL2_AXIMM_WR1_BRSP_CNT4_MASK                 GENMASK(29, 24)
+#define GLBL2_AXIMM_WR1_BRSP_CNT3_MASK                 GENMASK(23, 18)
+#define GLBL2_AXIMM_WR1_BRSP_CNT2_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_WR1_BRSP_CNT1_MASK                 GENMASK(11, 6)
+#define GLBL2_AXIMM_WR1_BRSP_CNT0_MASK                 GENMASK(5, 0)
+#define EQDMA_GLBL2_DBG_AXIMM_RD0_ADDR                     0x1C8
+#define GLBL2_AXIMM_RD0_RSVD_1_MASK                    GENMASK(31, 23)
+#define GLBL2_AXIMM_RD0_PND_CNT_MASK                   GENMASK(22, 17)
+#define GLBL2_AXIMM_RD0_RD_REQ_MASK                    BIT(16)
+#define GLBL2_AXIMM_RD0_RD_CHNL_MASK                   GENMASK(15, 13)
+#define GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL_MASK           GENMASK(12, 10)
+#define GLBL2_AXIMM_RD0_RID_MASK                       GENMASK(9, 7)
+#define GLBL2_AXIMM_RD0_RVALID_MASK                    BIT(6)
+#define GLBL2_AXIMM_RD0_RREADY_MASK                    BIT(5)
+#define GLBL2_AXIMM_RD0_ARID_MASK                      GENMASK(4, 2)
+#define GLBL2_AXIMM_RD0_ARVALID_MASK                   BIT(1)
+#define GLBL2_AXIMM_RD0_ARREADY_MASK                   BIT(0)
+#define EQDMA_GLBL2_DBG_AXIMM_RD1_ADDR                     0x1CC
+#define GLBL2_AXIMM_RD1_RSVD_1_MASK                    GENMASK(31, 30)
+#define GLBL2_AXIMM_RD1_RRSP_CNT4_MASK                 GENMASK(29, 24)
+#define GLBL2_AXIMM_RD1_RRSP_CNT3_MASK                 GENMASK(23, 18)
+#define GLBL2_AXIMM_RD1_RRSP_CNT2_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_RD1_RRSP_CNT1_MASK                 GENMASK(11, 6)
+#define GLBL2_AXIMM_RD1_RRSP_CNT0_MASK                 GENMASK(5, 0)
+#define EQDMA_GLBL2_DBG_FAB0_ADDR                          0x1D0
+#define GLBL2_FAB0_H2C_INB_CONV_IN_VLD_MASK            BIT(31)
+#define GLBL2_FAB0_H2C_INB_CONV_IN_RDY_MASK            BIT(30)
+#define GLBL2_FAB0_H2C_SEG_IN_VLD_MASK                 BIT(29)
+#define GLBL2_FAB0_H2C_SEG_IN_RDY_MASK                 BIT(28)
+#define GLBL2_FAB0_H2C_SEG_OUT_VLD_MASK                GENMASK(27, 24)
+#define GLBL2_FAB0_H2C_SEG_OUT_RDY_MASK                BIT(23)
+#define GLBL2_FAB0_H2C_MST_CRDT_STAT_MASK              GENMASK(22, 16)
+#define GLBL2_FAB0_C2H_SLV_AFIFO_FULL_MASK             BIT(15)
+#define GLBL2_FAB0_C2H_SLV_AFIFO_EMPTY_MASK            BIT(14)
+#define GLBL2_FAB0_C2H_DESEG_SEG_VLD_MASK              GENMASK(13, 10)
+#define GLBL2_FAB0_C2H_DESEG_SEG_RDY_MASK              BIT(9)
+#define GLBL2_FAB0_C2H_DESEG_OUT_VLD_MASK              BIT(8)
+#define GLBL2_FAB0_C2H_DESEG_OUT_RDY_MASK              BIT(7)
+#define GLBL2_FAB0_C2H_INB_DECONV_OUT_VLD_MASK         BIT(6)
+#define GLBL2_FAB0_C2H_INB_DECONV_OUT_RDY_MASK         BIT(5)
+#define GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_FULL_MASK        BIT(4)
+#define GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_EMPTY_MASK       BIT(3)
+#define GLBL2_FAB0_IRQ_IN_AFIFO_FULL_MASK              BIT(2)
+#define GLBL2_FAB0_IRQ_IN_AFIFO_EMPTY_MASK             BIT(1)
+#define GLBL2_FAB0_IMM_CRD_AFIFO_EMPTY_MASK            BIT(0)
+#define EQDMA_GLBL2_DBG_FAB1_ADDR                          0x1D4
+#define GLBL2_FAB1_BYP_OUT_CRDT_STAT_MASK              GENMASK(31, 25)
+#define GLBL2_FAB1_TM_DSC_STS_CRDT_STAT_MASK           GENMASK(24, 18)
+#define GLBL2_FAB1_C2H_CMN_AFIFO_FULL_MASK             BIT(17)
+#define GLBL2_FAB1_C2H_CMN_AFIFO_EMPTY_MASK            BIT(16)
+#define GLBL2_FAB1_RSVD_1_MASK                         GENMASK(15, 13)
+#define GLBL2_FAB1_C2H_BYP_IN_AFIFO_FULL_MASK          BIT(12)
+#define GLBL2_FAB1_RSVD_2_MASK                         GENMASK(11, 9)
+#define GLBL2_FAB1_C2H_BYP_IN_AFIFO_EMPTY_MASK         BIT(8)
+#define GLBL2_FAB1_RSVD_3_MASK                         GENMASK(7, 5)
+#define GLBL2_FAB1_H2C_BYP_IN_AFIFO_FULL_MASK          BIT(4)
+#define GLBL2_FAB1_RSVD_4_MASK                         GENMASK(3, 1)
+#define GLBL2_FAB1_H2C_BYP_IN_AFIFO_EMPTY_MASK         BIT(0)
+#define EQDMA_GLBL2_DBG_MATCH_SEL_ADDR                     0x1F4
+#define GLBL2_MATCH_SEL_RSV_MASK                       GENMASK(31, 18)
+#define GLBL2_MATCH_SEL_CSR_SEL_MASK                   GENMASK(17, 13)
+#define GLBL2_MATCH_SEL_CSR_EN_MASK                    BIT(12)
+#define GLBL2_MATCH_SEL_ROTATE1_MASK                   GENMASK(11, 10)
+#define GLBL2_MATCH_SEL_ROTATE0_MASK                   GENMASK(9, 8)
+#define GLBL2_MATCH_SEL_SEL_MASK                       GENMASK(7, 0)
+#define EQDMA_GLBL2_DBG_MATCH_MSK_ADDR                     0x1F8
+#define GLBL2_MATCH_MSK_MASK                      GENMASK(31, 0)
+#define EQDMA_GLBL2_DBG_MATCH_PAT_ADDR                     0x1FC
+#define GLBL2_MATCH_PAT_PATTERN_MASK                   GENMASK(31, 0)
+#define EQDMA_GLBL_RNG_SZ_1_ADDR                           0x204
+#define GLBL_RNG_SZ_1_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_1_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_2_ADDR                           0x208
+#define GLBL_RNG_SZ_2_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_2_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_3_ADDR                           0x20C
+#define GLBL_RNG_SZ_3_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_3_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_4_ADDR                           0x210
+#define GLBL_RNG_SZ_4_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_4_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_5_ADDR                           0x214
+#define GLBL_RNG_SZ_5_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_5_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_6_ADDR                           0x218
+#define GLBL_RNG_SZ_6_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_6_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_7_ADDR                           0x21C
+#define GLBL_RNG_SZ_7_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_7_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_8_ADDR                           0x220
+#define GLBL_RNG_SZ_8_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_8_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_9_ADDR                           0x224
+#define GLBL_RNG_SZ_9_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_9_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_A_ADDR                           0x228
+#define GLBL_RNG_SZ_A_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_A_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_B_ADDR                           0x22C
+#define GLBL_RNG_SZ_B_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_B_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_C_ADDR                           0x230
+#define GLBL_RNG_SZ_C_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_C_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_D_ADDR                           0x234
+#define GLBL_RNG_SZ_D_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_D_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_E_ADDR                           0x238
+#define GLBL_RNG_SZ_E_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_E_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_F_ADDR                           0x23C
+#define GLBL_RNG_SZ_F_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_F_RING_SIZE_MASK                       GENMASK(15, 0)
+#define EQDMA_GLBL_RNG_SZ_10_ADDR                          0x240
+#define GLBL_RNG_SZ_10_RSVD_1_MASK                         GENMASK(31, 16)
+#define GLBL_RNG_SZ_10_RING_SIZE_MASK                      GENMASK(15, 0)
+#define EQDMA_GLBL_ERR_STAT_ADDR                           0x248
+#define GLBL_ERR_STAT_RSVD_1_MASK                          GENMASK(31, 18)
+#define GLBL_ERR_STAT_ERR_FAB_MASK                         BIT(17)
+#define GLBL_ERR_STAT_ERR_H2C_ST_MASK                      BIT(16)
+#define GLBL_ERR_STAT_ERR_BDG_MASK                         BIT(15)
+#define GLBL_ERR_STAT_IND_CTXT_CMD_ERR_MASK                GENMASK(14, 9)
+#define GLBL_ERR_STAT_ERR_C2H_ST_MASK                      BIT(8)
+#define GLBL_ERR_STAT_ERR_C2H_MM_1_MASK                    BIT(7)
+#define GLBL_ERR_STAT_ERR_C2H_MM_0_MASK                    BIT(6)
+#define GLBL_ERR_STAT_ERR_H2C_MM_1_MASK                    BIT(5)
+#define GLBL_ERR_STAT_ERR_H2C_MM_0_MASK                    BIT(4)
+#define GLBL_ERR_STAT_ERR_TRQ_MASK                         BIT(3)
+#define GLBL_ERR_STAT_ERR_DSC_MASK                         BIT(2)
+#define GLBL_ERR_STAT_ERR_RAM_DBE_MASK                     BIT(1)
+#define GLBL_ERR_STAT_ERR_RAM_SBE_MASK                     BIT(0)
+#define EQDMA_GLBL_ERR_MASK_ADDR                           0x24C
+#define GLBL_ERR_MASK                            GENMASK(31, 0)
+#define EQDMA_GLBL_DSC_CFG_ADDR                            0x250
+#define GLBL_DSC_CFG_RSVD_1_MASK                           GENMASK(31, 10)
+#define GLBL_DSC_CFG_UNC_OVR_COR_MASK                      BIT(9)
+#define GLBL_DSC_CFG_CTXT_FER_DIS_MASK                     BIT(8)
+#define GLBL_DSC_CFG_RSVD_2_MASK                           GENMASK(7, 6)
+#define GLBL_DSC_CFG_MAXFETCH_MASK                         GENMASK(5, 3)
+#define GLBL_DSC_CFG_WB_ACC_INT_MASK                       GENMASK(2, 0)
+#define EQDMA_GLBL_DSC_ERR_STS_ADDR                        0x254
+#define GLBL_DSC_ERR_STS_RSVD_1_MASK                       GENMASK(31, 26)
+#define GLBL_DSC_ERR_STS_PORT_ID_MASK                      BIT(25)
+#define GLBL_DSC_ERR_STS_SBE_MASK                          BIT(24)
+#define GLBL_DSC_ERR_STS_DBE_MASK                          BIT(23)
+#define GLBL_DSC_ERR_STS_RQ_CANCEL_MASK                    BIT(22)
+#define GLBL_DSC_ERR_STS_DSC_MASK                          BIT(21)
+#define GLBL_DSC_ERR_STS_DMA_MASK                          BIT(20)
+#define GLBL_DSC_ERR_STS_FLR_CANCEL_MASK                   BIT(19)
+#define GLBL_DSC_ERR_STS_RSVD_2_MASK                       GENMASK(18, 17)
+#define GLBL_DSC_ERR_STS_DAT_POISON_MASK                   BIT(16)
+#define GLBL_DSC_ERR_STS_TIMEOUT_MASK                      BIT(9)
+#define GLBL_DSC_ERR_STS_FLR_MASK                          BIT(8)
+#define GLBL_DSC_ERR_STS_TAG_MASK                          BIT(6)
+#define GLBL_DSC_ERR_STS_ADDR_MASK                         BIT(5)
+#define GLBL_DSC_ERR_STS_PARAM_MASK                        BIT(4)
+#define GLBL_DSC_ERR_STS_BCNT_MASK                         BIT(3)
+#define GLBL_DSC_ERR_STS_UR_CA_MASK                        BIT(2)
+#define GLBL_DSC_ERR_STS_POISON_MASK                       BIT(1)
+#define EQDMA_GLBL_DSC_ERR_MSK_ADDR                        0x258
+#define GLBL_DSC_ERR_MSK_MASK                         GENMASK(31, 0)
+#define EQDMA_GLBL_DSC_ERR_LOG0_ADDR                       0x25C
+#define GLBL_DSC_ERR_LOG0_VALID_MASK                       BIT(31)
+#define GLBL_DSC_ERR_LOG0_SEL_MASK                         BIT(30)
+#define GLBL_DSC_ERR_LOG0_RSVD_1_MASK                      GENMASK(29, 13)
+#define GLBL_DSC_ERR_LOG0_QID_MASK                         GENMASK(12, 0)
+#define EQDMA_GLBL_DSC_ERR_LOG1_ADDR                       0x260
+#define GLBL_DSC_ERR_LOG1_RSVD_1_MASK                      GENMASK(31, 28)
+#define GLBL_DSC_ERR_LOG1_CIDX_MASK                        GENMASK(27, 12)
+#define GLBL_DSC_ERR_LOG1_RSVD_2_MASK                      GENMASK(11, 9)
+#define GLBL_DSC_ERR_LOG1_SUB_TYPE_MASK                    GENMASK(8, 5)
+#define GLBL_DSC_ERR_LOG1_ERR_TYPE_MASK                    GENMASK(4, 0)
+#define EQDMA_GLBL_TRQ_ERR_STS_ADDR                        0x264
+#define GLBL_TRQ_ERR_STS_RSVD_1_MASK                       GENMASK(31, 8)
+#define GLBL_TRQ_ERR_STS_TCP_QSPC_TIMEOUT_MASK             BIT(7)
+#define GLBL_TRQ_ERR_STS_RSVD_2_MASK                       BIT(6)
+#define GLBL_TRQ_ERR_STS_QID_RANGE_MASK                    BIT(5)
+#define GLBL_TRQ_ERR_STS_QSPC_UNMAPPED_MASK                BIT(4)
+#define GLBL_TRQ_ERR_STS_TCP_CSR_TIMEOUT_MASK              BIT(3)
+#define GLBL_TRQ_ERR_STS_RSVD_3_MASK                       BIT(2)
+#define GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK                BIT(1)
+#define GLBL_TRQ_ERR_STS_CSR_UNMAPPED_MASK                 BIT(0)
+#define EQDMA_GLBL_TRQ_ERR_MSK_ADDR                        0x268
+#define GLBL_TRQ_ERR_MSK_MASK                         GENMASK(31, 0)
+#define EQDMA_GLBL_TRQ_ERR_LOG_ADDR                        0x26C
+#define GLBL_TRQ_ERR_LOG_SRC_MASK                          BIT(31)
+#define GLBL_TRQ_ERR_LOG_TARGET_MASK                       GENMASK(30, 27)
+#define GLBL_TRQ_ERR_LOG_FUNC_MASK                         GENMASK(26, 17)
+#define GLBL_TRQ_ERR_LOG_ADDRESS_MASK                      GENMASK(16, 0)
+#define EQDMA_GLBL_DSC_DBG_DAT0_ADDR                       0x270
+#define GLBL_DSC_DAT0_RSVD_1_MASK                      GENMASK(31, 30)
+#define GLBL_DSC_DAT0_CTXT_ARB_DIR_MASK                BIT(29)
+#define GLBL_DSC_DAT0_CTXT_ARB_QID_MASK                GENMASK(28, 17)
+#define GLBL_DSC_DAT0_CTXT_ARB_REQ_MASK                GENMASK(16, 12)
+#define GLBL_DSC_DAT0_IRQ_FIFO_FL_MASK                 BIT(11)
+#define GLBL_DSC_DAT0_TMSTALL_MASK                     BIT(10)
+#define GLBL_DSC_DAT0_RRQ_STALL_MASK                   GENMASK(9, 8)
+#define GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL_MASK          GENMASK(7, 6)
+#define GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL_MASK          GENMASK(5, 4)
+#define GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL_MASK          GENMASK(3, 2)
+#define GLBL_DSC_DAT0_DSC_OUT_STALL_MASK               GENMASK(1, 0)
+#define EQDMA_GLBL_DSC_DBG_DAT1_ADDR                       0x274
+#define GLBL_DSC_DAT1_RSVD_1_MASK                      GENMASK(31, 28)
+#define GLBL_DSC_DAT1_EVT_SPC_C2H_MASK                 GENMASK(27, 22)
+#define GLBL_DSC_DAT1_EVT_SP_H2C_MASK                  GENMASK(21, 16)
+#define GLBL_DSC_DAT1_DSC_SPC_C2H_MASK                 GENMASK(15, 8)
+#define GLBL_DSC_DAT1_DSC_SPC_H2C_MASK                 GENMASK(7, 0)
+#define EQDMA_GLBL_DSC_DBG_CTL_ADDR                        0x278
+#define GLBL_DSC_CTL_RSVD_1_MASK                       GENMASK(31, 3)
+#define GLBL_DSC_CTL_SELECT_MASK                       GENMASK(2, 0)
+#define EQDMA_GLBL_DSC_ERR_LOG2_ADDR                       0x27c
+#define GLBL_DSC_ERR_LOG2_OLD_PIDX_MASK                    GENMASK(31, 16)
+#define GLBL_DSC_ERR_LOG2_NEW_PIDX_MASK                    GENMASK(15, 0)
+#define EQDMA_GLBL_GLBL_INTERRUPT_CFG_ADDR                 0x2c4
+#define GLBL_GLBL_INTERRUPT_CFG_RSVD_1_MASK                GENMASK(31, 2)
+#define GLBL_GLBL_INTERRUPT_CFG_LGCY_INTR_PENDING_MASK     BIT(1)
+#define GLBL_GLBL_INTERRUPT_CFG_EN_LGCY_INTR_MASK          BIT(0)
+#define EQDMA_GLBL_VCH_HOST_PROFILE_ADDR                   0x2c8
+#define GLBL_VCH_HOST_PROFILE_RSVD_1_MASK                  GENMASK(31, 28)
+#define GLBL_VCH_HOST_PROFILE_2C_MM_MASK                   GENMASK(27, 24)
+#define GLBL_VCH_HOST_PROFILE_2C_ST_MASK                   GENMASK(23, 20)
+#define GLBL_VCH_HOST_PROFILE_VCH_DSC_MASK                 GENMASK(19, 16)
+#define GLBL_VCH_HOST_PROFILE_VCH_INT_MSG_MASK             GENMASK(15, 12)
+#define GLBL_VCH_HOST_PROFILE_VCH_INT_AGGR_MASK            GENMASK(11, 8)
+#define GLBL_VCH_HOST_PROFILE_VCH_CMPT_MASK                GENMASK(7, 4)
+#define GLBL_VCH_HOST_PROFILE_VCH_C2H_PLD_MASK             GENMASK(3, 0)
+#define EQDMA_GLBL_BRIDGE_HOST_PROFILE_ADDR                0x308
+#define GLBL_BRIDGE_HOST_PROFILE_RSVD_1_MASK               GENMASK(31, 4)
+#define GLBL_BRIDGE_HOST_PROFILE_BDGID_MASK                GENMASK(3, 0)
+#define EQDMA_AXIMM_IRQ_DEST_ADDR_ADDR                     0x30c
+#define AXIMM_IRQ_DEST_ADDR_ADDR_MASK                      GENMASK(31, 0)
+#define EQDMA_FAB_ERR_LOG_ADDR                             0x314
+#define FAB_ERR_LOG_RSVD_1_MASK                            GENMASK(31, 7)
+#define FAB_ERR_LOG_SRC_MASK                               GENMASK(6, 0)
+#define EQDMA_GLBL_REQ_ERR_STS_ADDR                        0x318
+#define GLBL_REQ_ERR_STS_RSVD_1_MASK                       GENMASK(31, 11)
+#define GLBL_REQ_ERR_STS_RC_DISCONTINUE_MASK               BIT(10)
+#define GLBL_REQ_ERR_STS_RC_PRTY_MASK                      BIT(9)
+#define GLBL_REQ_ERR_STS_RC_FLR_MASK                       BIT(8)
+#define GLBL_REQ_ERR_STS_RC_TIMEOUT_MASK                   BIT(7)
+#define GLBL_REQ_ERR_STS_RC_INV_BCNT_MASK                  BIT(6)
+#define GLBL_REQ_ERR_STS_RC_INV_TAG_MASK                   BIT(5)
+#define GLBL_REQ_ERR_STS_RC_START_ADDR_MISMCH_MASK         BIT(4)
+#define GLBL_REQ_ERR_STS_RC_RID_TC_ATTR_MISMCH_MASK        BIT(3)
+#define GLBL_REQ_ERR_STS_RC_NO_DATA_MASK                   BIT(2)
+#define GLBL_REQ_ERR_STS_RC_UR_CA_CRS_MASK                 BIT(1)
+#define GLBL_REQ_ERR_STS_RC_POISONED_MASK                  BIT(0)
+#define EQDMA_GLBL_REQ_ERR_MSK_ADDR                        0x31C
+#define GLBL_REQ_ERR_MSK_MASK                         GENMASK(31, 0)
+#define EQDMA_IND_CTXT_DATA_ADDR                           0x804
+#define IND_CTXT_DATA_DATA_MASK                            GENMASK(31, 0)
+#define EQDMA_IND_CTXT_MASK_ADDR                           0x824
+#define IND_CTXT_MASK                            GENMASK(31, 0)
+#define EQDMA_IND_CTXT_CMD_ADDR                            0x844
+#define IND_CTXT_CMD_RSVD_1_MASK                           GENMASK(31, 20)
+#define IND_CTXT_CMD_QID_MASK                              GENMASK(19, 7)
+#define IND_CTXT_CMD_OP_MASK                               GENMASK(6, 5)
+#define IND_CTXT_CMD_SEL_MASK                              GENMASK(4, 1)
+#define IND_CTXT_CMD_BUSY_MASK                             BIT(0)
+#define EQDMA_C2H_TIMER_CNT_ADDR                           0xA00
+#define C2H_TIMER_CNT_RSVD_1_MASK                          GENMASK(31, 16)
+#define C2H_TIMER_CNT_MASK                                GENMASK(15, 0)
+#define EQDMA_C2H_CNT_TH_ADDR                              0xA40
+#define C2H_CNT_TH_RSVD_1_MASK                             GENMASK(31, 16)
+#define C2H_CNT_TH_THESHOLD_CNT_MASK                       GENMASK(15, 0)
+#define EQDMA_C2H_STAT_S_AXIS_C2H_ACCEPTED_ADDR            0xA88
+#define C2H_STAT_S_AXIS_C2H_ACCEPTED_MASK                 GENMASK(31, 0)
+#define EQDMA_C2H_STAT_S_AXIS_WRB_ACCEPTED_ADDR            0xA8C
+#define C2H_STAT_S_AXIS_WRB_ACCEPTED_MASK                 GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_RSP_PKT_ACCEPTED_ADDR          0xA90
+#define C2H_STAT_DESC_RSP_PKT_ACCEPTED_D_MASK              GENMASK(31, 0)
+#define EQDMA_C2H_STAT_AXIS_PKG_CMP_ADDR                   0xA94
+#define C2H_STAT_AXIS_PKG_CMP_MASK                        GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_RSP_ACCEPTED_ADDR              0xA98
+#define C2H_STAT_DESC_RSP_ACCEPTED_D_MASK                  GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_RSP_CMP_ADDR                   0xA9C
+#define C2H_STAT_DESC_RSP_CMP_D_MASK                       GENMASK(31, 0)
+#define EQDMA_C2H_STAT_WRQ_OUT_ADDR                        0xAA0
+#define C2H_STAT_WRQ_OUT_MASK                             GENMASK(31, 0)
+#define EQDMA_C2H_STAT_WPL_REN_ACCEPTED_ADDR               0xAA4
+#define C2H_STAT_WPL_REN_ACCEPTED_MASK                    GENMASK(31, 0)
+#define EQDMA_C2H_STAT_TOTAL_WRQ_LEN_ADDR                  0xAA8
+#define C2H_STAT_TOTAL_WRQ_LEN_MASK                       GENMASK(31, 0)
+#define EQDMA_C2H_STAT_TOTAL_WPL_LEN_ADDR                  0xAAC
+#define C2H_STAT_TOTAL_WPL_LEN_MASK                       GENMASK(31, 0)
+#define EQDMA_C2H_BUF_SZ_ADDR                              0xAB0
+#define C2H_BUF_SZ_IZE_MASK                                GENMASK(31, 0)
+#define EQDMA_C2H_ERR_STAT_ADDR                            0xAF0
+#define C2H_ERR_STAT_RSVD_1_MASK                           GENMASK(31, 21)
+#define C2H_ERR_STAT_WRB_PORT_ID_ERR_MASK                  BIT(20)
+#define C2H_ERR_STAT_HDR_PAR_ERR_MASK                      BIT(19)
+#define C2H_ERR_STAT_HDR_ECC_COR_ERR_MASK                  BIT(18)
+#define C2H_ERR_STAT_HDR_ECC_UNC_ERR_MASK                  BIT(17)
+#define C2H_ERR_STAT_AVL_RING_DSC_ERR_MASK                 BIT(16)
+#define C2H_ERR_STAT_WRB_PRTY_ERR_MASK                     BIT(15)
+#define C2H_ERR_STAT_WRB_CIDX_ERR_MASK                     BIT(14)
+#define C2H_ERR_STAT_WRB_QFULL_ERR_MASK                    BIT(13)
+#define C2H_ERR_STAT_WRB_INV_Q_ERR_MASK                    BIT(12)
+#define C2H_ERR_STAT_RSVD_2_MASK                           BIT(11)
+#define C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK            BIT(10)
+#define C2H_ERR_STAT_ERR_DESC_CNT_MASK                     BIT(9)
+#define C2H_ERR_STAT_RSVD_3_MASK                           BIT(8)
+#define C2H_ERR_STAT_MSI_INT_FAIL_MASK                     BIT(7)
+#define C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK             BIT(6)
+#define C2H_ERR_STAT_RSVD_4_MASK                           BIT(5)
+#define C2H_ERR_STAT_DESC_RSP_ERR_MASK                     BIT(4)
+#define C2H_ERR_STAT_QID_MISMATCH_MASK                     BIT(3)
+#define C2H_ERR_STAT_SH_CMPT_DSC_ERR_MASK                  BIT(2)
+#define C2H_ERR_STAT_LEN_MISMATCH_MASK                     BIT(1)
+#define C2H_ERR_STAT_MTY_MISMATCH_MASK                     BIT(0)
+#define EQDMA_C2H_ERR_MASK_ADDR                            0xAF4
+#define C2H_ERR_EN_MASK                          GENMASK(31, 0)
+#define EQDMA_C2H_FATAL_ERR_STAT_ADDR                      0xAF8
+#define C2H_FATAL_ERR_STAT_RSVD_1_MASK                     GENMASK(31, 21)
+#define C2H_FATAL_ERR_STAT_HDR_ECC_UNC_ERR_MASK            BIT(20)
+#define C2H_FATAL_ERR_STAT_AVL_RING_FIFO_RAM_RDBE_MASK     BIT(19)
+#define C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK           BIT(18)
+#define C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK          BIT(17)
+#define C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK          BIT(16)
+#define C2H_FATAL_ERR_STAT_CMPT_FIFO_RAM_RDBE_MASK         BIT(15)
+#define C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK     BIT(14)
+#define C2H_FATAL_ERR_STAT_RESERVED2_MASK                  BIT(13)
+#define C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK          BIT(12)
+#define C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK     BIT(11)
+#define C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK         BIT(10)
+#define C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK          BIT(9)
+#define C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK           BIT(8)
+#define C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK        GENMASK(7, 4)
+#define C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK               BIT(3)
+#define C2H_FATAL_ERR_STAT_RESERVED1_MASK                  BIT(2)
+#define C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK               BIT(1)
+#define C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK               BIT(0)
+#define EQDMA_C2H_FATAL_ERR_MASK_ADDR                      0xAFC
+#define C2H_FATAL_ERR_C2HEN_MASK                 GENMASK(31, 0)
+#define EQDMA_C2H_FATAL_ERR_ENABLE_ADDR                    0xB00
+#define C2H_FATAL_ERR_ENABLE_RSVD_1_MASK                   GENMASK(31, 2)
+#define C2H_FATAL_ERR_ENABLE_WPL_PAR_INV_MASK             BIT(1)
+#define C2H_FATAL_ERR_ENABLE_WRQ_DIS_MASK                 BIT(0)
+#define EQDMA_GLBL_ERR_INT_ADDR                            0xB04
+#define GLBL_ERR_INT_RSVD_1_MASK                           GENMASK(31, 30)
+#define GLBL_ERR_INT_HOST_ID_MASK                          GENMASK(29, 26)
+#define GLBL_ERR_INT_DIS_INTR_ON_VF_MASK                   BIT(25)
+#define GLBL_ERR_INT_ARM_MASK                             BIT(24)
+#define GLBL_ERR_INT_EN_COAL_MASK                          BIT(23)
+#define GLBL_ERR_INT_VEC_MASK                              GENMASK(22, 12)
+#define GLBL_ERR_INT_FUNC_MASK                             GENMASK(11, 0)
+#define EQDMA_C2H_PFCH_CFG_ADDR                            0xB08
+#define C2H_PFCH_CFG_EVTFL_TH_MASK                         GENMASK(31, 16)
+#define C2H_PFCH_CFG_FL_TH_MASK                            GENMASK(15, 0)
+#define EQDMA_C2H_PFCH_CFG_1_ADDR                          0xA80
+#define C2H_PFCH_CFG_1_EVT_QCNT_TH_MASK                    GENMASK(31, 16)
+#define C2H_PFCH_CFG_1_QCNT_MASK                           GENMASK(15, 0)
+#define EQDMA_C2H_PFCH_CFG_2_ADDR                          0xA84
+#define C2H_PFCH_CFG_2_FENCE_MASK                          BIT(31)
+#define C2H_PFCH_CFG_2_RSVD_MASK                           GENMASK(30, 29)
+#define C2H_PFCH_CFG_2_VAR_DESC_NO_DROP_MASK               BIT(28)
+#define C2H_PFCH_CFG_2_LL_SZ_TH_MASK                       GENMASK(27, 12)
+#define C2H_PFCH_CFG_2_VAR_DESC_NUM_MASK                   GENMASK(11, 6)
+#define C2H_PFCH_CFG_2_NUM_MASK                            GENMASK(5, 0)
+#define EQDMA_C2H_INT_TIMER_TICK_ADDR                      0xB0C
+#define C2H_INT_TIMER_TICK_MASK                           GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_RSP_DROP_ACCEPTED_ADDR         0xB10
+#define C2H_STAT_DESC_RSP_DROP_ACCEPTED_D_MASK             GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_RSP_ERR_ACCEPTED_ADDR          0xB14
+#define C2H_STAT_DESC_RSP_ERR_ACCEPTED_D_MASK              GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DESC_REQ_ADDR                       0xB18
+#define C2H_STAT_DESC_REQ_MASK                            GENMASK(31, 0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_0_ADDR                  0xB1C
+#define C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TVALID_MASK      BIT(31)
+#define C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TREADY_MASK      BIT(30)
+#define C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TVALID_MASK      GENMASK(29, 27)
+#define C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TREADY_MASK      GENMASK(26, 24)
+#define C2H_STAT_DMA_ENG_0_PLD_FIFO_IN_RDY_MASK        BIT(23)
+#define C2H_STAT_DMA_ENG_0_QID_FIFO_IN_RDY_MASK        BIT(22)
+#define C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_VLD_MASK       BIT(21)
+#define C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_QID_MASK       GENMASK(20, 9)
+#define C2H_STAT_DMA_ENG_0_WRB_FIFO_IN_RDY_MASK        BIT(8)
+#define C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT_MASK       GENMASK(7, 5)
+#define C2H_STAT_DMA_ENG_0_WRB_SM_CS_MASK              BIT(4)
+#define C2H_STAT_DMA_ENG_0_MAIN_SM_CS_MASK             GENMASK(3, 0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_1_ADDR                  0xB20
+#define C2H_STAT_DMA_ENG_1_RSVD_1_MASK                 GENMASK(31, 29)
+#define C2H_STAT_DMA_ENG_1_QID_FIFO_OUT_CNT_MASK       GENMASK(28, 18)
+#define C2H_STAT_DMA_ENG_1_PLD_FIFO_OUT_CNT_MASK       GENMASK(17, 7)
+#define C2H_STAT_DMA_ENG_1_PLD_ST_FIFO_CNT_MASK        GENMASK(6, 0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_2_ADDR                  0xB24
+#define C2H_STAT_DMA_ENG_2_RSVD_1_MASK                 GENMASK(31, 29)
+#define C2H_STAT_DMA_ENG_2_QID_FIFO_OUT_CNT_MASK       GENMASK(28, 18)
+#define C2H_STAT_DMA_ENG_2_PLD_FIFO_OUT_CNT_MASK       GENMASK(17, 7)
+#define C2H_STAT_DMA_ENG_2_PLD_ST_FIFO_CNT_MASK        GENMASK(6, 0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_3_ADDR                  0xB28
+#define C2H_STAT_DMA_ENG_3_RSVD_1_MASK                 GENMASK(31, 24)
+#define C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_CNT_MASK       GENMASK(23, 19)
+#define C2H_STAT_DMA_ENG_3_QID_FIFO_OUT_VLD_MASK       BIT(18)
+#define C2H_STAT_DMA_ENG_3_PLD_FIFO_OUT_VLD_MASK       BIT(17)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_VLD_MASK    BIT(16)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_EOP_MASK BIT(15)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE_MASK BIT(14)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_DROP_MASK BIT(13)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_ERR_MASK BIT(12)
+#define C2H_STAT_DMA_ENG_3_DESC_CNT_FIFO_IN_RDY_MASK   BIT(11)
+#define C2H_STAT_DMA_ENG_3_DESC_RSP_FIFO_IN_RDY_MASK   BIT(10)
+#define C2H_STAT_DMA_ENG_3_PLD_PKT_ID_LARGER_0_MASK    BIT(9)
+#define C2H_STAT_DMA_ENG_3_WRQ_VLD_MASK                BIT(8)
+#define C2H_STAT_DMA_ENG_3_WRQ_RDY_MASK                BIT(7)
+#define C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_RDY_MASK       BIT(6)
+#define C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_DROP_MASK BIT(5)
+#define C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_ERR_MASK BIT(4)
+#define C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_MARKER_MASK BIT(3)
+#define C2H_STAT_DMA_ENG_3_WRQ_PACKET_PRE_EOR_MASK     BIT(2)
+#define C2H_STAT_DMA_ENG_3_WCP_FIFO_IN_RDY_MASK        BIT(1)
+#define C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_IN_RDY_MASK     BIT(0)
+#define EQDMA_C2H_DBG_PFCH_ERR_CTXT_ADDR                   0xB2C
+#define C2H_PFCH_ERR_CTXT_RSVD_1_MASK                  GENMASK(31, 14)
+#define C2H_PFCH_ERR_CTXT_ERR_STAT_MASK                BIT(13)
+#define C2H_PFCH_ERR_CTXT_CMD_WR_MASK                  BIT(12)
+#define C2H_PFCH_ERR_CTXT_QID_MASK                     GENMASK(11, 1)
+#define C2H_PFCH_ERR_CTXT_DONE_MASK                    BIT(0)
+#define EQDMA_C2H_FIRST_ERR_QID_ADDR                       0xB30
+#define C2H_FIRST_ERR_QID_RSVD_1_MASK                      GENMASK(31, 21)
+#define C2H_FIRST_ERR_QID_ERR_TYPE_MASK                    GENMASK(20, 16)
+#define C2H_FIRST_ERR_QID_RSVD_MASK                        GENMASK(15, 13)
+#define C2H_FIRST_ERR_QID_QID_MASK                         GENMASK(12, 0)
+#define EQDMA_STAT_NUM_WRB_IN_ADDR                         0xB34
+#define STAT_NUM_WRB_IN_RSVD_1_MASK                        GENMASK(31, 16)
+#define STAT_NUM_WRB_IN_WRB_CNT_MASK                       GENMASK(15, 0)
+#define EQDMA_STAT_NUM_WRB_OUT_ADDR                        0xB38
+#define STAT_NUM_WRB_OUT_RSVD_1_MASK                       GENMASK(31, 16)
+#define STAT_NUM_WRB_OUT_WRB_CNT_MASK                      GENMASK(15, 0)
+#define EQDMA_STAT_NUM_WRB_DRP_ADDR                        0xB3C
+#define STAT_NUM_WRB_DRP_RSVD_1_MASK                       GENMASK(31, 16)
+#define STAT_NUM_WRB_DRP_WRB_CNT_MASK                      GENMASK(15, 0)
+#define EQDMA_STAT_NUM_STAT_DESC_OUT_ADDR                  0xB40
+#define STAT_NUM_STAT_DESC_OUT_RSVD_1_MASK                 GENMASK(31, 16)
+#define STAT_NUM_STAT_DESC_OUT_CNT_MASK                    GENMASK(15, 0)
+#define EQDMA_STAT_NUM_DSC_CRDT_SENT_ADDR                  0xB44
+#define STAT_NUM_DSC_CRDT_SENT_RSVD_1_MASK                 GENMASK(31, 16)
+#define STAT_NUM_DSC_CRDT_SENT_CNT_MASK                    GENMASK(15, 0)
+#define EQDMA_STAT_NUM_FCH_DSC_RCVD_ADDR                   0xB48
+#define STAT_NUM_FCH_DSC_RCVD_RSVD_1_MASK                  GENMASK(31, 16)
+#define STAT_NUM_FCH_DSC_RCVD_DSC_CNT_MASK                 GENMASK(15, 0)
+#define EQDMA_STAT_NUM_BYP_DSC_RCVD_ADDR                   0xB4C
+#define STAT_NUM_BYP_DSC_RCVD_RSVD_1_MASK                  GENMASK(31, 11)
+#define STAT_NUM_BYP_DSC_RCVD_DSC_CNT_MASK                 GENMASK(10, 0)
+#define EQDMA_C2H_WRB_COAL_CFG_ADDR                        0xB50
+#define C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK                   GENMASK(31, 26)
+#define C2H_WRB_COAL_CFG_TICK_VAL_MASK                     GENMASK(25, 14)
+#define C2H_WRB_COAL_CFG_TICK_CNT_MASK                     GENMASK(13, 2)
+#define C2H_WRB_COAL_CFG_SET_GLB_FLUSH_MASK                BIT(1)
+#define C2H_WRB_COAL_CFG_DONE_GLB_FLUSH_MASK               BIT(0)
+#define EQDMA_C2H_INTR_H2C_REQ_ADDR                        0xB54
+#define C2H_INTR_H2C_REQ_RSVD_1_MASK                       GENMASK(31, 18)
+#define C2H_INTR_H2C_REQ_CNT_MASK                          GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_MM_REQ_ADDR                     0xB58
+#define C2H_INTR_C2H_MM_REQ_RSVD_1_MASK                    GENMASK(31, 18)
+#define C2H_INTR_C2H_MM_REQ_CNT_MASK                       GENMASK(17, 0)
+#define EQDMA_C2H_INTR_ERR_INT_REQ_ADDR                    0xB5C
+#define C2H_INTR_ERR_INT_REQ_RSVD_1_MASK                   GENMASK(31, 18)
+#define C2H_INTR_ERR_INT_REQ_CNT_MASK                      GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_ST_REQ_ADDR                     0xB60
+#define C2H_INTR_C2H_ST_REQ_RSVD_1_MASK                    GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_REQ_CNT_MASK                       GENMASK(17, 0)
+#define EQDMA_C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_ADDR        0xB64
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1_MASK       GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT_MASK          GENMASK(17, 0)
+#define EQDMA_C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_ADDR       0xB68
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1_MASK      GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT_MASK         GENMASK(17, 0)
+#define EQDMA_C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_ADDR    0xB6C
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1_MASK   GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT_MASK      GENMASK(17, 0)
+#define EQDMA_C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_ADDR      0xB70
+#define C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1_MASK     GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT_MASK        GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_ST_MSIX_ACK_ADDR                0xB74
+#define C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1_MASK               GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_MSIX_ACK_CNT_MASK                  GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_ST_MSIX_FAIL_ADDR               0xB78
+#define C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1_MASK              GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_MSIX_FAIL_CNT_MASK                 GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_ST_NO_MSIX_ADDR                 0xB7C
+#define C2H_INTR_C2H_ST_NO_MSIX_RSVD_1_MASK                GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_NO_MSIX_CNT_MASK                   GENMASK(17, 0)
+#define EQDMA_C2H_INTR_C2H_ST_CTXT_INVAL_ADDR              0xB80
+#define C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1_MASK             GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_CTXT_INVAL_CNT_MASK                GENMASK(17, 0)
+#define EQDMA_C2H_STAT_WR_CMP_ADDR                         0xB84
+#define C2H_STAT_WR_CMP_RSVD_1_MASK                        GENMASK(31, 18)
+#define C2H_STAT_WR_CMP_CNT_MASK                           GENMASK(17, 0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_4_ADDR                  0xB88
+#define C2H_STAT_DMA_ENG_4_RSVD_1_MASK                 GENMASK(31, 24)
+#define C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_CNT_MASK       GENMASK(23, 19)
+#define C2H_STAT_DMA_ENG_4_QID_FIFO_OUT_VLD_MASK       BIT(18)
+#define C2H_STAT_DMA_ENG_4_PLD_FIFO_OUT_VLD_MASK       BIT(17)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_VLD_MASK    BIT(16)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_EOP_MASK BIT(15)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE_MASK BIT(14)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_DROP_MASK BIT(13)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_ERR_MASK BIT(12)
+#define C2H_STAT_DMA_ENG_4_DESC_CNT_FIFO_IN_RDY_MASK   BIT(11)
+#define C2H_STAT_DMA_ENG_4_DESC_RSP_FIFO_IN_RDY_MASK   BIT(10)
+#define C2H_STAT_DMA_ENG_4_PLD_PKT_ID_LARGER_0_MASK    BIT(9)
+#define C2H_STAT_DMA_ENG_4_WRQ_VLD_MASK                BIT(8)
+#define C2H_STAT_DMA_ENG_4_WRQ_RDY_MASK                BIT(7)
+#define C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_RDY_MASK       BIT(6)
+#define C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_DROP_MASK BIT(5)
+#define C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_ERR_MASK BIT(4)
+#define C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_MARKER_MASK BIT(3)
+#define C2H_STAT_DMA_ENG_4_WRQ_PACKET_PRE_EOR_MASK     BIT(2)
+#define C2H_STAT_DMA_ENG_4_WCP_FIFO_IN_RDY_MASK        BIT(1)
+#define C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_IN_RDY_MASK     BIT(0)
+#define EQDMA_C2H_STAT_DBG_DMA_ENG_5_ADDR                  0xB8C
+#define C2H_STAT_DMA_ENG_5_RSVD_1_MASK                 GENMASK(31, 30)
+#define C2H_STAT_DMA_ENG_5_WRB_SM_VIRT_CH_MASK         BIT(29)
+#define C2H_STAT_DMA_ENG_5_WRB_FIFO_IN_REQ_MASK        GENMASK(28, 24)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_CNT_MASK       GENMASK(23, 22)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_LEN_MASK  GENMASK(21, 6)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VIRT_CH_MASK BIT(5)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VAR_DESC_MASK BIT(4)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_DROP_REQ_MASK BIT(3)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_NUM_BUF_OV_MASK BIT(2)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_MARKER_MASK BIT(1)
+#define C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_HAS_CMPT_MASK BIT(0)
+#define EQDMA_C2H_DBG_PFCH_QID_ADDR                        0xB90
+#define C2H_PFCH_QID_RSVD_1_MASK                       GENMASK(31, 16)
+#define C2H_PFCH_QID_ERR_CTXT_MASK                     BIT(15)
+#define C2H_PFCH_QID_TARGET_MASK                       GENMASK(14, 12)
+#define C2H_PFCH_QID_QID_OR_TAG_MASK                   GENMASK(11, 0)
+#define EQDMA_C2H_DBG_PFCH_ADDR                            0xB94
+#define C2H_PFCH_DATA_MASK                             GENMASK(31, 0)
+#define EQDMA_C2H_INT_DBG_ADDR                             0xB98
+#define C2H_INT_RSVD_1_MASK                            GENMASK(31, 8)
+#define C2H_INT_INT_COAL_SM_MASK                       GENMASK(7, 4)
+#define C2H_INT_INT_SM_MASK                            GENMASK(3, 0)
+#define EQDMA_C2H_STAT_IMM_ACCEPTED_ADDR                   0xB9C
+#define C2H_STAT_IMM_ACCEPTED_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_STAT_IMM_ACCEPTED_CNT_MASK                     GENMASK(17, 0)
+#define EQDMA_C2H_STAT_MARKER_ACCEPTED_ADDR                0xBA0
+#define C2H_STAT_MARKER_ACCEPTED_RSVD_1_MASK               GENMASK(31, 18)
+#define C2H_STAT_MARKER_ACCEPTED_CNT_MASK                  GENMASK(17, 0)
+#define EQDMA_C2H_STAT_DISABLE_CMP_ACCEPTED_ADDR           0xBA4
+#define C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1_MASK          GENMASK(31, 18)
+#define C2H_STAT_DISABLE_CMP_ACCEPTED_CNT_MASK             GENMASK(17, 0)
+#define EQDMA_C2H_PLD_FIFO_CRDT_CNT_ADDR                   0xBA8
+#define C2H_PLD_FIFO_CRDT_CNT_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_PLD_FIFO_CRDT_CNT_CNT_MASK                     GENMASK(17, 0)
+#define EQDMA_C2H_INTR_DYN_REQ_ADDR                        0xBAC
+#define C2H_INTR_DYN_REQ_RSVD_1_MASK                       GENMASK(31, 18)
+#define C2H_INTR_DYN_REQ_CNT_MASK                          GENMASK(17, 0)
+#define EQDMA_C2H_INTR_DYN_MISC_ADDR                       0xBB0
+#define C2H_INTR_DYN_MISC_RSVD_1_MASK                      GENMASK(31, 18)
+#define C2H_INTR_DYN_MISC_CNT_MASK                         GENMASK(17, 0)
+#define EQDMA_C2H_DROP_LEN_MISMATCH_ADDR                   0xBB4
+#define C2H_DROP_LEN_MISMATCH_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_DROP_LEN_MISMATCH_CNT_MASK                     GENMASK(17, 0)
+#define EQDMA_C2H_DROP_DESC_RSP_LEN_ADDR                   0xBB8
+#define C2H_DROP_DESC_RSP_LEN_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_DROP_DESC_RSP_LEN_CNT_MASK                     GENMASK(17, 0)
+#define EQDMA_C2H_DROP_QID_FIFO_LEN_ADDR                   0xBBC
+#define C2H_DROP_QID_FIFO_LEN_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_DROP_QID_FIFO_LEN_CNT_MASK                     GENMASK(17, 0)
+#define EQDMA_C2H_DROP_PLD_CNT_ADDR                        0xBC0
+#define C2H_DROP_PLD_CNT_RSVD_1_MASK                       GENMASK(31, 18)
+#define C2H_DROP_PLD_CNT_CNT_MASK                          GENMASK(17, 0)
+#define EQDMA_C2H_CMPT_FORMAT_0_ADDR                       0xBC4
+#define C2H_CMPT_FORMAT_0_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_0_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_1_ADDR                       0xBC8
+#define C2H_CMPT_FORMAT_1_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_1_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_2_ADDR                       0xBCC
+#define C2H_CMPT_FORMAT_2_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_2_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_3_ADDR                       0xBD0
+#define C2H_CMPT_FORMAT_3_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_3_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_4_ADDR                       0xBD4
+#define C2H_CMPT_FORMAT_4_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_4_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_5_ADDR                       0xBD8
+#define C2H_CMPT_FORMAT_5_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_5_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_CMPT_FORMAT_6_ADDR                       0xBDC
+#define C2H_CMPT_FORMAT_6_DESC_ERR_LOC_MASK                GENMASK(31, 16)
+#define C2H_CMPT_FORMAT_6_COLOR_LOC_MASK                   GENMASK(15, 0)
+#define EQDMA_C2H_PFCH_CACHE_DEPTH_ADDR                    0xBE0
+#define C2H_PFCH_CACHE_DEPTH_MAX_STBUF_MASK                GENMASK(23, 16)
+#define C2H_PFCH_CACHE_DEPTH_MASK                         GENMASK(7, 0)
+#define EQDMA_C2H_WRB_COAL_BUF_DEPTH_ADDR                  0xBE4
+#define C2H_WRB_COAL_BUF_DEPTH_RSVD_1_MASK                 GENMASK(31, 8)
+#define C2H_WRB_COAL_BUF_DEPTH_BUFFER_MASK                 GENMASK(7, 0)
+#define EQDMA_C2H_PFCH_CRDT_ADDR                           0xBE8
+#define C2H_PFCH_CRDT_RSVD_1_MASK                          GENMASK(31, 1)
+#define C2H_PFCH_CRDT_RSVD_2_MASK                          BIT(0)
+#define EQDMA_C2H_STAT_HAS_CMPT_ACCEPTED_ADDR              0xBEC
+#define C2H_STAT_HAS_CMPT_ACCEPTED_RSVD_1_MASK             GENMASK(31, 18)
+#define C2H_STAT_HAS_CMPT_ACCEPTED_CNT_MASK                GENMASK(17, 0)
+#define EQDMA_C2H_STAT_HAS_PLD_ACCEPTED_ADDR               0xBF0
+#define C2H_STAT_HAS_PLD_ACCEPTED_RSVD_1_MASK              GENMASK(31, 18)
+#define C2H_STAT_HAS_PLD_ACCEPTED_CNT_MASK                 GENMASK(17, 0)
+#define EQDMA_C2H_PLD_PKT_ID_ADDR                          0xBF4
+#define C2H_PLD_PKT_ID_CMPT_WAIT_MASK                      GENMASK(31, 16)
+#define C2H_PLD_PKT_ID_DATA_MASK                           GENMASK(15, 0)
+#define EQDMA_C2H_PLD_PKT_ID_1_ADDR                        0xBF8
+#define C2H_PLD_PKT_ID_1_CMPT_WAIT_MASK                    GENMASK(31, 16)
+#define C2H_PLD_PKT_ID_1_DATA_MASK                         GENMASK(15, 0)
+#define EQDMA_C2H_DROP_PLD_CNT_1_ADDR                      0xBFC
+#define C2H_DROP_PLD_CNT_1_RSVD_1_MASK                     GENMASK(31, 18)
+#define C2H_DROP_PLD_CNT_1_CNT_MASK                        GENMASK(17, 0)
+#define EQDMA_H2C_ERR_STAT_ADDR                            0xE00
+#define H2C_ERR_STAT_RSVD_1_MASK                           GENMASK(31, 6)
+#define H2C_ERR_STAT_PAR_ERR_MASK                          BIT(5)
+#define H2C_ERR_STAT_SBE_MASK                              BIT(4)
+#define H2C_ERR_STAT_DBE_MASK                              BIT(3)
+#define H2C_ERR_STAT_NO_DMA_DS_MASK                        BIT(2)
+#define H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK             BIT(1)
+#define H2C_ERR_STAT_ZERO_LEN_DS_MASK                      BIT(0)
+#define EQDMA_H2C_ERR_MASK_ADDR                            0xE04
+#define H2C_ERR_EN_MASK                          GENMASK(31, 0)
+#define EQDMA_H2C_FIRST_ERR_QID_ADDR                       0xE08
+#define H2C_FIRST_ERR_QID_RSVD_1_MASK                      GENMASK(31, 20)
+#define H2C_FIRST_ERR_QID_ERR_TYPE_MASK                    GENMASK(19, 16)
+#define H2C_FIRST_ERR_QID_RSVD_2_MASK                      GENMASK(15, 13)
+#define H2C_FIRST_ERR_QID_QID_MASK                         GENMASK(12, 0)
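+/*
+ * Extraction sketch (illustrative only): multi-bit fields are decoded by
+ * masking the 32-bit register value and shifting down to bit 0, e.g.
+ *
+ *   qid = FIELD_GET(H2C_FIRST_ERR_QID_QID_MASK, reg_val);
+ *
+ * FIELD_GET() stands in for the access library's shift-and-mask helper.
+ */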
+#define EQDMA_H2C_DBG_REG0_ADDR                            0xE0C
+#define H2C_REG0_NUM_DSC_RCVD_MASK                     GENMASK(31, 16)
+#define H2C_REG0_NUM_WRB_SENT_MASK                     GENMASK(15, 0)
+#define EQDMA_H2C_DBG_REG1_ADDR                            0xE10
+#define H2C_REG1_NUM_REQ_SENT_MASK                     GENMASK(31, 16)
+#define H2C_REG1_NUM_CMP_SENT_MASK                     GENMASK(15, 0)
+#define EQDMA_H2C_DBG_REG2_ADDR                            0xE14
+#define H2C_REG2_RSVD_1_MASK                           GENMASK(31, 16)
+#define H2C_REG2_NUM_ERR_DSC_RCVD_MASK                 GENMASK(15, 0)
+#define EQDMA_H2C_DBG_REG3_ADDR                            0xE18
+#define H2C_REG3_RSVD_1_MASK                           BIT(31)
+#define H2C_REG3_DSCO_FIFO_EMPTY_MASK                  BIT(30)
+#define H2C_REG3_DSCO_FIFO_FULL_MASK                   BIT(29)
+#define H2C_REG3_CUR_RC_STATE_MASK                     GENMASK(28, 26)
+#define H2C_REG3_RDREQ_LINES_MASK                      GENMASK(25, 16)
+#define H2C_REG3_RDATA_LINES_AVAIL_MASK                GENMASK(15, 6)
+#define H2C_REG3_PEND_FIFO_EMPTY_MASK                  BIT(5)
+#define H2C_REG3_PEND_FIFO_FULL_MASK                   BIT(4)
+#define H2C_REG3_CUR_RQ_STATE_MASK                     GENMASK(3, 2)
+#define H2C_REG3_DSCI_FIFO_FULL_MASK                   BIT(1)
+#define H2C_REG3_DSCI_FIFO_EMPTY_MASK                  BIT(0)
+#define EQDMA_H2C_DBG_REG4_ADDR                            0xE1C
+#define H2C_REG4_RDREQ_ADDR_MASK                       GENMASK(31, 0)
+#define EQDMA_H2C_FATAL_ERR_EN_ADDR                        0xE20
+#define H2C_FATAL_ERR_EN_RSVD_1_MASK                       GENMASK(31, 1)
+#define H2C_FATAL_ERR_EN_H2C_MASK                          BIT(0)
+#define EQDMA_H2C_REQ_THROT_PCIE_ADDR                      0xE24
+#define H2C_REQ_THROT_PCIE_EN_REQ_MASK                     BIT(31)
+#define H2C_REQ_THROT_PCIE_MASK                           GENMASK(30, 19)
+#define H2C_REQ_THROT_PCIE_EN_DATA_MASK                    BIT(18)
+#define H2C_REQ_THROT_PCIE_DATA_THRESH_MASK                GENMASK(17, 0)
+#define EQDMA_H2C_ALN_DBG_REG0_ADDR                        0xE28
+#define H2C_ALN_REG0_NUM_PKT_SENT_MASK                 GENMASK(15, 0)
+#define EQDMA_H2C_REQ_THROT_AXIMM_ADDR                     0xE2C
+#define H2C_REQ_THROT_AXIMM_EN_REQ_MASK                    BIT(31)
+#define H2C_REQ_THROT_AXIMM_MASK                          GENMASK(30, 19)
+#define H2C_REQ_THROT_AXIMM_EN_DATA_MASK                   BIT(18)
+#define H2C_REQ_THROT_AXIMM_DATA_THRESH_MASK               GENMASK(17, 0)
+#define EQDMA_C2H_MM_CTL_ADDR                              0x1004
+#define C2H_MM_CTL_RESERVED1_MASK                          GENMASK(31, 9)
+#define C2H_MM_CTL_ERRC_EN_MASK                            BIT(8)
+#define C2H_MM_CTL_RESERVED0_MASK                          GENMASK(7, 1)
+#define C2H_MM_CTL_RUN_MASK                                BIT(0)
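+/*
+ * Read-modify-write sketch (illustrative only): single-bit controls are
+ * toggled while preserving the other fields, e.g. to start the C2H MM
+ * engine:
+ *
+ *   reg = qdma_reg_read(dev_hndl, EQDMA_C2H_MM_CTL_ADDR);
+ *   qdma_reg_write(dev_hndl, EQDMA_C2H_MM_CTL_ADDR,
+ *                  reg | C2H_MM_CTL_RUN_MASK);
+ *
+ * qdma_reg_read()/qdma_reg_write() stand in for the access-library
+ * register accessors.
+ */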
+#define EQDMA_C2H_MM_STATUS_ADDR                           0x1040
+#define C2H_MM_STATUS_RSVD_1_MASK                          GENMASK(31, 1)
+#define C2H_MM_STATUS_RUN_MASK                             BIT(0)
+#define EQDMA_C2H_MM_CMPL_DESC_CNT_ADDR                    0x1048
+#define C2H_MM_CMPL_DESC_CNT_C2H_CO_MASK                   GENMASK(31, 0)
+#define EQDMA_C2H_MM_ERR_CODE_ENABLE_MASK_ADDR             0x1054
+#define C2H_MM_ERR_CODE_ENABLE_RESERVED1_MASK         BIT(31)
+#define C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM_MASK         BIT(30)
+#define C2H_MM_ERR_CODE_ENABLE_WR_UR_MASK             BIT(29)
+#define C2H_MM_ERR_CODE_ENABLE_WR_FLR_MASK            BIT(28)
+#define C2H_MM_ERR_CODE_ENABLE_RESERVED0_MASK         GENMASK(27, 2)
+#define C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR_MASK        BIT(1)
+#define C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK        BIT(0)
+#define EQDMA_C2H_MM_ERR_CODE_ADDR                         0x1058
+#define C2H_MM_ERR_CODE_RESERVED1_MASK                     GENMASK(31, 28)
+#define C2H_MM_ERR_CODE_CIDX_MASK                          GENMASK(27, 12)
+#define C2H_MM_ERR_CODE_RESERVED0_MASK                     GENMASK(11, 10)
+#define C2H_MM_ERR_CODE_SUB_TYPE_MASK                      GENMASK(9, 5)
+#define C2H_MM_ERR_CODE_MASK                              GENMASK(4, 0)
+#define EQDMA_C2H_MM_ERR_INFO_ADDR                         0x105C
+#define C2H_MM_ERR_INFO_VALID_MASK                         BIT(31)
+#define C2H_MM_ERR_INFO_SEL_MASK                           BIT(30)
+#define C2H_MM_ERR_INFO_RSVD_1_MASK                        GENMASK(29, 24)
+#define C2H_MM_ERR_INFO_QID_MASK                           GENMASK(23, 0)
+#define EQDMA_C2H_MM_PERF_MON_CTL_ADDR                     0x10C0
+#define C2H_MM_PERF_MON_CTL_RSVD_1_MASK                    GENMASK(31, 4)
+#define C2H_MM_PERF_MON_CTL_IMM_START_MASK                 BIT(3)
+#define C2H_MM_PERF_MON_CTL_RUN_START_MASK                 BIT(2)
+#define C2H_MM_PERF_MON_CTL_IMM_CLEAR_MASK                 BIT(1)
+#define C2H_MM_PERF_MON_CTL_RUN_CLEAR_MASK                 BIT(0)
+#define EQDMA_C2H_MM_PERF_MON_CYCLE_CNT0_ADDR              0x10C4
+#define C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK            GENMASK(31, 0)
+#define EQDMA_C2H_MM_PERF_MON_CYCLE_CNT1_ADDR              0x10C8
+#define C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK             GENMASK(31, 10)
+#define C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK            GENMASK(9, 0)
+#define EQDMA_C2H_MM_PERF_MON_DATA_CNT0_ADDR               0x10CC
+#define C2H_MM_PERF_MON_DATA_CNT0_DCNT_MASK                GENMASK(31, 0)
+#define EQDMA_C2H_MM_PERF_MON_DATA_CNT1_ADDR               0x10D0
+#define C2H_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK              GENMASK(31, 10)
+#define C2H_MM_PERF_MON_DATA_CNT1_DCNT_MASK                GENMASK(9, 0)
+#define EQDMA_C2H_MM_DBG_ADDR                              0x10E8
+#define C2H_MM_RSVD_1_MASK                             GENMASK(31, 24)
+#define C2H_MM_RRQ_ENTRIES_MASK                        GENMASK(23, 17)
+#define C2H_MM_DAT_FIFO_SPC_MASK                       GENMASK(16, 7)
+#define C2H_MM_RD_STALL_MASK                           BIT(6)
+#define C2H_MM_RRQ_FIFO_FI_MASK                        BIT(5)
+#define C2H_MM_WR_STALL_MASK                           BIT(4)
+#define C2H_MM_WRQ_FIFO_FI_MASK                        BIT(3)
+#define C2H_MM_WBK_STALL_MASK                          BIT(2)
+#define C2H_MM_DSC_FIFO_EP_MASK                        BIT(1)
+#define C2H_MM_DSC_FIFO_FL_MASK                        BIT(0)
+#define EQDMA_H2C_MM_CTL_ADDR                              0x1204
+#define H2C_MM_CTL_RESERVED1_MASK                          GENMASK(31, 9)
+#define H2C_MM_CTL_ERRC_EN_MASK                            BIT(8)
+#define H2C_MM_CTL_RESERVED0_MASK                          GENMASK(7, 1)
+#define H2C_MM_CTL_RUN_MASK                                BIT(0)
+#define EQDMA_H2C_MM_STATUS_ADDR                           0x1240
+#define H2C_MM_STATUS_RSVD_1_MASK                          GENMASK(31, 1)
+#define H2C_MM_STATUS_RUN_MASK                             BIT(0)
+#define EQDMA_H2C_MM_CMPL_DESC_CNT_ADDR                    0x1248
+#define H2C_MM_CMPL_DESC_CNT_H2C_CO_MASK                   GENMASK(31, 0)
+#define EQDMA_H2C_MM_ERR_CODE_ENABLE_MASK_ADDR             0x1254
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED5_MASK         GENMASK(31, 30)
+#define H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK        BIT(29)
+#define H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR_MASK        BIT(28)
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED4_MASK         GENMASK(27, 23)
+#define H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR_MASK     BIT(22)
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED3_MASK         GENMASK(21, 17)
+#define H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR_MASK BIT(16)
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED2_MASK         GENMASK(15, 9)
+#define H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR_MASK        BIT(8)
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED1_MASK         GENMASK(7, 6)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR_MASK    BIT(5)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA_MASK       BIT(4)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR_MASK   BIT(3)
+#define H2C_MM_ERR_CODE_ENABLE_RD_UR_CA_MASK          BIT(2)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR_MASK BIT(1)
+#define H2C_MM_ERR_CODE_ENABLE_RESERVED0_MASK         BIT(0)
+#define EQDMA_H2C_MM_ERR_CODE_ADDR                         0x1258
+#define H2C_MM_ERR_CODE_RSVD_1_MASK                        GENMASK(31, 28)
+#define H2C_MM_ERR_CODE_CIDX_MASK                          GENMASK(27, 12)
+#define H2C_MM_ERR_CODE_RESERVED0_MASK                     GENMASK(11, 10)
+#define H2C_MM_ERR_CODE_SUB_TYPE_MASK                      GENMASK(9, 5)
+#define H2C_MM_ERR_CODE_MASK                              GENMASK(4, 0)
+#define EQDMA_H2C_MM_ERR_INFO_ADDR                         0x125C
+#define H2C_MM_ERR_INFO_VALID_MASK                         BIT(31)
+#define H2C_MM_ERR_INFO_SEL_MASK                           BIT(30)
+#define H2C_MM_ERR_INFO_RSVD_1_MASK                        GENMASK(29, 24)
+#define H2C_MM_ERR_INFO_QID_MASK                           GENMASK(23, 0)
+#define EQDMA_H2C_MM_PERF_MON_CTL_ADDR                     0x12C0
+#define H2C_MM_PERF_MON_CTL_RSVD_1_MASK                    GENMASK(31, 4)
+#define H2C_MM_PERF_MON_CTL_IMM_START_MASK                 BIT(3)
+#define H2C_MM_PERF_MON_CTL_RUN_START_MASK                 BIT(2)
+#define H2C_MM_PERF_MON_CTL_IMM_CLEAR_MASK                 BIT(1)
+#define H2C_MM_PERF_MON_CTL_RUN_CLEAR_MASK                 BIT(0)
+#define EQDMA_H2C_MM_PERF_MON_CYCLE_CNT0_ADDR              0x12C4
+#define H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK            GENMASK(31, 0)
+#define EQDMA_H2C_MM_PERF_MON_CYCLE_CNT1_ADDR              0x12C8
+#define H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK             GENMASK(31, 10)
+#define H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK            GENMASK(9, 0)
+#define EQDMA_H2C_MM_PERF_MON_DATA_CNT0_ADDR               0x12CC
+#define H2C_MM_PERF_MON_DATA_CNT0_DCNT_MASK                GENMASK(31, 0)
+#define EQDMA_H2C_MM_PERF_MON_DATA_CNT1_ADDR               0x12D0
+#define H2C_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK              GENMASK(31, 10)
+#define H2C_MM_PERF_MON_DATA_CNT1_DCNT_MASK                GENMASK(9, 0)
+#define EQDMA_H2C_MM_DBG_ADDR                              0x12E8
+#define H2C_MM_RSVD_1_MASK                             GENMASK(31, 24)
+#define H2C_MM_RRQ_ENTRIES_MASK                        GENMASK(23, 17)
+#define H2C_MM_DAT_FIFO_SPC_MASK                       GENMASK(16, 7)
+#define H2C_MM_RD_STALL_MASK                           BIT(6)
+#define H2C_MM_RRQ_FIFO_FI_MASK                        BIT(5)
+#define H2C_MM_WR_STALL_MASK                           BIT(4)
+#define H2C_MM_WRQ_FIFO_FI_MASK                        BIT(3)
+#define H2C_MM_WBK_STALL_MASK                          BIT(2)
+#define H2C_MM_DSC_FIFO_EP_MASK                        BIT(1)
+#define H2C_MM_DSC_FIFO_FL_MASK                        BIT(0)
+#define EQDMA_C2H_CRDT_COAL_CFG_1_ADDR                     0x1400
+#define C2H_CRDT_COAL_CFG_1_RSVD_1_MASK                    GENMASK(31, 18)
+#define C2H_CRDT_COAL_CFG_1_PLD_FIFO_TH_MASK               GENMASK(17, 10)
+#define C2H_CRDT_COAL_CFG_1_TIMER_TH_MASK                  GENMASK(9, 0)
+#define EQDMA_C2H_CRDT_COAL_CFG_2_ADDR                     0x1404
+#define C2H_CRDT_COAL_CFG_2_RSVD_1_MASK                    GENMASK(31, 24)
+#define C2H_CRDT_COAL_CFG_2_FIFO_TH_MASK                   GENMASK(23, 16)
+#define C2H_CRDT_COAL_CFG_2_RESERVED1_MASK                 GENMASK(15, 11)
+#define C2H_CRDT_COAL_CFG_2_NT_TH_MASK                     GENMASK(10, 0)
+#define EQDMA_C2H_PFCH_BYP_QID_ADDR                        0x1408
+#define C2H_PFCH_BYP_QID_RSVD_1_MASK                       GENMASK(31, 12)
+#define C2H_PFCH_BYP_QID_MASK                             GENMASK(11, 0)
+#define EQDMA_C2H_PFCH_BYP_TAG_ADDR                        0x140C
+#define C2H_PFCH_BYP_TAG_RSVD_1_MASK                       GENMASK(31, 20)
+#define C2H_PFCH_BYP_TAG_BYP_QID_MASK                      GENMASK(19, 8)
+#define C2H_PFCH_BYP_TAG_RSVD_2_MASK                       BIT(7)
+#define C2H_PFCH_BYP_TAG_MASK                             GENMASK(6, 0)
+#define EQDMA_C2H_WATER_MARK_ADDR                          0x1500
+#define C2H_WATER_MARK_HIGH_WM_MASK                        GENMASK(31, 16)
+#define C2H_WATER_MARK_LOW_WM_MASK                         GENMASK(15, 0)
+#define SW_IND_CTXT_DATA_W7_VIRTIO_DSC_BASE_H_MASK        GENMASK(10, 0)
+#define SW_IND_CTXT_DATA_W6_VIRTIO_DSC_BASE_M_MASK        GENMASK(31, 0)
+#define SW_IND_CTXT_DATA_W5_VIRTIO_DSC_BASE_L_MASK        GENMASK(31, 11)
+#define SW_IND_CTXT_DATA_W5_PASID_EN_MASK                 BIT(10)
+#define SW_IND_CTXT_DATA_W5_PASID_H_MASK                  GENMASK(9, 0)
+#define SW_IND_CTXT_DATA_W4_PASID_L_MASK                  GENMASK(31, 20)
+#define SW_IND_CTXT_DATA_W4_HOST_ID_MASK                  GENMASK(19, 16)
+#define SW_IND_CTXT_DATA_W4_IRQ_BYP_MASK                  BIT(15)
+#define SW_IND_CTXT_DATA_W4_PACK_BYP_OUT_MASK             BIT(14)
+#define SW_IND_CTXT_DATA_W4_VIRTIO_EN_MASK                BIT(13)
+#define SW_IND_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK           BIT(12)
+#define SW_IND_CTXT_DATA_W4_INT_AGGR_MASK                 BIT(11)
+#define SW_IND_CTXT_DATA_W4_VEC_MASK                      GENMASK(10, 0)
+#define SW_IND_CTXT_DATA_W3_DSC_BASE_H_MASK               GENMASK(31, 0)
+#define SW_IND_CTXT_DATA_W2_DSC_BASE_L_MASK               GENMASK(31, 0)
+#define SW_IND_CTXT_DATA_W1_IS_MM_MASK                    BIT(31)
+#define SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK                 BIT(30)
+#define SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK                  BIT(29)
+#define SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK              BIT(28)
+#define SW_IND_CTXT_DATA_W1_ERR_MASK                      GENMASK(27, 26)
+#define SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK              BIT(25)
+#define SW_IND_CTXT_DATA_W1_PORT_ID_MASK                  GENMASK(24, 22)
+#define SW_IND_CTXT_DATA_W1_IRQ_EN_MASK                   BIT(21)
+#define SW_IND_CTXT_DATA_W1_WBK_EN_MASK                   BIT(20)
+#define SW_IND_CTXT_DATA_W1_MM_CHN_MASK                   BIT(19)
+#define SW_IND_CTXT_DATA_W1_BYPASS_MASK                   BIT(18)
+#define SW_IND_CTXT_DATA_W1_DSC_SZ_MASK                   GENMASK(17, 16)
+#define SW_IND_CTXT_DATA_W1_RNG_SZ_MASK                   GENMASK(15, 12)
+#define SW_IND_CTXT_DATA_W1_RSVD_1_MASK                   GENMASK(11, 9)
+#define SW_IND_CTXT_DATA_W1_FETCH_MAX_MASK                GENMASK(8, 5)
+#define SW_IND_CTXT_DATA_W1_AT_MASK                       BIT(4)
+#define SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK             BIT(3)
+#define SW_IND_CTXT_DATA_W1_WBI_CHK_MASK                  BIT(2)
+#define SW_IND_CTXT_DATA_W1_FCRD_EN_MASK                  BIT(1)
+#define SW_IND_CTXT_DATA_W1_QEN_MASK                      BIT(0)
+#define SW_IND_CTXT_DATA_W0_RSV_MASK                      GENMASK(31, 29)
+#define SW_IND_CTXT_DATA_W0_FNC_MASK                      GENMASK(28, 17)
+#define SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK                  BIT(16)
+#define SW_IND_CTXT_DATA_W0_PIDX_MASK                     GENMASK(15, 0)
+#define HW_IND_CTXT_DATA_W1_RSVD_1_MASK                   BIT(15)
+#define HW_IND_CTXT_DATA_W1_FETCH_PND_MASK                GENMASK(14, 11)
+#define HW_IND_CTXT_DATA_W1_EVT_PND_MASK                  BIT(10)
+#define HW_IND_CTXT_DATA_W1_IDL_STP_B_MASK                BIT(9)
+#define HW_IND_CTXT_DATA_W1_DSC_PND_MASK                  BIT(8)
+#define HW_IND_CTXT_DATA_W1_RSVD_2_MASK                   GENMASK(7, 0)
+#define HW_IND_CTXT_DATA_W0_CRD_USE_MASK                  GENMASK(31, 16)
+#define HW_IND_CTXT_DATA_W0_CIDX_MASK                     GENMASK(15, 0)
+#define CRED_CTXT_DATA_W0_RSVD_1_MASK                     GENMASK(31, 16)
+#define CRED_CTXT_DATA_W0_CREDT_MASK                      GENMASK(15, 0)
+#define PREFETCH_CTXT_DATA_W1_VALID_MASK                  BIT(13)
+#define PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK              GENMASK(12, 0)
+#define PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK              GENMASK(31, 29)
+#define PREFETCH_CTXT_DATA_W0_PFCH_MASK                   BIT(28)
+#define PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK                BIT(27)
+#define PREFETCH_CTXT_DATA_W0_ERR_MASK                    BIT(26)
+#define PREFETCH_CTXT_DATA_W0_RSVD_MASK                   GENMASK(25, 22)
+#define PREFETCH_CTXT_DATA_W0_PFCH_NEED_MASK              GENMASK(21, 16)
+#define PREFETCH_CTXT_DATA_W0_NUM_PFCH_MASK               GENMASK(15, 10)
+#define PREFETCH_CTXT_DATA_W0_VIRTIO_MASK                 BIT(9)
+#define PREFETCH_CTXT_DATA_W0_VAR_DESC_MASK               BIT(8)
+#define PREFETCH_CTXT_DATA_W0_PORT_ID_MASK                GENMASK(7, 5)
+#define PREFETCH_CTXT_DATA_W0_BUF_SZ_IDX_MASK             GENMASK(4, 1)
+#define PREFETCH_CTXT_DATA_W0_BYPASS_MASK                 BIT(0)
+#define CMPL_CTXT_DATA_W6_RSVD_1_H_MASK                   GENMASK(7, 0)
+#define CMPL_CTXT_DATA_W5_RSVD_1_L_MASK                   GENMASK(31, 23)
+#define CMPL_CTXT_DATA_W5_PORT_ID_MASK                    GENMASK(22, 20)
+#define CMPL_CTXT_DATA_W5_SH_CMPT_MASK                    BIT(19)
+#define CMPL_CTXT_DATA_W5_VIO_EOP_MASK                    BIT(18)
+#define CMPL_CTXT_DATA_W5_BADDR4_LOW_MASK                 GENMASK(17, 14)
+#define CMPL_CTXT_DATA_W5_PASID_EN_MASK                   BIT(13)
+#define CMPL_CTXT_DATA_W5_PASID_H_MASK                    GENMASK(12, 0)
+#define CMPL_CTXT_DATA_W4_PASID_L_MASK                    GENMASK(31, 23)
+#define CMPL_CTXT_DATA_W4_HOST_ID_MASK                    GENMASK(22, 19)
+#define CMPL_CTXT_DATA_W4_DIR_C2H_MASK                    BIT(18)
+#define CMPL_CTXT_DATA_W4_VIO_MASK                        BIT(17)
+#define CMPL_CTXT_DATA_W4_DIS_INTR_ON_VF_MASK             BIT(16)
+#define CMPL_CTXT_DATA_W4_INT_AGGR_MASK                   BIT(15)
+#define CMPL_CTXT_DATA_W4_VEC_MASK                        GENMASK(14, 4)
+#define CMPL_CTXT_DATA_W4_AT_MASK                         BIT(3)
+#define CMPL_CTXT_DATA_W4_OVF_CHK_DIS_MASK                BIT(2)
+#define CMPL_CTXT_DATA_W4_FULL_UPD_MASK                   BIT(1)
+#define CMPL_CTXT_DATA_W4_TIMER_RUNNING_MASK              BIT(0)
+#define CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK             BIT(31)
+#define CMPL_CTXT_DATA_W3_ERR_MASK                        GENMASK(30, 29)
+#define CMPL_CTXT_DATA_W3_VALID_MASK                      BIT(28)
+#define CMPL_CTXT_DATA_W3_CIDX_MASK                       GENMASK(27, 12)
+#define CMPL_CTXT_DATA_W3_PIDX_H_MASK                     GENMASK(11, 0)
+#define CMPL_CTXT_DATA_W2_PIDX_L_MASK                     GENMASK(31, 28)
+#define CMPL_CTXT_DATA_W2_DESC_SIZE_MASK                  GENMASK(27, 26)
+#define CMPL_CTXT_DATA_W2_BADDR4_HIGH_H_MASK              GENMASK(25, 0)
+#define CMPL_CTXT_DATA_W1_BADDR4_HIGH_L_MASK              GENMASK(31, 0)
+#define CMPL_CTXT_DATA_W0_QSIZE_IX_MASK                   GENMASK(31, 28)
+#define CMPL_CTXT_DATA_W0_COLOR_MASK                      BIT(27)
+#define CMPL_CTXT_DATA_W0_INT_ST_MASK                     GENMASK(26, 25)
+#define CMPL_CTXT_DATA_W0_TIMER_IX_MASK                   GENMASK(24, 21)
+#define CMPL_CTXT_DATA_W0_CNTER_IX_MASK                   GENMASK(20, 17)
+#define CMPL_CTXT_DATA_W0_FNC_ID_MASK                     GENMASK(16, 5)
+#define CMPL_CTXT_DATA_W0_TRIG_MODE_MASK                  GENMASK(4, 2)
+#define CMPL_CTXT_DATA_W0_EN_INT_MASK                     BIT(1)
+#define CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK               BIT(0)
+#define INTR_CTXT_DATA_W3_FUNC_MASK                       GENMASK(29, 18)
+#define INTR_CTXT_DATA_W3_RSVD_MASK                       GENMASK(17, 14)
+#define INTR_CTXT_DATA_W3_PASID_EN_MASK                   BIT(13)
+#define INTR_CTXT_DATA_W3_PASID_H_MASK                    GENMASK(12, 0)
+#define INTR_CTXT_DATA_W2_PASID_L_MASK                    GENMASK(31, 23)
+#define INTR_CTXT_DATA_W2_HOST_ID_MASK                    GENMASK(22, 19)
+#define INTR_CTXT_DATA_W2_AT_MASK                         BIT(18)
+#define INTR_CTXT_DATA_W2_PIDX_MASK                       GENMASK(17, 6)
+#define INTR_CTXT_DATA_W2_PAGE_SIZE_MASK                  GENMASK(5, 3)
+#define INTR_CTXT_DATA_W2_BADDR_4K_H_MASK                 GENMASK(2, 0)
+#define INTR_CTXT_DATA_W1_BADDR_4K_M_MASK                 GENMASK(31, 0)
+#define INTR_CTXT_DATA_W0_BADDR_4K_L_MASK                 GENMASK(31, 15)
+#define INTR_CTXT_DATA_W0_COLOR_MASK                      BIT(14)
+#define INTR_CTXT_DATA_W0_INT_ST_MASK                     BIT(13)
+#define INTR_CTXT_DATA_W0_RSVD1_MASK                      BIT(12)
+#define INTR_CTXT_DATA_W0_VEC_MASK                        GENMASK(11, 1)
+#define INTR_CTXT_DATA_W0_VALID_MASK                      BIT(0)
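+/*
+ * Assembly sketch (illustrative only): each 32-bit context word is built
+ * by masking fields into place, e.g. for software-context word 0:
+ *
+ *   w0 = FIELD_SET(SW_IND_CTXT_DATA_W0_FNC_MASK, func_id) |
+ *        FIELD_SET(SW_IND_CTXT_DATA_W0_PIDX_MASK, pidx);
+ *
+ * FIELD_SET() stands in for the access library's mask-and-shift helper;
+ * readback reverses the operation with FIELD_GET().
+ */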
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
new file mode 100644
index 0000000000..e8b2762f54
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
@@ -0,0 +1,3908 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "eqdma_soft_reg.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "eqdma_soft_reg_dump.tmh"
+#endif
+
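+/*
+ * Register-dump tables: each regfield_info entry pairs a printable field
+ * name with that field's bit mask inside its 32-bit register, reusing the
+ * masks from eqdma_soft_reg.h. The dump helpers pulled in through
+ * qdma_reg_dump.h walk these tables to decode raw register reads into
+ * per-field output.
+ */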
+static struct regfield_info
+	cfg_blk_identifier_field_info[] = {
+	{"CFG_BLK_IDENTIFIER",
+		CFG_BLK_IDENTIFIER_MASK},
+	{"CFG_BLK_IDENTIFIER_1",
+		CFG_BLK_IDENTIFIER_1_MASK},
+	{"CFG_BLK_IDENTIFIER_RSVD_1",
+		CFG_BLK_IDENTIFIER_RSVD_1_MASK},
+	{"CFG_BLK_IDENTIFIER_VERSION",
+		CFG_BLK_IDENTIFIER_VERSION_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_pcie_max_pld_size_field_info[] = {
+	{"CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_1",
+		CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_1_MASK},
+	{"CFG_BLK_PCIE_MAX_PLD_SIZE_PROG",
+		CFG_BLK_PCIE_MAX_PLD_SIZE_PROG_MASK},
+	{"CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_2",
+		CFG_BLK_PCIE_MAX_PLD_SIZE_RSVD_2_MASK},
+	{"CFG_BLK_PCIE_MAX_PLD_SIZE_ISSUED",
+		CFG_BLK_PCIE_MAX_PLD_SIZE_ISSUED_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_pcie_max_read_req_size_field_info[] = {
+	{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_1",
+		CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_1_MASK},
+	{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE_PROG",
+		CFG_BLK_PCIE_MAX_READ_REQ_SIZE_PROG_MASK},
+	{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_2",
+		CFG_BLK_PCIE_MAX_READ_REQ_SIZE_RSVD_2_MASK},
+	{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE_ISSUED",
+		CFG_BLK_PCIE_MAX_READ_REQ_SIZE_ISSUED_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_system_id_field_info[] = {
+	{"CFG_BLK_SYSTEM_ID_RSVD_1",
+		CFG_BLK_SYSTEM_ID_RSVD_1_MASK},
+	{"CFG_BLK_SYSTEM_ID_INST_TYPE",
+		CFG_BLK_SYSTEM_ID_INST_TYPE_MASK},
+	{"CFG_BLK_SYSTEM_ID",
+		CFG_BLK_SYSTEM_ID_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_msix_enable_field_info[] = {
+	{"CFG_BLK_MSIX_ENABLE",
+		CFG_BLK_MSIX_ENABLE_MASK},
+};
+
+static struct regfield_info
+	cfg_pcie_data_width_field_info[] = {
+	{"CFG_PCIE_DATA_WIDTH_RSVD_1",
+		CFG_PCIE_DATA_WIDTH_RSVD_1_MASK},
+	{"CFG_PCIE_DATA_WIDTH_DATAPATH",
+		CFG_PCIE_DATA_WIDTH_DATAPATH_MASK},
+};
+
+static struct regfield_info
+	cfg_pcie_ctl_field_info[] = {
+	{"CFG_PCIE_CTL_RSVD_1",
+		CFG_PCIE_CTL_RSVD_1_MASK},
+	{"CFG_PCIE_CTL_MGMT_AXIL_CTRL",
+		CFG_PCIE_CTL_MGMT_AXIL_CTRL_MASK},
+	{"CFG_PCIE_CTL_RSVD_2",
+		CFG_PCIE_CTL_RSVD_2_MASK},
+	{"CFG_PCIE_CTL_RRQ_DISABLE",
+		CFG_PCIE_CTL_RRQ_DISABLE_MASK},
+	{"CFG_PCIE_CTL_RELAXED_ORDERING",
+		CFG_PCIE_CTL_RELAXED_ORDERING_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_msi_enable_field_info[] = {
+	{"CFG_BLK_MSI_ENABLE",
+		CFG_BLK_MSI_ENABLE_MASK},
+};
+
+static struct regfield_info
+	cfg_axi_user_max_pld_size_field_info[] = {
+	{"CFG_AXI_USER_MAX_PLD_SIZE_RSVD_1",
+		CFG_AXI_USER_MAX_PLD_SIZE_RSVD_1_MASK},
+	{"CFG_AXI_USER_MAX_PLD_SIZE_ISSUED",
+		CFG_AXI_USER_MAX_PLD_SIZE_ISSUED_MASK},
+	{"CFG_AXI_USER_MAX_PLD_SIZE_RSVD_2",
+		CFG_AXI_USER_MAX_PLD_SIZE_RSVD_2_MASK},
+	{"CFG_AXI_USER_MAX_PLD_SIZE_PROG",
+		CFG_AXI_USER_MAX_PLD_SIZE_PROG_MASK},
+};
+
+static struct regfield_info
+	cfg_axi_user_max_read_req_size_field_info[] = {
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_1",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_1_MASK},
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED_MASK},
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_2",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_RSVD_2_MASK},
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_misc_ctl_field_info[] = {
+	{"CFG_BLK_MISC_CTL_RSVD_1",
+		CFG_BLK_MISC_CTL_RSVD_1_MASK},
+	{"CFG_BLK_MISC_CTL_10B_TAG_EN",
+		CFG_BLK_MISC_CTL_10B_TAG_EN_MASK},
+	{"CFG_BLK_MISC_CTL_RSVD_2",
+		CFG_BLK_MISC_CTL_RSVD_2_MASK},
+	{"CFG_BLK_MISC_CTL_AXI_WBK",
+		CFG_BLK_MISC_CTL_AXI_WBK_MASK},
+	{"CFG_BLK_MISC_CTL_AXI_DSC",
+		CFG_BLK_MISC_CTL_AXI_DSC_MASK},
+	{"CFG_BLK_MISC_CTL_NUM_TAG",
+		CFG_BLK_MISC_CTL_NUM_TAG_MASK},
+	{"CFG_BLK_MISC_CTL_RSVD_3",
+		CFG_BLK_MISC_CTL_RSVD_3_MASK},
+	{"CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER",
+		CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER_MASK},
+};
+
+static struct regfield_info
+	cfg_pl_cred_ctl_field_info[] = {
+	{"CFG_PL_CRED_CTL_RSVD_1",
+		CFG_PL_CRED_CTL_RSVD_1_MASK},
+	{"CFG_PL_CRED_CTL_SLAVE_CRD_RLS",
+		CFG_PL_CRED_CTL_SLAVE_CRD_RLS_MASK},
+	{"CFG_PL_CRED_CTL_RSVD_2",
+		CFG_PL_CRED_CTL_RSVD_2_MASK},
+	{"CFG_PL_CRED_CTL_MASTER_CRD_RST",
+		CFG_PL_CRED_CTL_MASTER_CRD_RST_MASK},
+};
+
+static struct regfield_info
+	cfg_blk_scratch_field_info[] = {
+	{"CFG_BLK_SCRATCH",
+		CFG_BLK_SCRATCH_MASK},
+};
+
+static struct regfield_info
+	cfg_gic_field_info[] = {
+	{"CFG_GIC_RSVD_1",
+		CFG_GIC_RSVD_1_MASK},
+	{"CFG_GIC_GIC_IRQ",
+		CFG_GIC_GIC_IRQ_MASK},
+};
+
+static struct regfield_info
+	ram_sbe_msk_1_a_field_info[] = {
+	{"RAM_SBE_MSK_1_A",
+		RAM_SBE_MSK_1_A_MASK},
+};
+
+static struct regfield_info
+	ram_sbe_sts_1_a_field_info[] = {
+	{"RAM_SBE_STS_1_A_RSVD",
+		RAM_SBE_STS_1_A_RSVD_MASK},
+	{"RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_1",
+		RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK},
+	{"RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_0",
+		RAM_SBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK},
+	{"RAM_SBE_STS_1_A_TAG_EVEN_RAM",
+		RAM_SBE_STS_1_A_TAG_EVEN_RAM_MASK},
+	{"RAM_SBE_STS_1_A_TAG_ODD_RAM",
+		RAM_SBE_STS_1_A_TAG_ODD_RAM_MASK},
+	{"RAM_SBE_STS_1_A_RC_RRQ_EVEN_RAM",
+		RAM_SBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK},
+};
+
+static struct regfield_info
+	ram_dbe_msk_1_a_field_info[] = {
+	{"RAM_DBE_MSK_1_A",
+		RAM_DBE_MSK_1_A_MASK},
+};
+
+
+static struct regfield_info
+	ram_dbe_sts_1_a_field_info[] = {
+	{"RAM_DBE_STS_1_A_RSVD",
+		RAM_DBE_STS_1_A_RSVD_MASK},
+	{"RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_1",
+		RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_1_MASK},
+	{"RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_0",
+		RAM_DBE_STS_1_A_PFCH_CTXT_CAM_RAM_0_MASK},
+	{"RAM_DBE_STS_1_A_TAG_EVEN_RAM",
+		RAM_DBE_STS_1_A_TAG_EVEN_RAM_MASK},
+	{"RAM_DBE_STS_1_A_TAG_ODD_RAM",
+		RAM_DBE_STS_1_A_TAG_ODD_RAM_MASK},
+	{"RAM_DBE_STS_1_A_RC_RRQ_EVEN_RAM",
+		RAM_DBE_STS_1_A_RC_RRQ_EVEN_RAM_MASK},
+};
+
+
+static struct regfield_info
+	ram_sbe_msk_a_field_info[] = {
+	{"RAM_SBE_MSK_A",
+		RAM_SBE_MSK_A_MASK},
+};
+
+
+static struct regfield_info
+	ram_sbe_sts_a_field_info[] = {
+	{"RAM_SBE_STS_A_RC_RRQ_ODD_RAM",
+		RAM_SBE_STS_A_RC_RRQ_ODD_RAM_MASK},
+	{"RAM_SBE_STS_A_PEND_FIFO_RAM",
+		RAM_SBE_STS_A_PEND_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_PFCH_LL_RAM",
+		RAM_SBE_STS_A_PFCH_LL_RAM_MASK},
+	{"RAM_SBE_STS_A_WRB_CTXT_RAM",
+		RAM_SBE_STS_A_WRB_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_PFCH_CTXT_RAM",
+		RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_DESC_REQ_FIFO_RAM",
+		RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_INT_CTXT_RAM",
+		RAM_SBE_STS_A_INT_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_WRB_COAL_DATA_RAM",
+		RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK},
+	{"RAM_SBE_STS_A_QID_FIFO_RAM",
+		RAM_SBE_STS_A_QID_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_TIMER_FIFO_RAM",
+		RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_MI_TL_SLV_FIFO_RAM",
+		RAM_SBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_DSC_CPLD",
+		RAM_SBE_STS_A_DSC_CPLD_MASK},
+	{"RAM_SBE_STS_A_DSC_CPLI",
+		RAM_SBE_STS_A_DSC_CPLI_MASK},
+	{"RAM_SBE_STS_A_DSC_SW_CTXT",
+		RAM_SBE_STS_A_DSC_SW_CTXT_MASK},
+	{"RAM_SBE_STS_A_DSC_CRD_RCV",
+		RAM_SBE_STS_A_DSC_CRD_RCV_MASK},
+	{"RAM_SBE_STS_A_DSC_HW_CTXT",
+		RAM_SBE_STS_A_DSC_HW_CTXT_MASK},
+	{"RAM_SBE_STS_A_FUNC_MAP",
+		RAM_SBE_STS_A_FUNC_MAP_MASK},
+	{"RAM_SBE_STS_A_C2H_WR_BRG_DAT",
+		RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_C2H_RD_BRG_DAT",
+		RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_H2C_WR_BRG_DAT",
+		RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_H2C_RD_BRG_DAT",
+		RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_C2H3_DAT",
+		RAM_SBE_STS_A_MI_C2H3_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_C2H2_DAT",
+		RAM_SBE_STS_A_MI_C2H2_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_C2H1_DAT",
+		RAM_SBE_STS_A_MI_C2H1_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_C2H0_DAT",
+		RAM_SBE_STS_A_MI_C2H0_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_H2C3_DAT",
+		RAM_SBE_STS_A_MI_H2C3_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_H2C2_DAT",
+		RAM_SBE_STS_A_MI_H2C2_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_H2C1_DAT",
+		RAM_SBE_STS_A_MI_H2C1_DAT_MASK},
+	{"RAM_SBE_STS_A_MI_H2C0_DAT",
+		RAM_SBE_STS_A_MI_H2C0_DAT_MASK},
+};
+
+
+static struct regfield_info
+	ram_dbe_msk_a_field_info[] = {
+	{"RAM_DBE_MSK_A",
+		RAM_DBE_MSK_A_MASK},
+};
+
+
+static struct regfield_info
+	ram_dbe_sts_a_field_info[] = {
+	{"RAM_DBE_STS_A_RC_RRQ_ODD_RAM",
+		RAM_DBE_STS_A_RC_RRQ_ODD_RAM_MASK},
+	{"RAM_DBE_STS_A_PEND_FIFO_RAM",
+		RAM_DBE_STS_A_PEND_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_PFCH_LL_RAM",
+		RAM_DBE_STS_A_PFCH_LL_RAM_MASK},
+	{"RAM_DBE_STS_A_WRB_CTXT_RAM",
+		RAM_DBE_STS_A_WRB_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_PFCH_CTXT_RAM",
+		RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_DESC_REQ_FIFO_RAM",
+		RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_INT_CTXT_RAM",
+		RAM_DBE_STS_A_INT_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_WRB_COAL_DATA_RAM",
+		RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK},
+	{"RAM_DBE_STS_A_QID_FIFO_RAM",
+		RAM_DBE_STS_A_QID_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_TIMER_FIFO_RAM",
+		RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_MI_TL_SLV_FIFO_RAM",
+		RAM_DBE_STS_A_MI_TL_SLV_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_DSC_CPLD",
+		RAM_DBE_STS_A_DSC_CPLD_MASK},
+	{"RAM_DBE_STS_A_DSC_CPLI",
+		RAM_DBE_STS_A_DSC_CPLI_MASK},
+	{"RAM_DBE_STS_A_DSC_SW_CTXT",
+		RAM_DBE_STS_A_DSC_SW_CTXT_MASK},
+	{"RAM_DBE_STS_A_DSC_CRD_RCV",
+		RAM_DBE_STS_A_DSC_CRD_RCV_MASK},
+	{"RAM_DBE_STS_A_DSC_HW_CTXT",
+		RAM_DBE_STS_A_DSC_HW_CTXT_MASK},
+	{"RAM_DBE_STS_A_FUNC_MAP",
+		RAM_DBE_STS_A_FUNC_MAP_MASK},
+	{"RAM_DBE_STS_A_C2H_WR_BRG_DAT",
+		RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_C2H_RD_BRG_DAT",
+		RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_H2C_WR_BRG_DAT",
+		RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_H2C_RD_BRG_DAT",
+		RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_C2H3_DAT",
+		RAM_DBE_STS_A_MI_C2H3_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_C2H2_DAT",
+		RAM_DBE_STS_A_MI_C2H2_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_C2H1_DAT",
+		RAM_DBE_STS_A_MI_C2H1_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_C2H0_DAT",
+		RAM_DBE_STS_A_MI_C2H0_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_H2C3_DAT",
+		RAM_DBE_STS_A_MI_H2C3_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_H2C2_DAT",
+		RAM_DBE_STS_A_MI_H2C2_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_H2C1_DAT",
+		RAM_DBE_STS_A_MI_H2C1_DAT_MASK},
+	{"RAM_DBE_STS_A_MI_H2C0_DAT",
+		RAM_DBE_STS_A_MI_H2C0_DAT_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_identifier_field_info[] = {
+	{"GLBL2_IDENTIFIER",
+		GLBL2_IDENTIFIER_MASK},
+	{"GLBL2_IDENTIFIER_VERSION",
+		GLBL2_IDENTIFIER_VERSION_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_inst_field_info[] = {
+	{"GLBL2_CHANNEL_INST_RSVD_1",
+		GLBL2_CHANNEL_INST_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_INST_C2H_ST",
+		GLBL2_CHANNEL_INST_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_INST_H2C_ST",
+		GLBL2_CHANNEL_INST_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_INST_RSVD_2",
+		GLBL2_CHANNEL_INST_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_INST_C2H_ENG",
+		GLBL2_CHANNEL_INST_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_INST_RSVD_3",
+		GLBL2_CHANNEL_INST_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_INST_H2C_ENG",
+		GLBL2_CHANNEL_INST_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_mdma_field_info[] = {
+	{"GLBL2_CHANNEL_MDMA_RSVD_1",
+		GLBL2_CHANNEL_MDMA_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_MDMA_C2H_ST",
+		GLBL2_CHANNEL_MDMA_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_MDMA_H2C_ST",
+		GLBL2_CHANNEL_MDMA_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_MDMA_RSVD_2",
+		GLBL2_CHANNEL_MDMA_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_MDMA_C2H_ENG",
+		GLBL2_CHANNEL_MDMA_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_MDMA_RSVD_3",
+		GLBL2_CHANNEL_MDMA_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_MDMA_H2C_ENG",
+		GLBL2_CHANNEL_MDMA_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_strm_field_info[] = {
+	{"GLBL2_CHANNEL_STRM_RSVD_1",
+		GLBL2_CHANNEL_STRM_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_STRM_C2H_ST",
+		GLBL2_CHANNEL_STRM_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_STRM_H2C_ST",
+		GLBL2_CHANNEL_STRM_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_STRM_RSVD_2",
+		GLBL2_CHANNEL_STRM_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_STRM_C2H_ENG",
+		GLBL2_CHANNEL_STRM_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_STRM_RSVD_3",
+		GLBL2_CHANNEL_STRM_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_STRM_H2C_ENG",
+		GLBL2_CHANNEL_STRM_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_cap_field_info[] = {
+	{"GLBL2_CHANNEL_CAP_RSVD_1",
+		GLBL2_CHANNEL_CAP_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_CAP_MULTIQ_MAX",
+		GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_pasid_cap_field_info[] = {
+	{"GLBL2_CHANNEL_PASID_CAP_RSVD_1",
+		GLBL2_CHANNEL_PASID_CAP_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_BRIDGEEN",
+		GLBL2_CHANNEL_PASID_CAP_BRIDGEEN_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_DMAEN",
+		GLBL2_CHANNEL_PASID_CAP_DMAEN_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_system_id_field_info[] = {
+	{"GLBL2_SYSTEM_ID_RSVD_1",
+		GLBL2_SYSTEM_ID_RSVD_1_MASK},
+	{"GLBL2_SYSTEM_ID",
+		GLBL2_SYSTEM_ID_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_misc_cap_field_info[] = {
+	{"GLBL2_MISC_CAP",
+		GLBL2_MISC_CAP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_pcie_rq0_field_info[] = {
+	{"GLBL2_PCIE_RQ0_NPH_AVL",
+		GLBL2_PCIE_RQ0_NPH_AVL_MASK},
+	{"GLBL2_PCIE_RQ0_RCB_AVL",
+		GLBL2_PCIE_RQ0_RCB_AVL_MASK},
+	{"GLBL2_PCIE_RQ0_SLV_RD_CREDS",
+		GLBL2_PCIE_RQ0_SLV_RD_CREDS_MASK},
+	{"GLBL2_PCIE_RQ0_TAG_EP",
+		GLBL2_PCIE_RQ0_TAG_EP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_pcie_rq1_field_info[] = {
+	{"GLBL2_PCIE_RQ1_RSVD_1",
+		GLBL2_PCIE_RQ1_RSVD_1_MASK},
+	{"GLBL2_PCIE_RQ1_TAG_FL",
+		GLBL2_PCIE_RQ1_TAG_FL_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL",
+		GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP",
+		GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP_MASK},
+	{"GLBL2_PCIE_RQ1_RQ_FIFO_EP",
+		GLBL2_PCIE_RQ1_RQ_FIFO_EP_MASK},
+	{"GLBL2_PCIE_RQ1_RQ_FIFO_FL",
+		GLBL2_PCIE_RQ1_RQ_FIFO_FL_MASK},
+	{"GLBL2_PCIE_RQ1_TLPSM",
+		GLBL2_PCIE_RQ1_TLPSM_MASK},
+	{"GLBL2_PCIE_RQ1_TLPSM512",
+		GLBL2_PCIE_RQ1_TLPSM512_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ_RCB_OK",
+		GLBL2_PCIE_RQ1_RREQ_RCB_OK_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_SLV",
+		GLBL2_PCIE_RQ1_RREQ0_SLV_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_VLD",
+		GLBL2_PCIE_RQ1_RREQ0_VLD_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_RDY",
+		GLBL2_PCIE_RQ1_RREQ0_RDY_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_SLV",
+		GLBL2_PCIE_RQ1_RREQ1_SLV_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_VLD",
+		GLBL2_PCIE_RQ1_RREQ1_VLD_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_RDY",
+		GLBL2_PCIE_RQ1_RREQ1_RDY_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_REQ",
+		GLBL2_PCIE_RQ1_WTLP_REQ_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_STRADDLE",
+		GLBL2_PCIE_RQ1_WTLP_STRADDLE_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_wr0_field_info[] = {
+	{"GLBL2_AXIMM_WR0_RSVD_1",
+		GLBL2_AXIMM_WR0_RSVD_1_MASK},
+	{"GLBL2_AXIMM_WR0_WR_REQ",
+		GLBL2_AXIMM_WR0_WR_REQ_MASK},
+	{"GLBL2_AXIMM_WR0_WR_CHN",
+		GLBL2_AXIMM_WR0_WR_CHN_MASK},
+	{"GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP",
+		GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP_MASK},
+	{"GLBL2_AXIMM_WR0_WPL_FIFO_EP",
+		GLBL2_AXIMM_WR0_WPL_FIFO_EP_MASK},
+	{"GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN",
+		GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN_MASK},
+	{"GLBL2_AXIMM_WR0_WRREQ_CNT",
+		GLBL2_AXIMM_WR0_WRREQ_CNT_MASK},
+	{"GLBL2_AXIMM_WR0_BID",
+		GLBL2_AXIMM_WR0_BID_MASK},
+	{"GLBL2_AXIMM_WR0_BVALID",
+		GLBL2_AXIMM_WR0_BVALID_MASK},
+	{"GLBL2_AXIMM_WR0_BREADY",
+		GLBL2_AXIMM_WR0_BREADY_MASK},
+	{"GLBL2_AXIMM_WR0_WVALID",
+		GLBL2_AXIMM_WR0_WVALID_MASK},
+	{"GLBL2_AXIMM_WR0_WREADY",
+		GLBL2_AXIMM_WR0_WREADY_MASK},
+	{"GLBL2_AXIMM_WR0_AWID",
+		GLBL2_AXIMM_WR0_AWID_MASK},
+	{"GLBL2_AXIMM_WR0_AWVALID",
+		GLBL2_AXIMM_WR0_AWVALID_MASK},
+	{"GLBL2_AXIMM_WR0_AWREADY",
+		GLBL2_AXIMM_WR0_AWREADY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_wr1_field_info[] = {
+	{"GLBL2_AXIMM_WR1_RSVD_1",
+		GLBL2_AXIMM_WR1_RSVD_1_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT4",
+		GLBL2_AXIMM_WR1_BRSP_CNT4_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT3",
+		GLBL2_AXIMM_WR1_BRSP_CNT3_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT2",
+		GLBL2_AXIMM_WR1_BRSP_CNT2_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT1",
+		GLBL2_AXIMM_WR1_BRSP_CNT1_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT0",
+		GLBL2_AXIMM_WR1_BRSP_CNT0_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_rd0_field_info[] = {
+	{"GLBL2_AXIMM_RD0_RSVD_1",
+		GLBL2_AXIMM_RD0_RSVD_1_MASK},
+	{"GLBL2_AXIMM_RD0_PND_CNT",
+		GLBL2_AXIMM_RD0_PND_CNT_MASK},
+	{"GLBL2_AXIMM_RD0_RD_REQ",
+		GLBL2_AXIMM_RD0_RD_REQ_MASK},
+	{"GLBL2_AXIMM_RD0_RD_CHNL",
+		GLBL2_AXIMM_RD0_RD_CHNL_MASK},
+	{"GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL",
+		GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL_MASK},
+	{"GLBL2_AXIMM_RD0_RID",
+		GLBL2_AXIMM_RD0_RID_MASK},
+	{"GLBL2_AXIMM_RD0_RVALID",
+		GLBL2_AXIMM_RD0_RVALID_MASK},
+	{"GLBL2_AXIMM_RD0_RREADY",
+		GLBL2_AXIMM_RD0_RREADY_MASK},
+	{"GLBL2_AXIMM_RD0_ARID",
+		GLBL2_AXIMM_RD0_ARID_MASK},
+	{"GLBL2_AXIMM_RD0_ARVALID",
+		GLBL2_AXIMM_RD0_ARVALID_MASK},
+	{"GLBL2_AXIMM_RD0_ARREADY",
+		GLBL2_AXIMM_RD0_ARREADY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_rd1_field_info[] = {
+	{"GLBL2_AXIMM_RD1_RSVD_1",
+		GLBL2_AXIMM_RD1_RSVD_1_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT4",
+		GLBL2_AXIMM_RD1_RRSP_CNT4_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT3",
+		GLBL2_AXIMM_RD1_RRSP_CNT3_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT2",
+		GLBL2_AXIMM_RD1_RRSP_CNT2_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT1",
+		GLBL2_AXIMM_RD1_RRSP_CNT1_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT0",
+		GLBL2_AXIMM_RD1_RRSP_CNT0_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_fab0_field_info[] = {
+	{"GLBL2_FAB0_H2C_INB_CONV_IN_VLD",
+		GLBL2_FAB0_H2C_INB_CONV_IN_VLD_MASK},
+	{"GLBL2_FAB0_H2C_INB_CONV_IN_RDY",
+		GLBL2_FAB0_H2C_INB_CONV_IN_RDY_MASK},
+	{"GLBL2_FAB0_H2C_SEG_IN_VLD",
+		GLBL2_FAB0_H2C_SEG_IN_VLD_MASK},
+	{"GLBL2_FAB0_H2C_SEG_IN_RDY",
+		GLBL2_FAB0_H2C_SEG_IN_RDY_MASK},
+	{"GLBL2_FAB0_H2C_SEG_OUT_VLD",
+		GLBL2_FAB0_H2C_SEG_OUT_VLD_MASK},
+	{"GLBL2_FAB0_H2C_SEG_OUT_RDY",
+		GLBL2_FAB0_H2C_SEG_OUT_RDY_MASK},
+	{"GLBL2_FAB0_H2C_MST_CRDT_STAT",
+		GLBL2_FAB0_H2C_MST_CRDT_STAT_MASK},
+	{"GLBL2_FAB0_C2H_SLV_AFIFO_FULL",
+		GLBL2_FAB0_C2H_SLV_AFIFO_FULL_MASK},
+	{"GLBL2_FAB0_C2H_SLV_AFIFO_EMPTY",
+		GLBL2_FAB0_C2H_SLV_AFIFO_EMPTY_MASK},
+	{"GLBL2_FAB0_C2H_DESEG_SEG_VLD",
+		GLBL2_FAB0_C2H_DESEG_SEG_VLD_MASK},
+	{"GLBL2_FAB0_C2H_DESEG_SEG_RDY",
+		GLBL2_FAB0_C2H_DESEG_SEG_RDY_MASK},
+	{"GLBL2_FAB0_C2H_DESEG_OUT_VLD",
+		GLBL2_FAB0_C2H_DESEG_OUT_VLD_MASK},
+	{"GLBL2_FAB0_C2H_DESEG_OUT_RDY",
+		GLBL2_FAB0_C2H_DESEG_OUT_RDY_MASK},
+	{"GLBL2_FAB0_C2H_INB_DECONV_OUT_VLD",
+		GLBL2_FAB0_C2H_INB_DECONV_OUT_VLD_MASK},
+	{"GLBL2_FAB0_C2H_INB_DECONV_OUT_RDY",
+		GLBL2_FAB0_C2H_INB_DECONV_OUT_RDY_MASK},
+	{"GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_FULL",
+		GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_FULL_MASK},
+	{"GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_EMPTY",
+		GLBL2_FAB0_C2H_DSC_CRDT_AFIFO_EMPTY_MASK},
+	{"GLBL2_FAB0_IRQ_IN_AFIFO_FULL",
+		GLBL2_FAB0_IRQ_IN_AFIFO_FULL_MASK},
+	{"GLBL2_FAB0_IRQ_IN_AFIFO_EMPTY",
+		GLBL2_FAB0_IRQ_IN_AFIFO_EMPTY_MASK},
+	{"GLBL2_FAB0_IMM_CRD_AFIFO_EMPTY",
+		GLBL2_FAB0_IMM_CRD_AFIFO_EMPTY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_fab1_field_info[] = {
+	{"GLBL2_FAB1_BYP_OUT_CRDT_STAT",
+		GLBL2_FAB1_BYP_OUT_CRDT_STAT_MASK},
+	{"GLBL2_FAB1_TM_DSC_STS_CRDT_STAT",
+		GLBL2_FAB1_TM_DSC_STS_CRDT_STAT_MASK},
+	{"GLBL2_FAB1_C2H_CMN_AFIFO_FULL",
+		GLBL2_FAB1_C2H_CMN_AFIFO_FULL_MASK},
+	{"GLBL2_FAB1_C2H_CMN_AFIFO_EMPTY",
+		GLBL2_FAB1_C2H_CMN_AFIFO_EMPTY_MASK},
+	{"GLBL2_FAB1_RSVD_1",
+		GLBL2_FAB1_RSVD_1_MASK},
+	{"GLBL2_FAB1_C2H_BYP_IN_AFIFO_FULL",
+		GLBL2_FAB1_C2H_BYP_IN_AFIFO_FULL_MASK},
+	{"GLBL2_FAB1_RSVD_2",
+		GLBL2_FAB1_RSVD_2_MASK},
+	{"GLBL2_FAB1_C2H_BYP_IN_AFIFO_EMPTY",
+		GLBL2_FAB1_C2H_BYP_IN_AFIFO_EMPTY_MASK},
+	{"GLBL2_FAB1_RSVD_3",
+		GLBL2_FAB1_RSVD_3_MASK},
+	{"GLBL2_FAB1_H2C_BYP_IN_AFIFO_FULL",
+		GLBL2_FAB1_H2C_BYP_IN_AFIFO_FULL_MASK},
+	{"GLBL2_FAB1_RSVD_4",
+		GLBL2_FAB1_RSVD_4_MASK},
+	{"GLBL2_FAB1_H2C_BYP_IN_AFIFO_EMPTY",
+		GLBL2_FAB1_H2C_BYP_IN_AFIFO_EMPTY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_match_sel_field_info[] = {
+	{"GLBL2_MATCH_SEL_RSV",
+		GLBL2_MATCH_SEL_RSV_MASK},
+	{"GLBL2_MATCH_SEL_CSR_SEL",
+		GLBL2_MATCH_SEL_CSR_SEL_MASK},
+	{"GLBL2_MATCH_SEL_CSR_EN",
+		GLBL2_MATCH_SEL_CSR_EN_MASK},
+	{"GLBL2_MATCH_SEL_ROTATE1",
+		GLBL2_MATCH_SEL_ROTATE1_MASK},
+	{"GLBL2_MATCH_SEL_ROTATE0",
+		GLBL2_MATCH_SEL_ROTATE0_MASK},
+	{"GLBL2_MATCH_SEL_SEL",
+		GLBL2_MATCH_SEL_SEL_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_match_msk_field_info[] = {
+	{"GLBL2_MATCH_MSK",
+		GLBL2_MATCH_MSK_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_match_pat_field_info[] = {
+	{"GLBL2_MATCH_PAT_PATTERN",
+		GLBL2_MATCH_PAT_PATTERN_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_1_field_info[] = {
+	{"GLBL_RNG_SZ_1_RSVD_1",
+		GLBL_RNG_SZ_1_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_1_RING_SIZE",
+		GLBL_RNG_SZ_1_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_2_field_info[] = {
+	{"GLBL_RNG_SZ_2_RSVD_1",
+		GLBL_RNG_SZ_2_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_2_RING_SIZE",
+		GLBL_RNG_SZ_2_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_3_field_info[] = {
+	{"GLBL_RNG_SZ_3_RSVD_1",
+		GLBL_RNG_SZ_3_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_3_RING_SIZE",
+		GLBL_RNG_SZ_3_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_4_field_info[] = {
+	{"GLBL_RNG_SZ_4_RSVD_1",
+		GLBL_RNG_SZ_4_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_4_RING_SIZE",
+		GLBL_RNG_SZ_4_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_5_field_info[] = {
+	{"GLBL_RNG_SZ_5_RSVD_1",
+		GLBL_RNG_SZ_5_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_5_RING_SIZE",
+		GLBL_RNG_SZ_5_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_6_field_info[] = {
+	{"GLBL_RNG_SZ_6_RSVD_1",
+		GLBL_RNG_SZ_6_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_6_RING_SIZE",
+		GLBL_RNG_SZ_6_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_7_field_info[] = {
+	{"GLBL_RNG_SZ_7_RSVD_1",
+		GLBL_RNG_SZ_7_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_7_RING_SIZE",
+		GLBL_RNG_SZ_7_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_8_field_info[] = {
+	{"GLBL_RNG_SZ_8_RSVD_1",
+		GLBL_RNG_SZ_8_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_8_RING_SIZE",
+		GLBL_RNG_SZ_8_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_9_field_info[] = {
+	{"GLBL_RNG_SZ_9_RSVD_1",
+		GLBL_RNG_SZ_9_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_9_RING_SIZE",
+		GLBL_RNG_SZ_9_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_a_field_info[] = {
+	{"GLBL_RNG_SZ_A_RSVD_1",
+		GLBL_RNG_SZ_A_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_A_RING_SIZE",
+		GLBL_RNG_SZ_A_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_b_field_info[] = {
+	{"GLBL_RNG_SZ_B_RSVD_1",
+		GLBL_RNG_SZ_B_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_B_RING_SIZE",
+		GLBL_RNG_SZ_B_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_c_field_info[] = {
+	{"GLBL_RNG_SZ_C_RSVD_1",
+		GLBL_RNG_SZ_C_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_C_RING_SIZE",
+		GLBL_RNG_SZ_C_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_d_field_info[] = {
+	{"GLBL_RNG_SZ_D_RSVD_1",
+		GLBL_RNG_SZ_D_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_D_RING_SIZE",
+		GLBL_RNG_SZ_D_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_e_field_info[] = {
+	{"GLBL_RNG_SZ_E_RSVD_1",
+		GLBL_RNG_SZ_E_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_E_RING_SIZE",
+		GLBL_RNG_SZ_E_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_f_field_info[] = {
+	{"GLBL_RNG_SZ_F_RSVD_1",
+		GLBL_RNG_SZ_F_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_F_RING_SIZE",
+		GLBL_RNG_SZ_F_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_10_field_info[] = {
+	{"GLBL_RNG_SZ_10_RSVD_1",
+		GLBL_RNG_SZ_10_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_10_RING_SIZE",
+		GLBL_RNG_SZ_10_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_err_stat_field_info[] = {
+	{"GLBL_ERR_STAT_RSVD_1",
+		GLBL_ERR_STAT_RSVD_1_MASK},
+	{"GLBL_ERR_STAT_ERR_FAB",
+		GLBL_ERR_STAT_ERR_FAB_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_ST",
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK},
+	{"GLBL_ERR_STAT_ERR_BDG",
+		GLBL_ERR_STAT_ERR_BDG_MASK},
+	{"GLBL_ERR_STAT_IND_CTXT_CMD_ERR",
+		GLBL_ERR_STAT_IND_CTXT_CMD_ERR_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_ST",
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_MM_1",
+		GLBL_ERR_STAT_ERR_C2H_MM_1_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_MM_0",
+		GLBL_ERR_STAT_ERR_C2H_MM_0_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_MM_1",
+		GLBL_ERR_STAT_ERR_H2C_MM_1_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_MM_0",
+		GLBL_ERR_STAT_ERR_H2C_MM_0_MASK},
+	{"GLBL_ERR_STAT_ERR_TRQ",
+		GLBL_ERR_STAT_ERR_TRQ_MASK},
+	{"GLBL_ERR_STAT_ERR_DSC",
+		GLBL_ERR_STAT_ERR_DSC_MASK},
+	{"GLBL_ERR_STAT_ERR_RAM_DBE",
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK},
+	{"GLBL_ERR_STAT_ERR_RAM_SBE",
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_err_mask_field_info[] = {
+	{"GLBL_ERR",
+		GLBL_ERR_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_cfg_field_info[] = {
+	{"GLBL_DSC_CFG_RSVD_1",
+		GLBL_DSC_CFG_RSVD_1_MASK},
+	{"GLBL_DSC_CFG_UNC_OVR_COR",
+		GLBL_DSC_CFG_UNC_OVR_COR_MASK},
+	{"GLBL_DSC_CFG_CTXT_FER_DIS",
+		GLBL_DSC_CFG_CTXT_FER_DIS_MASK},
+	{"GLBL_DSC_CFG_RSVD_2",
+		GLBL_DSC_CFG_RSVD_2_MASK},
+	{"GLBL_DSC_CFG_MAXFETCH",
+		GLBL_DSC_CFG_MAXFETCH_MASK},
+	{"GLBL_DSC_CFG_WB_ACC_INT",
+		GLBL_DSC_CFG_WB_ACC_INT_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_sts_field_info[] = {
+	{"GLBL_DSC_ERR_STS_RSVD_1",
+		GLBL_DSC_ERR_STS_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_STS_PORT_ID",
+		GLBL_DSC_ERR_STS_PORT_ID_MASK},
+	{"GLBL_DSC_ERR_STS_SBE",
+		GLBL_DSC_ERR_STS_SBE_MASK},
+	{"GLBL_DSC_ERR_STS_DBE",
+		GLBL_DSC_ERR_STS_DBE_MASK},
+	{"GLBL_DSC_ERR_STS_RQ_CANCEL",
+		GLBL_DSC_ERR_STS_RQ_CANCEL_MASK},
+	{"GLBL_DSC_ERR_STS_DSC",
+		GLBL_DSC_ERR_STS_DSC_MASK},
+	{"GLBL_DSC_ERR_STS_DMA",
+		GLBL_DSC_ERR_STS_DMA_MASK},
+	{"GLBL_DSC_ERR_STS_FLR_CANCEL",
+		GLBL_DSC_ERR_STS_FLR_CANCEL_MASK},
+	{"GLBL_DSC_ERR_STS_RSVD_2",
+		GLBL_DSC_ERR_STS_RSVD_2_MASK},
+	{"GLBL_DSC_ERR_STS_DAT_POISON",
+		GLBL_DSC_ERR_STS_DAT_POISON_MASK},
+	{"GLBL_DSC_ERR_STS_TIMEOUT",
+		GLBL_DSC_ERR_STS_TIMEOUT_MASK},
+	{"GLBL_DSC_ERR_STS_FLR",
+		GLBL_DSC_ERR_STS_FLR_MASK},
+	{"GLBL_DSC_ERR_STS_TAG",
+		GLBL_DSC_ERR_STS_TAG_MASK},
+	{"GLBL_DSC_ERR_STS_ADDR",
+		GLBL_DSC_ERR_STS_ADDR_MASK},
+	{"GLBL_DSC_ERR_STS_PARAM",
+		GLBL_DSC_ERR_STS_PARAM_MASK},
+	{"GLBL_DSC_ERR_STS_BCNT",
+		GLBL_DSC_ERR_STS_BCNT_MASK},
+	{"GLBL_DSC_ERR_STS_UR_CA",
+		GLBL_DSC_ERR_STS_UR_CA_MASK},
+	{"GLBL_DSC_ERR_STS_POISON",
+		GLBL_DSC_ERR_STS_POISON_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_msk_field_info[] = {
+	{"GLBL_DSC_ERR_MSK",
+		GLBL_DSC_ERR_MSK_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_log0_field_info[] = {
+	{"GLBL_DSC_ERR_LOG0_VALID",
+		GLBL_DSC_ERR_LOG0_VALID_MASK},
+	{"GLBL_DSC_ERR_LOG0_SEL",
+		GLBL_DSC_ERR_LOG0_SEL_MASK},
+	{"GLBL_DSC_ERR_LOG0_RSVD_1",
+		GLBL_DSC_ERR_LOG0_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_LOG0_QID",
+		GLBL_DSC_ERR_LOG0_QID_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_log1_field_info[] = {
+	{"GLBL_DSC_ERR_LOG1_RSVD_1",
+		GLBL_DSC_ERR_LOG1_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_LOG1_CIDX",
+		GLBL_DSC_ERR_LOG1_CIDX_MASK},
+	{"GLBL_DSC_ERR_LOG1_RSVD_2",
+		GLBL_DSC_ERR_LOG1_RSVD_2_MASK},
+	{"GLBL_DSC_ERR_LOG1_SUB_TYPE",
+		GLBL_DSC_ERR_LOG1_SUB_TYPE_MASK},
+	{"GLBL_DSC_ERR_LOG1_ERR_TYPE",
+		GLBL_DSC_ERR_LOG1_ERR_TYPE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_sts_field_info[] = {
+	{"GLBL_TRQ_ERR_STS_RSVD_1",
+		GLBL_TRQ_ERR_STS_RSVD_1_MASK},
+	{"GLBL_TRQ_ERR_STS_TCP_QSPC_TIMEOUT",
+		GLBL_TRQ_ERR_STS_TCP_QSPC_TIMEOUT_MASK},
+	{"GLBL_TRQ_ERR_STS_RSVD_2",
+		GLBL_TRQ_ERR_STS_RSVD_2_MASK},
+	{"GLBL_TRQ_ERR_STS_QID_RANGE",
+		GLBL_TRQ_ERR_STS_QID_RANGE_MASK},
+	{"GLBL_TRQ_ERR_STS_QSPC_UNMAPPED",
+		GLBL_TRQ_ERR_STS_QSPC_UNMAPPED_MASK},
+	{"GLBL_TRQ_ERR_STS_TCP_CSR_TIMEOUT",
+		GLBL_TRQ_ERR_STS_TCP_CSR_TIMEOUT_MASK},
+	{"GLBL_TRQ_ERR_STS_RSVD_3",
+		GLBL_TRQ_ERR_STS_RSVD_3_MASK},
+	{"GLBL_TRQ_ERR_STS_VF_ACCESS_ERR",
+		GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK},
+	{"GLBL_TRQ_ERR_STS_CSR_UNMAPPED",
+		GLBL_TRQ_ERR_STS_CSR_UNMAPPED_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_msk_field_info[] = {
+	{"GLBL_TRQ_ERR_MSK",
+		GLBL_TRQ_ERR_MSK_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_log_field_info[] = {
+	{"GLBL_TRQ_ERR_LOG_SRC",
+		GLBL_TRQ_ERR_LOG_SRC_MASK},
+	{"GLBL_TRQ_ERR_LOG_TARGET",
+		GLBL_TRQ_ERR_LOG_TARGET_MASK},
+	{"GLBL_TRQ_ERR_LOG_FUNC",
+		GLBL_TRQ_ERR_LOG_FUNC_MASK},
+	{"GLBL_TRQ_ERR_LOG_ADDRESS",
+		GLBL_TRQ_ERR_LOG_ADDRESS_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_dbg_dat0_field_info[] = {
+	{"GLBL_DSC_DAT0_RSVD_1",
+		GLBL_DSC_DAT0_RSVD_1_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_DIR",
+		GLBL_DSC_DAT0_CTXT_ARB_DIR_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_QID",
+		GLBL_DSC_DAT0_CTXT_ARB_QID_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_REQ",
+		GLBL_DSC_DAT0_CTXT_ARB_REQ_MASK},
+	{"GLBL_DSC_DAT0_IRQ_FIFO_FL",
+		GLBL_DSC_DAT0_IRQ_FIFO_FL_MASK},
+	{"GLBL_DSC_DAT0_TMSTALL",
+		GLBL_DSC_DAT0_TMSTALL_MASK},
+	{"GLBL_DSC_DAT0_RRQ_STALL",
+		GLBL_DSC_DAT0_RRQ_STALL_MASK},
+	{"GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL",
+		GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL_MASK},
+	{"GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL",
+		GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL_MASK},
+	{"GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL",
+		GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL_MASK},
+	{"GLBL_DSC_DAT0_DSC_OUT_STALL",
+		GLBL_DSC_DAT0_DSC_OUT_STALL_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_dbg_dat1_field_info[] = {
+	{"GLBL_DSC_DAT1_RSVD_1",
+		GLBL_DSC_DAT1_RSVD_1_MASK},
+	{"GLBL_DSC_DAT1_EVT_SPC_C2H",
+		GLBL_DSC_DAT1_EVT_SPC_C2H_MASK},
+	{"GLBL_DSC_DAT1_EVT_SP_H2C",
+		GLBL_DSC_DAT1_EVT_SP_H2C_MASK},
+	{"GLBL_DSC_DAT1_DSC_SPC_C2H",
+		GLBL_DSC_DAT1_DSC_SPC_C2H_MASK},
+	{"GLBL_DSC_DAT1_DSC_SPC_H2C",
+		GLBL_DSC_DAT1_DSC_SPC_H2C_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_dbg_ctl_field_info[] = {
+	{"GLBL_DSC_CTL_RSVD_1",
+		GLBL_DSC_CTL_RSVD_1_MASK},
+	{"GLBL_DSC_CTL_SELECT",
+		GLBL_DSC_CTL_SELECT_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_log2_field_info[] = {
+	{"GLBL_DSC_ERR_LOG2_OLD_PIDX",
+		GLBL_DSC_ERR_LOG2_OLD_PIDX_MASK},
+	{"GLBL_DSC_ERR_LOG2_NEW_PIDX",
+		GLBL_DSC_ERR_LOG2_NEW_PIDX_MASK},
+};
+
+
+static struct regfield_info
+	glbl_glbl_interrupt_cfg_field_info[] = {
+	{"GLBL_GLBL_INTERRUPT_CFG_RSVD_1",
+		GLBL_GLBL_INTERRUPT_CFG_RSVD_1_MASK},
+	{"GLBL_GLBL_INTERRUPT_CFG_LGCY_INTR_PENDING",
+		GLBL_GLBL_INTERRUPT_CFG_LGCY_INTR_PENDING_MASK},
+	{"GLBL_GLBL_INTERRUPT_CFG_EN_LGCY_INTR",
+		GLBL_GLBL_INTERRUPT_CFG_EN_LGCY_INTR_MASK},
+};
+
+
+static struct regfield_info
+	glbl_vch_host_profile_field_info[] = {
+	{"GLBL_VCH_HOST_PROFILE_RSVD_1",
+		GLBL_VCH_HOST_PROFILE_RSVD_1_MASK},
+	{"GLBL_VCH_HOST_PROFILE_2C_MM",
+		GLBL_VCH_HOST_PROFILE_2C_MM_MASK},
+	{"GLBL_VCH_HOST_PROFILE_2C_ST",
+		GLBL_VCH_HOST_PROFILE_2C_ST_MASK},
+	{"GLBL_VCH_HOST_PROFILE_VCH_DSC",
+		GLBL_VCH_HOST_PROFILE_VCH_DSC_MASK},
+	{"GLBL_VCH_HOST_PROFILE_VCH_INT_MSG",
+		GLBL_VCH_HOST_PROFILE_VCH_INT_MSG_MASK},
+	{"GLBL_VCH_HOST_PROFILE_VCH_INT_AGGR",
+		GLBL_VCH_HOST_PROFILE_VCH_INT_AGGR_MASK},
+	{"GLBL_VCH_HOST_PROFILE_VCH_CMPT",
+		GLBL_VCH_HOST_PROFILE_VCH_CMPT_MASK},
+	{"GLBL_VCH_HOST_PROFILE_VCH_C2H_PLD",
+		GLBL_VCH_HOST_PROFILE_VCH_C2H_PLD_MASK},
+};
+
+
+static struct regfield_info
+	glbl_bridge_host_profile_field_info[] = {
+	{"GLBL_BRIDGE_HOST_PROFILE_RSVD_1",
+		GLBL_BRIDGE_HOST_PROFILE_RSVD_1_MASK},
+	{"GLBL_BRIDGE_HOST_PROFILE_BDGID",
+		GLBL_BRIDGE_HOST_PROFILE_BDGID_MASK},
+};
+
+
+static struct regfield_info
+	aximm_irq_dest_addr_field_info[] = {
+	{"AXIMM_IRQ_DEST_ADDR_ADDR",
+		AXIMM_IRQ_DEST_ADDR_ADDR_MASK},
+};
+
+
+static struct regfield_info
+	fab_err_log_field_info[] = {
+	{"FAB_ERR_LOG_RSVD_1",
+		FAB_ERR_LOG_RSVD_1_MASK},
+	{"FAB_ERR_LOG_SRC",
+		FAB_ERR_LOG_SRC_MASK},
+};
+
+
+static struct regfield_info
+	glbl_req_err_sts_field_info[] = {
+	{"GLBL_REQ_ERR_STS_RSVD_1",
+		GLBL_REQ_ERR_STS_RSVD_1_MASK},
+	{"GLBL_REQ_ERR_STS_RC_DISCONTINUE",
+		GLBL_REQ_ERR_STS_RC_DISCONTINUE_MASK},
+	{"GLBL_REQ_ERR_STS_RC_PRTY",
+		GLBL_REQ_ERR_STS_RC_PRTY_MASK},
+	{"GLBL_REQ_ERR_STS_RC_FLR",
+		GLBL_REQ_ERR_STS_RC_FLR_MASK},
+	{"GLBL_REQ_ERR_STS_RC_TIMEOUT",
+		GLBL_REQ_ERR_STS_RC_TIMEOUT_MASK},
+	{"GLBL_REQ_ERR_STS_RC_INV_BCNT",
+		GLBL_REQ_ERR_STS_RC_INV_BCNT_MASK},
+	{"GLBL_REQ_ERR_STS_RC_INV_TAG",
+		GLBL_REQ_ERR_STS_RC_INV_TAG_MASK},
+	{"GLBL_REQ_ERR_STS_RC_START_ADDR_MISMCH",
+		GLBL_REQ_ERR_STS_RC_START_ADDR_MISMCH_MASK},
+	{"GLBL_REQ_ERR_STS_RC_RID_TC_ATTR_MISMCH",
+		GLBL_REQ_ERR_STS_RC_RID_TC_ATTR_MISMCH_MASK},
+	{"GLBL_REQ_ERR_STS_RC_NO_DATA",
+		GLBL_REQ_ERR_STS_RC_NO_DATA_MASK},
+	{"GLBL_REQ_ERR_STS_RC_UR_CA_CRS",
+		GLBL_REQ_ERR_STS_RC_UR_CA_CRS_MASK},
+	{"GLBL_REQ_ERR_STS_RC_POISONED",
+		GLBL_REQ_ERR_STS_RC_POISONED_MASK},
+};
+
+
+static struct regfield_info
+	glbl_req_err_msk_field_info[] = {
+	{"GLBL_REQ_ERR_MSK",
+		GLBL_REQ_ERR_MSK_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_data_field_info[] = {
+	{"IND_CTXT_DATA_DATA",
+		IND_CTXT_DATA_DATA_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_mask_field_info[] = {
+	{"IND_CTXT",
+		IND_CTXT_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_cmd_field_info[] = {
+	{"IND_CTXT_CMD_RSVD_1",
+		IND_CTXT_CMD_RSVD_1_MASK},
+	{"IND_CTXT_CMD_QID",
+		IND_CTXT_CMD_QID_MASK},
+	{"IND_CTXT_CMD_OP",
+		IND_CTXT_CMD_OP_MASK},
+	{"IND_CTXT_CMD_SEL",
+		IND_CTXT_CMD_SEL_MASK},
+	{"IND_CTXT_CMD_BUSY",
+		IND_CTXT_CMD_BUSY_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_field_info[] = {
+	{"C2H_TIMER_CNT_RSVD_1",
+		C2H_TIMER_CNT_RSVD_1_MASK},
+	{"C2H_TIMER_CNT",
+		C2H_TIMER_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_field_info[] = {
+	{"C2H_CNT_TH_RSVD_1",
+		C2H_CNT_TH_RSVD_1_MASK},
+	{"C2H_CNT_TH_THESHOLD_CNT",
+		C2H_CNT_TH_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_s_axis_c2h_accepted_field_info[] = {
+	{"C2H_STAT_S_AXIS_C2H_ACCEPTED",
+		C2H_STAT_S_AXIS_C2H_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_s_axis_wrb_accepted_field_info[] = {
+	{"C2H_STAT_S_AXIS_WRB_ACCEPTED",
+		C2H_STAT_S_AXIS_WRB_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_pkt_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_PKT_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_PKT_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_axis_pkg_cmp_field_info[] = {
+	{"C2H_STAT_AXIS_PKG_CMP",
+		C2H_STAT_AXIS_PKG_CMP_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_cmp_field_info[] = {
+	{"C2H_STAT_DESC_RSP_CMP_D",
+		C2H_STAT_DESC_RSP_CMP_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wrq_out_field_info[] = {
+	{"C2H_STAT_WRQ_OUT",
+		C2H_STAT_WRQ_OUT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wpl_ren_accepted_field_info[] = {
+	{"C2H_STAT_WPL_REN_ACCEPTED",
+		C2H_STAT_WPL_REN_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_total_wrq_len_field_info[] = {
+	{"C2H_STAT_TOTAL_WRQ_LEN",
+		C2H_STAT_TOTAL_WRQ_LEN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_total_wpl_len_field_info[] = {
+	{"C2H_STAT_TOTAL_WPL_LEN",
+		C2H_STAT_TOTAL_WPL_LEN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_field_info[] = {
+	{"C2H_BUF_SZ_IZE",
+		C2H_BUF_SZ_IZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_err_stat_field_info[] = {
+	{"C2H_ERR_STAT_RSVD_1",
+		C2H_ERR_STAT_RSVD_1_MASK},
+	{"C2H_ERR_STAT_WRB_PORT_ID_ERR",
+		C2H_ERR_STAT_WRB_PORT_ID_ERR_MASK},
+	{"C2H_ERR_STAT_HDR_PAR_ERR",
+		C2H_ERR_STAT_HDR_PAR_ERR_MASK},
+	{"C2H_ERR_STAT_HDR_ECC_COR_ERR",
+		C2H_ERR_STAT_HDR_ECC_COR_ERR_MASK},
+	{"C2H_ERR_STAT_HDR_ECC_UNC_ERR",
+		C2H_ERR_STAT_HDR_ECC_UNC_ERR_MASK},
+	{"C2H_ERR_STAT_AVL_RING_DSC_ERR",
+		C2H_ERR_STAT_AVL_RING_DSC_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_PRTY_ERR",
+		C2H_ERR_STAT_WRB_PRTY_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_CIDX_ERR",
+		C2H_ERR_STAT_WRB_CIDX_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_QFULL_ERR",
+		C2H_ERR_STAT_WRB_QFULL_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_INV_Q_ERR",
+		C2H_ERR_STAT_WRB_INV_Q_ERR_MASK},
+	{"C2H_ERR_STAT_RSVD_2",
+		C2H_ERR_STAT_RSVD_2_MASK},
+	{"C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH",
+		C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK},
+	{"C2H_ERR_STAT_ERR_DESC_CNT",
+		C2H_ERR_STAT_ERR_DESC_CNT_MASK},
+	{"C2H_ERR_STAT_RSVD_3",
+		C2H_ERR_STAT_RSVD_3_MASK},
+	{"C2H_ERR_STAT_MSI_INT_FAIL",
+		C2H_ERR_STAT_MSI_INT_FAIL_MASK},
+	{"C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR",
+		C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK},
+	{"C2H_ERR_STAT_RSVD_4",
+		C2H_ERR_STAT_RSVD_4_MASK},
+	{"C2H_ERR_STAT_DESC_RSP_ERR",
+		C2H_ERR_STAT_DESC_RSP_ERR_MASK},
+	{"C2H_ERR_STAT_QID_MISMATCH",
+		C2H_ERR_STAT_QID_MISMATCH_MASK},
+	{"C2H_ERR_STAT_SH_CMPT_DSC_ERR",
+		C2H_ERR_STAT_SH_CMPT_DSC_ERR_MASK},
+	{"C2H_ERR_STAT_LEN_MISMATCH",
+		C2H_ERR_STAT_LEN_MISMATCH_MASK},
+	{"C2H_ERR_STAT_MTY_MISMATCH",
+		C2H_ERR_STAT_MTY_MISMATCH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_err_mask_field_info[] = {
+	{"C2H_ERR_EN",
+		C2H_ERR_EN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_stat_field_info[] = {
+	{"C2H_FATAL_ERR_STAT_RSVD_1",
+		C2H_FATAL_ERR_STAT_RSVD_1_MASK},
+	{"C2H_FATAL_ERR_STAT_HDR_ECC_UNC_ERR",
+		C2H_FATAL_ERR_STAT_HDR_ECC_UNC_ERR_MASK},
+	{"C2H_FATAL_ERR_STAT_AVL_RING_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_AVL_RING_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR",
+		C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK},
+	{"C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_CMPT_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_CMPT_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_RESERVED2",
+		C2H_FATAL_ERR_STAT_RESERVED2_MASK},
+	{"C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_QID_MISMATCH",
+		C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK},
+	{"C2H_FATAL_ERR_STAT_RESERVED1",
+		C2H_FATAL_ERR_STAT_RESERVED1_MASK},
+	{"C2H_FATAL_ERR_STAT_LEN_MISMATCH",
+		C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK},
+	{"C2H_FATAL_ERR_STAT_MTY_MISMATCH",
+		C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_mask_field_info[] = {
+	{"C2H_FATAL_ERR_C2HEN",
+		C2H_FATAL_ERR_C2HEN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_enable_field_info[] = {
+	{"C2H_FATAL_ERR_ENABLE_RSVD_1",
+		C2H_FATAL_ERR_ENABLE_RSVD_1_MASK},
+	{"C2H_FATAL_ERR_ENABLE_WPL_PAR_INV",
+		C2H_FATAL_ERR_ENABLE_WPL_PAR_INV_MASK},
+	{"C2H_FATAL_ERR_ENABLE_WRQ_DIS",
+		C2H_FATAL_ERR_ENABLE_WRQ_DIS_MASK},
+};
+
+
+static struct regfield_info
+	glbl_err_int_field_info[] = {
+	{"GLBL_ERR_INT_RSVD_1",
+		GLBL_ERR_INT_RSVD_1_MASK},
+	{"GLBL_ERR_INT_HOST_ID",
+		GLBL_ERR_INT_HOST_ID_MASK},
+	{"GLBL_ERR_INT_DIS_INTR_ON_VF",
+		GLBL_ERR_INT_DIS_INTR_ON_VF_MASK},
+	{"GLBL_ERR_INT_ARM",
+		GLBL_ERR_INT_ARM_MASK},
+	{"GLBL_ERR_INT_EN_COAL",
+		GLBL_ERR_INT_EN_COAL_MASK},
+	{"GLBL_ERR_INT_VEC",
+		GLBL_ERR_INT_VEC_MASK},
+	{"GLBL_ERR_INT_FUNC",
+		GLBL_ERR_INT_FUNC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_cfg_field_info[] = {
+	{"C2H_PFCH_CFG_EVTFL_TH",
+		C2H_PFCH_CFG_EVTFL_TH_MASK},
+	{"C2H_PFCH_CFG_FL_TH",
+		C2H_PFCH_CFG_FL_TH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_cfg_1_field_info[] = {
+	{"C2H_PFCH_CFG_1_EVT_QCNT_TH",
+		C2H_PFCH_CFG_1_EVT_QCNT_TH_MASK},
+	{"C2H_PFCH_CFG_1_QCNT",
+		C2H_PFCH_CFG_1_QCNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_cfg_2_field_info[] = {
+	{"C2H_PFCH_CFG_2_FENCE",
+		C2H_PFCH_CFG_2_FENCE_MASK},
+	{"C2H_PFCH_CFG_2_RSVD",
+		C2H_PFCH_CFG_2_RSVD_MASK},
+	{"C2H_PFCH_CFG_2_VAR_DESC_NO_DROP",
+		C2H_PFCH_CFG_2_VAR_DESC_NO_DROP_MASK},
+	{"C2H_PFCH_CFG_2_LL_SZ_TH",
+		C2H_PFCH_CFG_2_LL_SZ_TH_MASK},
+	{"C2H_PFCH_CFG_2_VAR_DESC_NUM",
+		C2H_PFCH_CFG_2_VAR_DESC_NUM_MASK},
+	{"C2H_PFCH_CFG_2_NUM",
+		C2H_PFCH_CFG_2_NUM_MASK},
+};
+
+
+static struct regfield_info
+	c2h_int_timer_tick_field_info[] = {
+	{"C2H_INT_TIMER_TICK",
+		C2H_INT_TIMER_TICK_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_drop_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_DROP_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_DROP_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_err_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_ERR_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_ERR_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_req_field_info[] = {
+	{"C2H_STAT_DESC_REQ",
+		C2H_STAT_DESC_REQ_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_0_field_info[] = {
+	{"C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TVALID",
+		C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TVALID_MASK},
+	{"C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TREADY",
+		C2H_STAT_DMA_ENG_0_S_AXIS_C2H_TREADY_MASK},
+	{"C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TVALID",
+		C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TVALID_MASK},
+	{"C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TREADY",
+		C2H_STAT_DMA_ENG_0_S_AXIS_WRB_TREADY_MASK},
+	{"C2H_STAT_DMA_ENG_0_PLD_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_0_PLD_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_0_QID_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_0_QID_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_QID",
+		C2H_STAT_DMA_ENG_0_ARB_FIFO_OUT_QID_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRB_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_0_WRB_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRB_SM_CS",
+		C2H_STAT_DMA_ENG_0_WRB_SM_CS_MASK},
+	{"C2H_STAT_DMA_ENG_0_MAIN_SM_CS",
+		C2H_STAT_DMA_ENG_0_MAIN_SM_CS_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_1_field_info[] = {
+	{"C2H_STAT_DMA_ENG_1_RSVD_1",
+		C2H_STAT_DMA_ENG_1_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_1_QID_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_1_QID_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_1_PLD_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_1_PLD_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_1_PLD_ST_FIFO_CNT",
+		C2H_STAT_DMA_ENG_1_PLD_ST_FIFO_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_2_field_info[] = {
+	{"C2H_STAT_DMA_ENG_2_RSVD_1",
+		C2H_STAT_DMA_ENG_2_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_2_QID_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_2_QID_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_2_PLD_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_2_PLD_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_2_PLD_ST_FIFO_CNT",
+		C2H_STAT_DMA_ENG_2_PLD_ST_FIFO_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_3_field_info[] = {
+	{"C2H_STAT_DMA_ENG_3_RSVD_1",
+		C2H_STAT_DMA_ENG_3_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_3_QID_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_3_QID_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_3_PLD_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_EOP",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_EOP_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_DROP",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_DROP_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_ERR",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_OUT_DATA_ERR_MASK},
+	{"C2H_STAT_DMA_ENG_3_DESC_CNT_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_3_DESC_CNT_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_3_DESC_RSP_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_3_DESC_RSP_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_PKT_ID_LARGER_0",
+		C2H_STAT_DMA_ENG_3_PLD_PKT_ID_LARGER_0_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_VLD",
+		C2H_STAT_DMA_ENG_3_WRQ_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_RDY",
+		C2H_STAT_DMA_ENG_3_WRQ_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_RDY",
+		C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUT_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_DROP",
+		C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_DROP_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_ERR",
+		C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_ERR_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_MARKER",
+		C2H_STAT_DMA_ENG_3_WRQ_PACKET_OUT_DATA_MARKER_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_PACKET_PRE_EOR",
+		C2H_STAT_DMA_ENG_3_WRQ_PACKET_PRE_EOR_MASK},
+	{"C2H_STAT_DMA_ENG_3_WCP_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_3_WCP_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_3_PLD_ST_FIFO_IN_RDY_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_err_ctxt_field_info[] = {
+	{"C2H_PFCH_ERR_CTXT_RSVD_1",
+		C2H_PFCH_ERR_CTXT_RSVD_1_MASK},
+	{"C2H_PFCH_ERR_CTXT_ERR_STAT",
+		C2H_PFCH_ERR_CTXT_ERR_STAT_MASK},
+	{"C2H_PFCH_ERR_CTXT_CMD_WR",
+		C2H_PFCH_ERR_CTXT_CMD_WR_MASK},
+	{"C2H_PFCH_ERR_CTXT_QID",
+		C2H_PFCH_ERR_CTXT_QID_MASK},
+	{"C2H_PFCH_ERR_CTXT_DONE",
+		C2H_PFCH_ERR_CTXT_DONE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_first_err_qid_field_info[] = {
+	{"C2H_FIRST_ERR_QID_RSVD_1",
+		C2H_FIRST_ERR_QID_RSVD_1_MASK},
+	{"C2H_FIRST_ERR_QID_ERR_TYPE",
+		C2H_FIRST_ERR_QID_ERR_TYPE_MASK},
+	{"C2H_FIRST_ERR_QID_RSVD",
+		C2H_FIRST_ERR_QID_RSVD_MASK},
+	{"C2H_FIRST_ERR_QID_QID",
+		C2H_FIRST_ERR_QID_QID_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_wrb_in_field_info[] = {
+	{"STAT_NUM_WRB_IN_RSVD_1",
+		STAT_NUM_WRB_IN_RSVD_1_MASK},
+	{"STAT_NUM_WRB_IN_WRB_CNT",
+		STAT_NUM_WRB_IN_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_wrb_out_field_info[] = {
+	{"STAT_NUM_WRB_OUT_RSVD_1",
+		STAT_NUM_WRB_OUT_RSVD_1_MASK},
+	{"STAT_NUM_WRB_OUT_WRB_CNT",
+		STAT_NUM_WRB_OUT_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_wrb_drp_field_info[] = {
+	{"STAT_NUM_WRB_DRP_RSVD_1",
+		STAT_NUM_WRB_DRP_RSVD_1_MASK},
+	{"STAT_NUM_WRB_DRP_WRB_CNT",
+		STAT_NUM_WRB_DRP_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_stat_desc_out_field_info[] = {
+	{"STAT_NUM_STAT_DESC_OUT_RSVD_1",
+		STAT_NUM_STAT_DESC_OUT_RSVD_1_MASK},
+	{"STAT_NUM_STAT_DESC_OUT_CNT",
+		STAT_NUM_STAT_DESC_OUT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_dsc_crdt_sent_field_info[] = {
+	{"STAT_NUM_DSC_CRDT_SENT_RSVD_1",
+		STAT_NUM_DSC_CRDT_SENT_RSVD_1_MASK},
+	{"STAT_NUM_DSC_CRDT_SENT_CNT",
+		STAT_NUM_DSC_CRDT_SENT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_fch_dsc_rcvd_field_info[] = {
+	{"STAT_NUM_FCH_DSC_RCVD_RSVD_1",
+		STAT_NUM_FCH_DSC_RCVD_RSVD_1_MASK},
+	{"STAT_NUM_FCH_DSC_RCVD_DSC_CNT",
+		STAT_NUM_FCH_DSC_RCVD_DSC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_byp_dsc_rcvd_field_info[] = {
+	{"STAT_NUM_BYP_DSC_RCVD_RSVD_1",
+		STAT_NUM_BYP_DSC_RCVD_RSVD_1_MASK},
+	{"STAT_NUM_BYP_DSC_RCVD_DSC_CNT",
+		STAT_NUM_BYP_DSC_RCVD_DSC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_wrb_coal_cfg_field_info[] = {
+	{"C2H_WRB_COAL_CFG_MAX_BUF_SZ",
+		C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK},
+	{"C2H_WRB_COAL_CFG_TICK_VAL",
+		C2H_WRB_COAL_CFG_TICK_VAL_MASK},
+	{"C2H_WRB_COAL_CFG_TICK_CNT",
+		C2H_WRB_COAL_CFG_TICK_CNT_MASK},
+	{"C2H_WRB_COAL_CFG_SET_GLB_FLUSH",
+		C2H_WRB_COAL_CFG_SET_GLB_FLUSH_MASK},
+	{"C2H_WRB_COAL_CFG_DONE_GLB_FLUSH",
+		C2H_WRB_COAL_CFG_DONE_GLB_FLUSH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_req_field_info[] = {
+	{"C2H_INTR_H2C_REQ_RSVD_1",
+		C2H_INTR_H2C_REQ_RSVD_1_MASK},
+	{"C2H_INTR_H2C_REQ_CNT",
+		C2H_INTR_H2C_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_mm_req_field_info[] = {
+	{"C2H_INTR_C2H_MM_REQ_RSVD_1",
+		C2H_INTR_C2H_MM_REQ_RSVD_1_MASK},
+	{"C2H_INTR_C2H_MM_REQ_CNT",
+		C2H_INTR_C2H_MM_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_err_int_req_field_info[] = {
+	{"C2H_INTR_ERR_INT_REQ_RSVD_1",
+		C2H_INTR_ERR_INT_REQ_RSVD_1_MASK},
+	{"C2H_INTR_ERR_INT_REQ_CNT",
+		C2H_INTR_ERR_INT_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_req_field_info[] = {
+	{"C2H_INTR_C2H_ST_REQ_RSVD_1",
+		C2H_INTR_C2H_ST_REQ_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_REQ_CNT",
+		C2H_INTR_C2H_ST_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_ack_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_fail_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_msix_ack_field_info[] = {
+	{"C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1",
+		C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_MSIX_ACK_CNT",
+		C2H_INTR_C2H_ST_MSIX_ACK_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_msix_fail_field_info[] = {
+	{"C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1",
+		C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_MSIX_FAIL_CNT",
+		C2H_INTR_C2H_ST_MSIX_FAIL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_no_msix_field_info[] = {
+	{"C2H_INTR_C2H_ST_NO_MSIX_RSVD_1",
+		C2H_INTR_C2H_ST_NO_MSIX_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_NO_MSIX_CNT",
+		C2H_INTR_C2H_ST_NO_MSIX_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_ctxt_inval_field_info[] = {
+	{"C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1",
+		C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_CTXT_INVAL_CNT",
+		C2H_INTR_C2H_ST_CTXT_INVAL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wr_cmp_field_info[] = {
+	{"C2H_STAT_WR_CMP_RSVD_1",
+		C2H_STAT_WR_CMP_RSVD_1_MASK},
+	{"C2H_STAT_WR_CMP_CNT",
+		C2H_STAT_WR_CMP_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_4_field_info[] = {
+	{"C2H_STAT_DMA_ENG_4_RSVD_1",
+		C2H_STAT_DMA_ENG_4_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_4_QID_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_4_QID_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_4_PLD_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_EOP",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_EOP_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_AVL_IDX_ENABLE_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_DROP",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_DROP_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_ERR",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_OUT_DATA_ERR_MASK},
+	{"C2H_STAT_DMA_ENG_4_DESC_CNT_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_4_DESC_CNT_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_DESC_RSP_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_4_DESC_RSP_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_PKT_ID_LARGER_0",
+		C2H_STAT_DMA_ENG_4_PLD_PKT_ID_LARGER_0_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_VLD",
+		C2H_STAT_DMA_ENG_4_WRQ_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_RDY",
+		C2H_STAT_DMA_ENG_4_WRQ_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_RDY",
+		C2H_STAT_DMA_ENG_4_WRQ_FIFO_OUT_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_DROP",
+		C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_DROP_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_ERR",
+		C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_ERR_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_MARKER",
+		C2H_STAT_DMA_ENG_4_WRQ_PACKET_OUT_DATA_MARKER_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRQ_PACKET_PRE_EOR",
+		C2H_STAT_DMA_ENG_4_WRQ_PACKET_PRE_EOR_MASK},
+	{"C2H_STAT_DMA_ENG_4_WCP_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_4_WCP_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_4_PLD_ST_FIFO_IN_RDY_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_5_field_info[] = {
+	{"C2H_STAT_DMA_ENG_5_RSVD_1",
+		C2H_STAT_DMA_ENG_5_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_5_WRB_SM_VIRT_CH",
+		C2H_STAT_DMA_ENG_5_WRB_SM_VIRT_CH_MASK},
+	{"C2H_STAT_DMA_ENG_5_WRB_FIFO_IN_REQ",
+		C2H_STAT_DMA_ENG_5_WRB_FIFO_IN_REQ_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_LEN",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_LEN_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VIRT_CH",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VIRT_CH_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VAR_DESC",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_VAR_DESC_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_DROP_REQ",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_DROP_REQ_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_NUM_BUF_OV",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_NUM_BUF_OV_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_MARKER",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_MARKER_MASK},
+	{"C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_HAS_CMPT",
+		C2H_STAT_DMA_ENG_5_ARB_FIFO_OUT_DATA_HAS_CMPT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_qid_field_info[] = {
+	{"C2H_PFCH_QID_RSVD_1",
+		C2H_PFCH_QID_RSVD_1_MASK},
+	{"C2H_PFCH_QID_ERR_CTXT",
+		C2H_PFCH_QID_ERR_CTXT_MASK},
+	{"C2H_PFCH_QID_TARGET",
+		C2H_PFCH_QID_TARGET_MASK},
+	{"C2H_PFCH_QID_QID_OR_TAG",
+		C2H_PFCH_QID_QID_OR_TAG_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_field_info[] = {
+	{"C2H_PFCH_DATA",
+		C2H_PFCH_DATA_MASK},
+};
+
+
+static struct regfield_info
+	c2h_int_dbg_field_info[] = {
+	{"C2H_INT_RSVD_1",
+		C2H_INT_RSVD_1_MASK},
+	{"C2H_INT_INT_COAL_SM",
+		C2H_INT_INT_COAL_SM_MASK},
+	{"C2H_INT_INT_SM",
+		C2H_INT_INT_SM_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_imm_accepted_field_info[] = {
+	{"C2H_STAT_IMM_ACCEPTED_RSVD_1",
+		C2H_STAT_IMM_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_IMM_ACCEPTED_CNT",
+		C2H_STAT_IMM_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_marker_accepted_field_info[] = {
+	{"C2H_STAT_MARKER_ACCEPTED_RSVD_1",
+		C2H_STAT_MARKER_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_MARKER_ACCEPTED_CNT",
+		C2H_STAT_MARKER_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_disable_cmp_accepted_field_info[] = {
+	{"C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1",
+		C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_DISABLE_CMP_ACCEPTED_CNT",
+		C2H_STAT_DISABLE_CMP_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pld_fifo_crdt_cnt_field_info[] = {
+	{"C2H_PLD_FIFO_CRDT_CNT_RSVD_1",
+		C2H_PLD_FIFO_CRDT_CNT_RSVD_1_MASK},
+	{"C2H_PLD_FIFO_CRDT_CNT_CNT",
+		C2H_PLD_FIFO_CRDT_CNT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_dyn_req_field_info[] = {
+	{"C2H_INTR_DYN_REQ_RSVD_1",
+		C2H_INTR_DYN_REQ_RSVD_1_MASK},
+	{"C2H_INTR_DYN_REQ_CNT",
+		C2H_INTR_DYN_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_dyn_misc_field_info[] = {
+	{"C2H_INTR_DYN_MISC_RSVD_1",
+		C2H_INTR_DYN_MISC_RSVD_1_MASK},
+	{"C2H_INTR_DYN_MISC_CNT",
+		C2H_INTR_DYN_MISC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_drop_len_mismatch_field_info[] = {
+	{"C2H_DROP_LEN_MISMATCH_RSVD_1",
+		C2H_DROP_LEN_MISMATCH_RSVD_1_MASK},
+	{"C2H_DROP_LEN_MISMATCH_CNT",
+		C2H_DROP_LEN_MISMATCH_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_drop_desc_rsp_len_field_info[] = {
+	{"C2H_DROP_DESC_RSP_LEN_RSVD_1",
+		C2H_DROP_DESC_RSP_LEN_RSVD_1_MASK},
+	{"C2H_DROP_DESC_RSP_LEN_CNT",
+		C2H_DROP_DESC_RSP_LEN_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_drop_qid_fifo_len_field_info[] = {
+	{"C2H_DROP_QID_FIFO_LEN_RSVD_1",
+		C2H_DROP_QID_FIFO_LEN_RSVD_1_MASK},
+	{"C2H_DROP_QID_FIFO_LEN_CNT",
+		C2H_DROP_QID_FIFO_LEN_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_drop_pld_cnt_field_info[] = {
+	{"C2H_DROP_PLD_CNT_RSVD_1",
+		C2H_DROP_PLD_CNT_RSVD_1_MASK},
+	{"C2H_DROP_PLD_CNT_CNT",
+		C2H_DROP_PLD_CNT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_0_field_info[] = {
+	{"C2H_CMPT_FORMAT_0_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_0_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_0_COLOR_LOC",
+		C2H_CMPT_FORMAT_0_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_1_field_info[] = {
+	{"C2H_CMPT_FORMAT_1_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_1_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_1_COLOR_LOC",
+		C2H_CMPT_FORMAT_1_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_2_field_info[] = {
+	{"C2H_CMPT_FORMAT_2_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_2_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_2_COLOR_LOC",
+		C2H_CMPT_FORMAT_2_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_3_field_info[] = {
+	{"C2H_CMPT_FORMAT_3_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_3_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_3_COLOR_LOC",
+		C2H_CMPT_FORMAT_3_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_4_field_info[] = {
+	{"C2H_CMPT_FORMAT_4_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_4_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_4_COLOR_LOC",
+		C2H_CMPT_FORMAT_4_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_5_field_info[] = {
+	{"C2H_CMPT_FORMAT_5_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_5_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_5_COLOR_LOC",
+		C2H_CMPT_FORMAT_5_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cmpt_format_6_field_info[] = {
+	{"C2H_CMPT_FORMAT_6_DESC_ERR_LOC",
+		C2H_CMPT_FORMAT_6_DESC_ERR_LOC_MASK},
+	{"C2H_CMPT_FORMAT_6_COLOR_LOC",
+		C2H_CMPT_FORMAT_6_COLOR_LOC_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_cache_depth_field_info[] = {
+	{"C2H_PFCH_CACHE_DEPTH_MAX_STBUF",
+		C2H_PFCH_CACHE_DEPTH_MAX_STBUF_MASK},
+	{"C2H_PFCH_CACHE_DEPTH",
+		C2H_PFCH_CACHE_DEPTH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_wrb_coal_buf_depth_field_info[] = {
+	{"C2H_WRB_COAL_BUF_DEPTH_RSVD_1",
+		C2H_WRB_COAL_BUF_DEPTH_RSVD_1_MASK},
+	{"C2H_WRB_COAL_BUF_DEPTH_BUFFER",
+		C2H_WRB_COAL_BUF_DEPTH_BUFFER_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_crdt_field_info[] = {
+	{"C2H_PFCH_CRDT_RSVD_1",
+		C2H_PFCH_CRDT_RSVD_1_MASK},
+	{"C2H_PFCH_CRDT_RSVD_2",
+		C2H_PFCH_CRDT_RSVD_2_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_has_cmpt_accepted_field_info[] = {
+	{"C2H_STAT_HAS_CMPT_ACCEPTED_RSVD_1",
+		C2H_STAT_HAS_CMPT_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_HAS_CMPT_ACCEPTED_CNT",
+		C2H_STAT_HAS_CMPT_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_has_pld_accepted_field_info[] = {
+	{"C2H_STAT_HAS_PLD_ACCEPTED_RSVD_1",
+		C2H_STAT_HAS_PLD_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_HAS_PLD_ACCEPTED_CNT",
+		C2H_STAT_HAS_PLD_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pld_pkt_id_field_info[] = {
+	{"C2H_PLD_PKT_ID_CMPT_WAIT",
+		C2H_PLD_PKT_ID_CMPT_WAIT_MASK},
+	{"C2H_PLD_PKT_ID_DATA",
+		C2H_PLD_PKT_ID_DATA_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pld_pkt_id_1_field_info[] = {
+	{"C2H_PLD_PKT_ID_1_CMPT_WAIT",
+		C2H_PLD_PKT_ID_1_CMPT_WAIT_MASK},
+	{"C2H_PLD_PKT_ID_1_DATA",
+		C2H_PLD_PKT_ID_1_DATA_MASK},
+};
+
+
+static struct regfield_info
+	c2h_drop_pld_cnt_1_field_info[] = {
+	{"C2H_DROP_PLD_CNT_1_RSVD_1",
+		C2H_DROP_PLD_CNT_1_RSVD_1_MASK},
+	{"C2H_DROP_PLD_CNT_1_CNT",
+		C2H_DROP_PLD_CNT_1_CNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_err_stat_field_info[] = {
+	{"H2C_ERR_STAT_RSVD_1",
+		H2C_ERR_STAT_RSVD_1_MASK},
+	{"H2C_ERR_STAT_PAR_ERR",
+		H2C_ERR_STAT_PAR_ERR_MASK},
+	{"H2C_ERR_STAT_SBE",
+		H2C_ERR_STAT_SBE_MASK},
+	{"H2C_ERR_STAT_DBE",
+		H2C_ERR_STAT_DBE_MASK},
+	{"H2C_ERR_STAT_NO_DMA_DS",
+		H2C_ERR_STAT_NO_DMA_DS_MASK},
+	{"H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR",
+		H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK},
+	{"H2C_ERR_STAT_ZERO_LEN_DS",
+		H2C_ERR_STAT_ZERO_LEN_DS_MASK},
+};
+
+
+static struct regfield_info
+	h2c_err_mask_field_info[] = {
+	{"H2C_ERR_EN",
+		H2C_ERR_EN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_first_err_qid_field_info[] = {
+	{"H2C_FIRST_ERR_QID_RSVD_1",
+		H2C_FIRST_ERR_QID_RSVD_1_MASK},
+	{"H2C_FIRST_ERR_QID_ERR_TYPE",
+		H2C_FIRST_ERR_QID_ERR_TYPE_MASK},
+	{"H2C_FIRST_ERR_QID_RSVD_2",
+		H2C_FIRST_ERR_QID_RSVD_2_MASK},
+	{"H2C_FIRST_ERR_QID_QID",
+		H2C_FIRST_ERR_QID_QID_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg0_field_info[] = {
+	{"H2C_REG0_NUM_DSC_RCVD",
+		H2C_REG0_NUM_DSC_RCVD_MASK},
+	{"H2C_REG0_NUM_WRB_SENT",
+		H2C_REG0_NUM_WRB_SENT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg1_field_info[] = {
+	{"H2C_REG1_NUM_REQ_SENT",
+		H2C_REG1_NUM_REQ_SENT_MASK},
+	{"H2C_REG1_NUM_CMP_SENT",
+		H2C_REG1_NUM_CMP_SENT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg2_field_info[] = {
+	{"H2C_REG2_RSVD_1",
+		H2C_REG2_RSVD_1_MASK},
+	{"H2C_REG2_NUM_ERR_DSC_RCVD",
+		H2C_REG2_NUM_ERR_DSC_RCVD_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg3_field_info[] = {
+	{"H2C_REG3_RSVD_1",
+		H2C_REG3_RSVD_1_MASK},
+	{"H2C_REG3_DSCO_FIFO_EMPTY",
+		H2C_REG3_DSCO_FIFO_EMPTY_MASK},
+	{"H2C_REG3_DSCO_FIFO_FULL",
+		H2C_REG3_DSCO_FIFO_FULL_MASK},
+	{"H2C_REG3_CUR_RC_STATE",
+		H2C_REG3_CUR_RC_STATE_MASK},
+	{"H2C_REG3_RDREQ_LINES",
+		H2C_REG3_RDREQ_LINES_MASK},
+	{"H2C_REG3_RDATA_LINES_AVAIL",
+		H2C_REG3_RDATA_LINES_AVAIL_MASK},
+	{"H2C_REG3_PEND_FIFO_EMPTY",
+		H2C_REG3_PEND_FIFO_EMPTY_MASK},
+	{"H2C_REG3_PEND_FIFO_FULL",
+		H2C_REG3_PEND_FIFO_FULL_MASK},
+	{"H2C_REG3_CUR_RQ_STATE",
+		H2C_REG3_CUR_RQ_STATE_MASK},
+	{"H2C_REG3_DSCI_FIFO_FULL",
+		H2C_REG3_DSCI_FIFO_FULL_MASK},
+	{"H2C_REG3_DSCI_FIFO_EMPTY",
+		H2C_REG3_DSCI_FIFO_EMPTY_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg4_field_info[] = {
+	{"H2C_REG4_RDREQ_ADDR",
+		H2C_REG4_RDREQ_ADDR_MASK},
+};
+
+
+static struct regfield_info
+	h2c_fatal_err_en_field_info[] = {
+	{"H2C_FATAL_ERR_EN_RSVD_1",
+		H2C_FATAL_ERR_EN_RSVD_1_MASK},
+	{"H2C_FATAL_ERR_EN_H2C",
+		H2C_FATAL_ERR_EN_H2C_MASK},
+};
+
+
+static struct regfield_info
+	h2c_req_throt_pcie_field_info[] = {
+	{"H2C_REQ_THROT_PCIE_EN_REQ",
+		H2C_REQ_THROT_PCIE_EN_REQ_MASK},
+	{"H2C_REQ_THROT_PCIE",
+		H2C_REQ_THROT_PCIE_MASK},
+	{"H2C_REQ_THROT_PCIE_EN_DATA",
+		H2C_REQ_THROT_PCIE_EN_DATA_MASK},
+	{"H2C_REQ_THROT_PCIE_DATA_THRESH",
+		H2C_REQ_THROT_PCIE_DATA_THRESH_MASK},
+};
+
+
+static struct regfield_info
+	h2c_aln_dbg_reg0_field_info[] = {
+	{"H2C_ALN_REG0_NUM_PKT_SENT",
+		H2C_ALN_REG0_NUM_PKT_SENT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_req_throt_aximm_field_info[] = {
+	{"H2C_REQ_THROT_AXIMM_EN_REQ",
+		H2C_REQ_THROT_AXIMM_EN_REQ_MASK},
+	{"H2C_REQ_THROT_AXIMM",
+		H2C_REQ_THROT_AXIMM_MASK},
+	{"H2C_REQ_THROT_AXIMM_EN_DATA",
+		H2C_REQ_THROT_AXIMM_EN_DATA_MASK},
+	{"H2C_REQ_THROT_AXIMM_DATA_THRESH",
+		H2C_REQ_THROT_AXIMM_DATA_THRESH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_ctl_field_info[] = {
+	{"C2H_MM_CTL_RESERVED1",
+		C2H_MM_CTL_RESERVED1_MASK},
+	{"C2H_MM_CTL_ERRC_EN",
+		C2H_MM_CTL_ERRC_EN_MASK},
+	{"C2H_MM_CTL_RESERVED0",
+		C2H_MM_CTL_RESERVED0_MASK},
+	{"C2H_MM_CTL_RUN",
+		C2H_MM_CTL_RUN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_status_field_info[] = {
+	{"C2H_MM_STATUS_RSVD_1",
+		C2H_MM_STATUS_RSVD_1_MASK},
+	{"C2H_MM_STATUS_RUN",
+		C2H_MM_STATUS_RUN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_cmpl_desc_cnt_field_info[] = {
+	{"C2H_MM_CMPL_DESC_CNT_C2H_CO",
+		C2H_MM_CMPL_DESC_CNT_C2H_CO_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_code_enable_mask_field_info[] = {
+	{"C2H_MM_ERR_CODE_ENABLE_RESERVED1",
+		C2H_MM_ERR_CODE_ENABLE_RESERVED1_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM",
+		C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_UR",
+		C2H_MM_ERR_CODE_ENABLE_WR_UR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_FLR",
+		C2H_MM_ERR_CODE_ENABLE_WR_FLR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_RESERVED0",
+		C2H_MM_ERR_CODE_ENABLE_RESERVED0_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR",
+		C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR",
+		C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_code_field_info[] = {
+	{"C2H_MM_ERR_CODE_RESERVED1",
+		C2H_MM_ERR_CODE_RESERVED1_MASK},
+	{"C2H_MM_ERR_CODE_CIDX",
+		C2H_MM_ERR_CODE_CIDX_MASK},
+	{"C2H_MM_ERR_CODE_RESERVED0",
+		C2H_MM_ERR_CODE_RESERVED0_MASK},
+	{"C2H_MM_ERR_CODE_SUB_TYPE",
+		C2H_MM_ERR_CODE_SUB_TYPE_MASK},
+	{"C2H_MM_ERR_CODE",
+		C2H_MM_ERR_CODE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_info_field_info[] = {
+	{"C2H_MM_ERR_INFO_VALID",
+		C2H_MM_ERR_INFO_VALID_MASK},
+	{"C2H_MM_ERR_INFO_SEL",
+		C2H_MM_ERR_INFO_SEL_MASK},
+	{"C2H_MM_ERR_INFO_RSVD_1",
+		C2H_MM_ERR_INFO_RSVD_1_MASK},
+	{"C2H_MM_ERR_INFO_QID",
+		C2H_MM_ERR_INFO_QID_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_ctl_field_info[] = {
+	{"C2H_MM_PERF_MON_CTL_RSVD_1",
+		C2H_MM_PERF_MON_CTL_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_CTL_IMM_START",
+		C2H_MM_PERF_MON_CTL_IMM_START_MASK},
+	{"C2H_MM_PERF_MON_CTL_RUN_START",
+		C2H_MM_PERF_MON_CTL_RUN_START_MASK},
+	{"C2H_MM_PERF_MON_CTL_IMM_CLEAR",
+		C2H_MM_PERF_MON_CTL_IMM_CLEAR_MASK},
+	{"C2H_MM_PERF_MON_CTL_RUN_CLEAR",
+		C2H_MM_PERF_MON_CTL_RUN_CLEAR_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_cycle_cnt0_field_info[] = {
+	{"C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT",
+		C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_cycle_cnt1_field_info[] = {
+	{"C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1",
+		C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT",
+		C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_data_cnt0_field_info[] = {
+	{"C2H_MM_PERF_MON_DATA_CNT0_DCNT",
+		C2H_MM_PERF_MON_DATA_CNT0_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_data_cnt1_field_info[] = {
+	{"C2H_MM_PERF_MON_DATA_CNT1_RSVD_1",
+		C2H_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_DATA_CNT1_DCNT",
+		C2H_MM_PERF_MON_DATA_CNT1_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_dbg_field_info[] = {
+	{"C2H_MM_RSVD_1",
+		C2H_MM_RSVD_1_MASK},
+	{"C2H_MM_RRQ_ENTRIES",
+		C2H_MM_RRQ_ENTRIES_MASK},
+	{"C2H_MM_DAT_FIFO_SPC",
+		C2H_MM_DAT_FIFO_SPC_MASK},
+	{"C2H_MM_RD_STALL",
+		C2H_MM_RD_STALL_MASK},
+	{"C2H_MM_RRQ_FIFO_FI",
+		C2H_MM_RRQ_FIFO_FI_MASK},
+	{"C2H_MM_WR_STALL",
+		C2H_MM_WR_STALL_MASK},
+	{"C2H_MM_WRQ_FIFO_FI",
+		C2H_MM_WRQ_FIFO_FI_MASK},
+	{"C2H_MM_WBK_STALL",
+		C2H_MM_WBK_STALL_MASK},
+	{"C2H_MM_DSC_FIFO_EP",
+		C2H_MM_DSC_FIFO_EP_MASK},
+	{"C2H_MM_DSC_FIFO_FL",
+		C2H_MM_DSC_FIFO_FL_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_ctl_field_info[] = {
+	{"H2C_MM_CTL_RESERVED1",
+		H2C_MM_CTL_RESERVED1_MASK},
+	{"H2C_MM_CTL_ERRC_EN",
+		H2C_MM_CTL_ERRC_EN_MASK},
+	{"H2C_MM_CTL_RESERVED0",
+		H2C_MM_CTL_RESERVED0_MASK},
+	{"H2C_MM_CTL_RUN",
+		H2C_MM_CTL_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_status_field_info[] = {
+	{"H2C_MM_STATUS_RSVD_1",
+		H2C_MM_STATUS_RSVD_1_MASK},
+	{"H2C_MM_STATUS_RUN",
+		H2C_MM_STATUS_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_cmpl_desc_cnt_field_info[] = {
+	{"H2C_MM_CMPL_DESC_CNT_H2C_CO",
+		H2C_MM_CMPL_DESC_CNT_H2C_CO_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_code_enable_mask_field_info[] = {
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED5",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED5_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR",
+		H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR",
+		H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED4",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED4_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED3",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED3_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED2",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED2_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED1",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED1_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_UR_CA",
+		H2C_MM_ERR_CODE_ENABLE_RD_UR_CA_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RESERVED0",
+		H2C_MM_ERR_CODE_ENABLE_RESERVED0_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_code_field_info[] = {
+	{"H2C_MM_ERR_CODE_RSVD_1",
+		H2C_MM_ERR_CODE_RSVD_1_MASK},
+	{"H2C_MM_ERR_CODE_CIDX",
+		H2C_MM_ERR_CODE_CIDX_MASK},
+	{"H2C_MM_ERR_CODE_RESERVED0",
+		H2C_MM_ERR_CODE_RESERVED0_MASK},
+	{"H2C_MM_ERR_CODE_SUB_TYPE",
+		H2C_MM_ERR_CODE_SUB_TYPE_MASK},
+	{"H2C_MM_ERR_CODE",
+		H2C_MM_ERR_CODE_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_info_field_info[] = {
+	{"H2C_MM_ERR_INFO_VALID",
+		H2C_MM_ERR_INFO_VALID_MASK},
+	{"H2C_MM_ERR_INFO_SEL",
+		H2C_MM_ERR_INFO_SEL_MASK},
+	{"H2C_MM_ERR_INFO_RSVD_1",
+		H2C_MM_ERR_INFO_RSVD_1_MASK},
+	{"H2C_MM_ERR_INFO_QID",
+		H2C_MM_ERR_INFO_QID_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_ctl_field_info[] = {
+	{"H2C_MM_PERF_MON_CTL_RSVD_1",
+		H2C_MM_PERF_MON_CTL_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_CTL_IMM_START",
+		H2C_MM_PERF_MON_CTL_IMM_START_MASK},
+	{"H2C_MM_PERF_MON_CTL_RUN_START",
+		H2C_MM_PERF_MON_CTL_RUN_START_MASK},
+	{"H2C_MM_PERF_MON_CTL_IMM_CLEAR",
+		H2C_MM_PERF_MON_CTL_IMM_CLEAR_MASK},
+	{"H2C_MM_PERF_MON_CTL_RUN_CLEAR",
+		H2C_MM_PERF_MON_CTL_RUN_CLEAR_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_cycle_cnt0_field_info[] = {
+	{"H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT",
+		H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_cycle_cnt1_field_info[] = {
+	{"H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1",
+		H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT",
+		H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_data_cnt0_field_info[] = {
+	{"H2C_MM_PERF_MON_DATA_CNT0_DCNT",
+		H2C_MM_PERF_MON_DATA_CNT0_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_data_cnt1_field_info[] = {
+	{"H2C_MM_PERF_MON_DATA_CNT1_RSVD_1",
+		H2C_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_DATA_CNT1_DCNT",
+		H2C_MM_PERF_MON_DATA_CNT1_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_dbg_field_info[] = {
+	{"H2C_MM_RSVD_1",
+		H2C_MM_RSVD_1_MASK},
+	{"H2C_MM_RRQ_ENTRIES",
+		H2C_MM_RRQ_ENTRIES_MASK},
+	{"H2C_MM_DAT_FIFO_SPC",
+		H2C_MM_DAT_FIFO_SPC_MASK},
+	{"H2C_MM_RD_STALL",
+		H2C_MM_RD_STALL_MASK},
+	{"H2C_MM_RRQ_FIFO_FI",
+		H2C_MM_RRQ_FIFO_FI_MASK},
+	{"H2C_MM_WR_STALL",
+		H2C_MM_WR_STALL_MASK},
+	{"H2C_MM_WRQ_FIFO_FI",
+		H2C_MM_WRQ_FIFO_FI_MASK},
+	{"H2C_MM_WBK_STALL",
+		H2C_MM_WBK_STALL_MASK},
+	{"H2C_MM_DSC_FIFO_EP",
+		H2C_MM_DSC_FIFO_EP_MASK},
+	{"H2C_MM_DSC_FIFO_FL",
+		H2C_MM_DSC_FIFO_FL_MASK},
+};
+
+
+static struct regfield_info
+	c2h_crdt_coal_cfg_1_field_info[] = {
+	{"C2H_CRDT_COAL_CFG_1_RSVD_1",
+		C2H_CRDT_COAL_CFG_1_RSVD_1_MASK},
+	{"C2H_CRDT_COAL_CFG_1_PLD_FIFO_TH",
+		C2H_CRDT_COAL_CFG_1_PLD_FIFO_TH_MASK},
+	{"C2H_CRDT_COAL_CFG_1_TIMER_TH",
+		C2H_CRDT_COAL_CFG_1_TIMER_TH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_crdt_coal_cfg_2_field_info[] = {
+	{"C2H_CRDT_COAL_CFG_2_RSVD_1",
+		C2H_CRDT_COAL_CFG_2_RSVD_1_MASK},
+	{"C2H_CRDT_COAL_CFG_2_FIFO_TH",
+		C2H_CRDT_COAL_CFG_2_FIFO_TH_MASK},
+	{"C2H_CRDT_COAL_CFG_2_RESERVED1",
+		C2H_CRDT_COAL_CFG_2_RESERVED1_MASK},
+	{"C2H_CRDT_COAL_CFG_2_NT_TH",
+		C2H_CRDT_COAL_CFG_2_NT_TH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_byp_qid_field_info[] = {
+	{"C2H_PFCH_BYP_QID_RSVD_1",
+		C2H_PFCH_BYP_QID_RSVD_1_MASK},
+	{"C2H_PFCH_BYP_QID",
+		C2H_PFCH_BYP_QID_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pfch_byp_tag_field_info[] = {
+	{"C2H_PFCH_BYP_TAG_RSVD_1",
+		C2H_PFCH_BYP_TAG_RSVD_1_MASK},
+	{"C2H_PFCH_BYP_TAG_BYP_QID",
+		C2H_PFCH_BYP_TAG_BYP_QID_MASK},
+	{"C2H_PFCH_BYP_TAG_RSVD_2",
+		C2H_PFCH_BYP_TAG_RSVD_2_MASK},
+	{"C2H_PFCH_BYP_TAG",
+		C2H_PFCH_BYP_TAG_MASK},
+};
+
+
+static struct regfield_info
+	c2h_water_mark_field_info[] = {
+	{"C2H_WATER_MARK_HIGH_WM",
+		C2H_WATER_MARK_HIGH_WM_MASK},
+	{"C2H_WATER_MARK_LOW_WM",
+		C2H_WATER_MARK_LOW_WM_MASK},
+};
+
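+/*
+ * EQDMA config-space register table. Entries are initialized
+ * positionally; assuming the xreg_info layout used by this access
+ * library, the values after the name and address offset are the repeat
+ * count, address step, bit shift and width (for repeated register
+ * banks), a debug-register flag, the queue mode in which the register
+ * applies (MM, ST or completion) and its read permission (PF only, or
+ * PF and VF), followed by the bit-field count and field table.
+ *
+ * Illustrative sketch only (member and helper names are assumptions,
+ * not part of this patch): a dump routine would typically decode one
+ * entry along the lines of
+ *
+ *	val = qdma_reg_read(dev_hndl, reg->addr);
+ *	for (i = 0; i < reg->num_bitfields; i++)
+ *		printf("%s = 0x%x\n", reg->bitfields[i].field_name,
+ *		       FIELD_GET(reg->bitfields[i].field_mask, val));
+ */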
+static struct xreg_info eqdma_config_regs[] = {
+{"CFG_BLK_IDENTIFIER", 0x00,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_identifier_field_info),
+	cfg_blk_identifier_field_info
+},
+{"CFG_BLK_PCIE_MAX_PLD_SIZE", 0x08,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_pcie_max_pld_size_field_info),
+	cfg_blk_pcie_max_pld_size_field_info
+},
+{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE", 0x0c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_pcie_max_read_req_size_field_info),
+	cfg_blk_pcie_max_read_req_size_field_info
+},
+{"CFG_BLK_SYSTEM_ID", 0x10,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_system_id_field_info),
+	cfg_blk_system_id_field_info
+},
+{"CFG_BLK_MSIX_ENABLE", 0x014,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_msix_enable_field_info),
+	cfg_blk_msix_enable_field_info
+},
+{"CFG_PCIE_DATA_WIDTH", 0x18,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_pcie_data_width_field_info),
+	cfg_pcie_data_width_field_info
+},
+{"CFG_PCIE_CTL", 0x1c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_pcie_ctl_field_info),
+	cfg_pcie_ctl_field_info
+},
+{"CFG_BLK_MSI_ENABLE", 0x20,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_msi_enable_field_info),
+	cfg_blk_msi_enable_field_info
+},
+{"CFG_AXI_USER_MAX_PLD_SIZE", 0x40,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_axi_user_max_pld_size_field_info),
+	cfg_axi_user_max_pld_size_field_info
+},
+{"CFG_AXI_USER_MAX_READ_REQ_SIZE", 0x44,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_axi_user_max_read_req_size_field_info),
+	cfg_axi_user_max_read_req_size_field_info
+},
+{"CFG_BLK_MISC_CTL", 0x4c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_misc_ctl_field_info),
+	cfg_blk_misc_ctl_field_info
+},
+{"CFG_PL_CRED_CTL", 0x68,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_pl_cred_ctl_field_info),
+	cfg_pl_cred_ctl_field_info
+},
+{"CFG_BLK_SCRATCH", 0x80,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_field_info),
+	cfg_blk_scratch_field_info
+},
+{"CFG_GIC", 0xa0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_gic_field_info),
+	cfg_gic_field_info
+},
+{"RAM_SBE_MSK_1_A", 0xe0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_msk_1_a_field_info),
+	ram_sbe_msk_1_a_field_info
+},
+{"RAM_SBE_STS_1_A", 0xe4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_sts_1_a_field_info),
+	ram_sbe_sts_1_a_field_info
+},
+{"RAM_DBE_MSK_1_A", 0xe8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_msk_1_a_field_info),
+	ram_dbe_msk_1_a_field_info
+},
+{"RAM_DBE_STS_1_A", 0xec,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_sts_1_a_field_info),
+	ram_dbe_sts_1_a_field_info
+},
+{"RAM_SBE_MSK_A", 0xf0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_msk_a_field_info),
+	ram_sbe_msk_a_field_info
+},
+{"RAM_SBE_STS_A", 0xf4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_sts_a_field_info),
+	ram_sbe_sts_a_field_info
+},
+{"RAM_DBE_MSK_A", 0xf8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_msk_a_field_info),
+	ram_dbe_msk_a_field_info
+},
+{"RAM_DBE_STS_A", 0xfc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_sts_a_field_info),
+	ram_dbe_sts_a_field_info
+},
+{"GLBL2_IDENTIFIER", 0x100,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_identifier_field_info),
+	glbl2_identifier_field_info
+},
+{"GLBL2_CHANNEL_INST", 0x114,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_inst_field_info),
+	glbl2_channel_inst_field_info
+},
+{"GLBL2_CHANNEL_MDMA", 0x118,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_mdma_field_info),
+	glbl2_channel_mdma_field_info
+},
+{"GLBL2_CHANNEL_STRM", 0x11c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_strm_field_info),
+	glbl2_channel_strm_field_info
+},
+{"GLBL2_CHANNEL_CAP", 0x120,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_cap_field_info),
+	glbl2_channel_cap_field_info
+},
+{"GLBL2_CHANNEL_PASID_CAP", 0x128,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_pasid_cap_field_info),
+	glbl2_channel_pasid_cap_field_info
+},
+{"GLBL2_SYSTEM_ID", 0x130,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_system_id_field_info),
+	glbl2_system_id_field_info
+},
+{"GLBL2_MISC_CAP", 0x134,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_misc_cap_field_info),
+	glbl2_misc_cap_field_info
+},
+{"GLBL2_DBG_PCIE_RQ0", 0x1b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_pcie_rq0_field_info),
+	glbl2_dbg_pcie_rq0_field_info
+},
+{"GLBL2_DBG_PCIE_RQ1", 0x1bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_pcie_rq1_field_info),
+	glbl2_dbg_pcie_rq1_field_info
+},
+{"GLBL2_DBG_AXIMM_WR0", 0x1c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_wr0_field_info),
+	glbl2_dbg_aximm_wr0_field_info
+},
+{"GLBL2_DBG_AXIMM_WR1", 0x1c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_wr1_field_info),
+	glbl2_dbg_aximm_wr1_field_info
+},
+{"GLBL2_DBG_AXIMM_RD0", 0x1c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_rd0_field_info),
+	glbl2_dbg_aximm_rd0_field_info
+},
+{"GLBL2_DBG_AXIMM_RD1", 0x1cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_rd1_field_info),
+	glbl2_dbg_aximm_rd1_field_info
+},
+{"GLBL2_DBG_FAB0", 0x1d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_fab0_field_info),
+	glbl2_dbg_fab0_field_info
+},
+{"GLBL2_DBG_FAB1", 0x1d4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_fab1_field_info),
+	glbl2_dbg_fab1_field_info
+},
+{"GLBL2_DBG_MATCH_SEL", 0x1f4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_match_sel_field_info),
+	glbl2_dbg_match_sel_field_info
+},
+{"GLBL2_DBG_MATCH_MSK", 0x1f8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_match_msk_field_info),
+	glbl2_dbg_match_msk_field_info
+},
+{"GLBL2_DBG_MATCH_PAT", 0x1fc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_match_pat_field_info),
+	glbl2_dbg_match_pat_field_info
+},
+{"GLBL_RNG_SZ_1", 0x204,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_1_field_info),
+	glbl_rng_sz_1_field_info
+},
+{"GLBL_RNG_SZ_2", 0x208,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_2_field_info),
+	glbl_rng_sz_2_field_info
+},
+{"GLBL_RNG_SZ_3", 0x20c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_3_field_info),
+	glbl_rng_sz_3_field_info
+},
+{"GLBL_RNG_SZ_4", 0x210,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_4_field_info),
+	glbl_rng_sz_4_field_info
+},
+{"GLBL_RNG_SZ_5", 0x214,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_5_field_info),
+	glbl_rng_sz_5_field_info
+},
+{"GLBL_RNG_SZ_6", 0x218,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_6_field_info),
+	glbl_rng_sz_6_field_info
+},
+{"GLBL_RNG_SZ_7", 0x21c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_7_field_info),
+	glbl_rng_sz_7_field_info
+},
+{"GLBL_RNG_SZ_8", 0x220,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_8_field_info),
+	glbl_rng_sz_8_field_info
+},
+{"GLBL_RNG_SZ_9", 0x224,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_9_field_info),
+	glbl_rng_sz_9_field_info
+},
+{"GLBL_RNG_SZ_A", 0x228,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_a_field_info),
+	glbl_rng_sz_a_field_info
+},
+{"GLBL_RNG_SZ_B", 0x22c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_b_field_info),
+	glbl_rng_sz_b_field_info
+},
+{"GLBL_RNG_SZ_C", 0x230,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_c_field_info),
+	glbl_rng_sz_c_field_info
+},
+{"GLBL_RNG_SZ_D", 0x234,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_d_field_info),
+	glbl_rng_sz_d_field_info
+},
+{"GLBL_RNG_SZ_E", 0x238,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_e_field_info),
+	glbl_rng_sz_e_field_info
+},
+{"GLBL_RNG_SZ_F", 0x23c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_f_field_info),
+	glbl_rng_sz_f_field_info
+},
+{"GLBL_RNG_SZ_10", 0x240,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_10_field_info),
+	glbl_rng_sz_10_field_info
+},
+{"GLBL_ERR_STAT", 0x248,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_stat_field_info),
+	glbl_err_stat_field_info
+},
+{"GLBL_ERR_MASK", 0x24c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_mask_field_info),
+	glbl_err_mask_field_info
+},
+{"GLBL_DSC_CFG", 0x250,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_cfg_field_info),
+	glbl_dsc_cfg_field_info
+},
+{"GLBL_DSC_ERR_STS", 0x254,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_sts_field_info),
+	glbl_dsc_err_sts_field_info
+},
+{"GLBL_DSC_ERR_MSK", 0x258,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_msk_field_info),
+	glbl_dsc_err_msk_field_info
+},
+{"GLBL_DSC_ERR_LOG0", 0x25c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_log0_field_info),
+	glbl_dsc_err_log0_field_info
+},
+{"GLBL_DSC_ERR_LOG1", 0x260,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_log1_field_info),
+	glbl_dsc_err_log1_field_info
+},
+{"GLBL_TRQ_ERR_STS", 0x264,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_sts_field_info),
+	glbl_trq_err_sts_field_info
+},
+{"GLBL_TRQ_ERR_MSK", 0x268,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_msk_field_info),
+	glbl_trq_err_msk_field_info
+},
+{"GLBL_TRQ_ERR_LOG", 0x26c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_log_field_info),
+	glbl_trq_err_log_field_info
+},
+{"GLBL_DSC_DBG_DAT0", 0x270,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_dbg_dat0_field_info),
+	glbl_dsc_dbg_dat0_field_info
+},
+{"GLBL_DSC_DBG_DAT1", 0x274,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_dbg_dat1_field_info),
+	glbl_dsc_dbg_dat1_field_info
+},
+{"GLBL_DSC_DBG_CTL", 0x278,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_dbg_ctl_field_info),
+	glbl_dsc_dbg_ctl_field_info
+},
+{"GLBL_DSC_ERR_LOG2", 0x27c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_log2_field_info),
+	glbl_dsc_err_log2_field_info
+},
+{"GLBL_GLBL_INTERRUPT_CFG", 0x2c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_glbl_interrupt_cfg_field_info),
+	glbl_glbl_interrupt_cfg_field_info
+},
+{"GLBL_VCH_HOST_PROFILE", 0x2c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_vch_host_profile_field_info),
+	glbl_vch_host_profile_field_info
+},
+{"GLBL_BRIDGE_HOST_PROFILE", 0x308,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_bridge_host_profile_field_info),
+	glbl_bridge_host_profile_field_info
+},
+{"AXIMM_IRQ_DEST_ADDR", 0x30c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(aximm_irq_dest_addr_field_info),
+	aximm_irq_dest_addr_field_info
+},
+{"FAB_ERR_LOG", 0x314,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(fab_err_log_field_info),
+	fab_err_log_field_info
+},
+{"GLBL_REQ_ERR_STS", 0x318,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl_req_err_sts_field_info),
+	glbl_req_err_sts_field_info
+},
+{"GLBL_REQ_ERR_MSK", 0x31c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl_req_err_msk_field_info),
+	glbl_req_err_msk_field_info
+},
+{"IND_CTXT_DATA", 0x804,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_data_field_info),
+	ind_ctxt_data_field_info
+},
+{"IND_CTXT_MASK", 0x824,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_mask_field_info),
+	ind_ctxt_mask_field_info
+},
+{"IND_CTXT_CMD", 0x844,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_cmd_field_info),
+	ind_ctxt_cmd_field_info
+},
+{"C2H_TIMER_CNT", 0xa00,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_field_info),
+	c2h_timer_cnt_field_info
+},
+{"C2H_CNT_TH", 0xa40,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_field_info),
+	c2h_cnt_th_field_info
+},
+{"C2H_STAT_S_AXIS_C2H_ACCEPTED", 0xa88,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_s_axis_c2h_accepted_field_info),
+	c2h_stat_s_axis_c2h_accepted_field_info
+},
+{"C2H_STAT_S_AXIS_WRB_ACCEPTED", 0xa8c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_s_axis_wrb_accepted_field_info),
+	c2h_stat_s_axis_wrb_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_PKT_ACCEPTED", 0xa90,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_pkt_accepted_field_info),
+	c2h_stat_desc_rsp_pkt_accepted_field_info
+},
+{"C2H_STAT_AXIS_PKG_CMP", 0xa94,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_axis_pkg_cmp_field_info),
+	c2h_stat_axis_pkg_cmp_field_info
+},
+{"C2H_STAT_DESC_RSP_ACCEPTED", 0xa98,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_rsp_accepted_field_info),
+	c2h_stat_desc_rsp_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_CMP", 0xa9c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_rsp_cmp_field_info),
+	c2h_stat_desc_rsp_cmp_field_info
+},
+{"C2H_STAT_WRQ_OUT", 0xaa0,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wrq_out_field_info),
+	c2h_stat_wrq_out_field_info
+},
+{"C2H_STAT_WPL_REN_ACCEPTED", 0xaa4,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wpl_ren_accepted_field_info),
+	c2h_stat_wpl_ren_accepted_field_info
+},
+{"C2H_STAT_TOTAL_WRQ_LEN", 0xaa8,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_total_wrq_len_field_info),
+	c2h_stat_total_wrq_len_field_info
+},
+{"C2H_STAT_TOTAL_WPL_LEN", 0xaac,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_total_wpl_len_field_info),
+	c2h_stat_total_wpl_len_field_info
+},
+{"C2H_BUF_SZ", 0xab0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_field_info),
+	c2h_buf_sz_field_info
+},
+{"C2H_ERR_STAT", 0xaf0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_err_stat_field_info),
+	c2h_err_stat_field_info
+},
+{"C2H_ERR_MASK", 0xaf4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_err_mask_field_info),
+	c2h_err_mask_field_info
+},
+{"C2H_FATAL_ERR_STAT", 0xaf8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_stat_field_info),
+	c2h_fatal_err_stat_field_info
+},
+{"C2H_FATAL_ERR_MASK", 0xafc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_mask_field_info),
+	c2h_fatal_err_mask_field_info
+},
+{"C2H_FATAL_ERR_ENABLE", 0xb00,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_enable_field_info),
+	c2h_fatal_err_enable_field_info
+},
+{"GLBL_ERR_INT", 0xb04,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_int_field_info),
+	glbl_err_int_field_info
+},
+{"C2H_PFCH_CFG", 0xb08,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_cfg_field_info),
+	c2h_pfch_cfg_field_info
+},
+{"C2H_PFCH_CFG_1", 0xa80,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_cfg_1_field_info),
+	c2h_pfch_cfg_1_field_info
+},
+{"C2H_PFCH_CFG_2", 0xa84,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_cfg_2_field_info),
+	c2h_pfch_cfg_2_field_info
+},
+{"C2H_INT_TIMER_TICK", 0xb0c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_int_timer_tick_field_info),
+	c2h_int_timer_tick_field_info
+},
+{"C2H_STAT_DESC_RSP_DROP_ACCEPTED", 0xb10,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_drop_accepted_field_info),
+	c2h_stat_desc_rsp_drop_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_ERR_ACCEPTED", 0xb14,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_err_accepted_field_info),
+	c2h_stat_desc_rsp_err_accepted_field_info
+},
+{"C2H_STAT_DESC_REQ", 0xb18,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_req_field_info),
+	c2h_stat_desc_req_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_0", 0xb1c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_0_field_info),
+	c2h_stat_dbg_dma_eng_0_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_1", 0xb20,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_1_field_info),
+	c2h_stat_dbg_dma_eng_1_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_2", 0xb24,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_2_field_info),
+	c2h_stat_dbg_dma_eng_2_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_3", 0xb28,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_3_field_info),
+	c2h_stat_dbg_dma_eng_3_field_info
+},
+{"C2H_DBG_PFCH_ERR_CTXT", 0xb2c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_err_ctxt_field_info),
+	c2h_dbg_pfch_err_ctxt_field_info
+},
+{"C2H_FIRST_ERR_QID", 0xb30,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_first_err_qid_field_info),
+	c2h_first_err_qid_field_info
+},
+{"STAT_NUM_WRB_IN", 0xb34,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_in_field_info),
+	stat_num_wrb_in_field_info
+},
+{"STAT_NUM_WRB_OUT", 0xb38,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_out_field_info),
+	stat_num_wrb_out_field_info
+},
+{"STAT_NUM_WRB_DRP", 0xb3c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_drp_field_info),
+	stat_num_wrb_drp_field_info
+},
+{"STAT_NUM_STAT_DESC_OUT", 0xb40,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_stat_desc_out_field_info),
+	stat_num_stat_desc_out_field_info
+},
+{"STAT_NUM_DSC_CRDT_SENT", 0xb44,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_dsc_crdt_sent_field_info),
+	stat_num_dsc_crdt_sent_field_info
+},
+{"STAT_NUM_FCH_DSC_RCVD", 0xb48,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_fch_dsc_rcvd_field_info),
+	stat_num_fch_dsc_rcvd_field_info
+},
+{"STAT_NUM_BYP_DSC_RCVD", 0xb4c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_byp_dsc_rcvd_field_info),
+	stat_num_byp_dsc_rcvd_field_info
+},
+{"C2H_WRB_COAL_CFG", 0xb50,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_wrb_coal_cfg_field_info),
+	c2h_wrb_coal_cfg_field_info
+},
+{"C2H_INTR_H2C_REQ", 0xb54,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_h2c_req_field_info),
+	c2h_intr_h2c_req_field_info
+},
+{"C2H_INTR_C2H_MM_REQ", 0xb58,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_mm_req_field_info),
+	c2h_intr_c2h_mm_req_field_info
+},
+{"C2H_INTR_ERR_INT_REQ", 0xb5c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_err_int_req_field_info),
+	c2h_intr_err_int_req_field_info
+},
+{"C2H_INTR_C2H_ST_REQ", 0xb60,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_req_field_info),
+	c2h_intr_c2h_st_req_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK", 0xb64,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_ack_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_ack_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL", 0xb68,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_fail_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_fail_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX", 0xb6c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL", 0xb70,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info),
+	c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info
+},
+{"C2H_INTR_C2H_ST_MSIX_ACK", 0xb74,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_msix_ack_field_info),
+	c2h_intr_c2h_st_msix_ack_field_info
+},
+{"C2H_INTR_C2H_ST_MSIX_FAIL", 0xb78,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_msix_fail_field_info),
+	c2h_intr_c2h_st_msix_fail_field_info
+},
+{"C2H_INTR_C2H_ST_NO_MSIX", 0xb7c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_c2h_st_no_msix_field_info),
+	c2h_intr_c2h_st_no_msix_field_info
+},
+{"C2H_INTR_C2H_ST_CTXT_INVAL", 0xb80,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_c2h_st_ctxt_inval_field_info),
+	c2h_intr_c2h_st_ctxt_inval_field_info
+},
+{"C2H_STAT_WR_CMP", 0xb84,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wr_cmp_field_info),
+	c2h_stat_wr_cmp_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_4", 0xb88,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_4_field_info),
+	c2h_stat_dbg_dma_eng_4_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_5", 0xb8c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_5_field_info),
+	c2h_stat_dbg_dma_eng_5_field_info
+},
+{"C2H_DBG_PFCH_QID", 0xb90,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_qid_field_info),
+	c2h_dbg_pfch_qid_field_info
+},
+{"C2H_DBG_PFCH", 0xb94,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_field_info),
+	c2h_dbg_pfch_field_info
+},
+{"C2H_INT_DBG", 0xb98,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_int_dbg_field_info),
+	c2h_int_dbg_field_info
+},
+{"C2H_STAT_IMM_ACCEPTED", 0xb9c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_imm_accepted_field_info),
+	c2h_stat_imm_accepted_field_info
+},
+{"C2H_STAT_MARKER_ACCEPTED", 0xba0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_marker_accepted_field_info),
+	c2h_stat_marker_accepted_field_info
+},
+{"C2H_STAT_DISABLE_CMP_ACCEPTED", 0xba4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_disable_cmp_accepted_field_info),
+	c2h_stat_disable_cmp_accepted_field_info
+},
+{"C2H_PLD_FIFO_CRDT_CNT", 0xba8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pld_fifo_crdt_cnt_field_info),
+	c2h_pld_fifo_crdt_cnt_field_info
+},
+{"C2H_INTR_DYN_REQ", 0xbac,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_dyn_req_field_info),
+	c2h_intr_dyn_req_field_info
+},
+{"C2H_INTR_DYN_MISC", 0xbb0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_dyn_misc_field_info),
+	c2h_intr_dyn_misc_field_info
+},
+{"C2H_DROP_LEN_MISMATCH", 0xbb4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_drop_len_mismatch_field_info),
+	c2h_drop_len_mismatch_field_info
+},
+{"C2H_DROP_DESC_RSP_LEN", 0xbb8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_drop_desc_rsp_len_field_info),
+	c2h_drop_desc_rsp_len_field_info
+},
+{"C2H_DROP_QID_FIFO_LEN", 0xbbc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_drop_qid_fifo_len_field_info),
+	c2h_drop_qid_fifo_len_field_info
+},
+{"C2H_DROP_PLD_CNT", 0xbc0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_drop_pld_cnt_field_info),
+	c2h_drop_pld_cnt_field_info
+},
+{"C2H_CMPT_FORMAT_0", 0xbc4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_0_field_info),
+	c2h_cmpt_format_0_field_info
+},
+{"C2H_CMPT_FORMAT_1", 0xbc8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_1_field_info),
+	c2h_cmpt_format_1_field_info
+},
+{"C2H_CMPT_FORMAT_2", 0xbcc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_2_field_info),
+	c2h_cmpt_format_2_field_info
+},
+{"C2H_CMPT_FORMAT_3", 0xbd0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_3_field_info),
+	c2h_cmpt_format_3_field_info
+},
+{"C2H_CMPT_FORMAT_4", 0xbd4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_4_field_info),
+	c2h_cmpt_format_4_field_info
+},
+{"C2H_CMPT_FORMAT_5", 0xbd8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_5_field_info),
+	c2h_cmpt_format_5_field_info
+},
+{"C2H_CMPT_FORMAT_6", 0xbdc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cmpt_format_6_field_info),
+	c2h_cmpt_format_6_field_info
+},
+{"C2H_PFCH_CACHE_DEPTH", 0xbe0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_cache_depth_field_info),
+	c2h_pfch_cache_depth_field_info
+},
+{"C2H_WRB_COAL_BUF_DEPTH", 0xbe4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_wrb_coal_buf_depth_field_info),
+	c2h_wrb_coal_buf_depth_field_info
+},
+{"C2H_PFCH_CRDT", 0xbe8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_crdt_field_info),
+	c2h_pfch_crdt_field_info
+},
+{"C2H_STAT_HAS_CMPT_ACCEPTED", 0xbec,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_has_cmpt_accepted_field_info),
+	c2h_stat_has_cmpt_accepted_field_info
+},
+{"C2H_STAT_HAS_PLD_ACCEPTED", 0xbf0,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_has_pld_accepted_field_info),
+	c2h_stat_has_pld_accepted_field_info
+},
+{"C2H_PLD_PKT_ID", 0xbf4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pld_pkt_id_field_info),
+	c2h_pld_pkt_id_field_info
+},
+{"C2H_PLD_PKT_ID_1", 0xbf8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pld_pkt_id_1_field_info),
+	c2h_pld_pkt_id_1_field_info
+},
+{"C2H_DROP_PLD_CNT_1", 0xbfc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_drop_pld_cnt_1_field_info),
+	c2h_drop_pld_cnt_1_field_info
+},
+{"H2C_ERR_STAT", 0xe00,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_err_stat_field_info),
+	h2c_err_stat_field_info
+},
+{"H2C_ERR_MASK", 0xe04,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_err_mask_field_info),
+	h2c_err_mask_field_info
+},
+{"H2C_FIRST_ERR_QID", 0xe08,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_first_err_qid_field_info),
+	h2c_first_err_qid_field_info
+},
+{"H2C_DBG_REG0", 0xe0c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg0_field_info),
+	h2c_dbg_reg0_field_info
+},
+{"H2C_DBG_REG1", 0xe10,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg1_field_info),
+	h2c_dbg_reg1_field_info
+},
+{"H2C_DBG_REG2", 0xe14,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg2_field_info),
+	h2c_dbg_reg2_field_info
+},
+{"H2C_DBG_REG3", 0xe18,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg3_field_info),
+	h2c_dbg_reg3_field_info
+},
+{"H2C_DBG_REG4", 0xe1c,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg4_field_info),
+	h2c_dbg_reg4_field_info
+},
+{"H2C_FATAL_ERR_EN", 0xe20,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_fatal_err_en_field_info),
+	h2c_fatal_err_en_field_info
+},
+{"H2C_REQ_THROT_PCIE", 0xe24,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_req_throt_pcie_field_info),
+	h2c_req_throt_pcie_field_info
+},
+{"H2C_ALN_DBG_REG0", 0xe28,
+	1, 0, 0, 0,
+	1, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_aln_dbg_reg0_field_info),
+	h2c_aln_dbg_reg0_field_info
+},
+{"H2C_REQ_THROT_AXIMM", 0xe2c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_req_throt_aximm_field_info),
+	h2c_req_throt_aximm_field_info
+},
+{"C2H_MM_CTL", 0x1004,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_ctl_field_info),
+	c2h_mm_ctl_field_info
+},
+{"C2H_MM_STATUS", 0x1040,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_status_field_info),
+	c2h_mm_status_field_info
+},
+{"C2H_MM_CMPL_DESC_CNT", 0x1048,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_cmpl_desc_cnt_field_info),
+	c2h_mm_cmpl_desc_cnt_field_info
+},
+{"C2H_MM_ERR_CODE_ENABLE_MASK", 0x1054,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_code_enable_mask_field_info),
+	c2h_mm_err_code_enable_mask_field_info
+},
+{"C2H_MM_ERR_CODE", 0x1058,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_code_field_info),
+	c2h_mm_err_code_field_info
+},
+{"C2H_MM_ERR_INFO", 0x105c,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_info_field_info),
+	c2h_mm_err_info_field_info
+},
+{"C2H_MM_PERF_MON_CTL", 0x10c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_ctl_field_info),
+	c2h_mm_perf_mon_ctl_field_info
+},
+{"C2H_MM_PERF_MON_CYCLE_CNT0", 0x10c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_cycle_cnt0_field_info),
+	c2h_mm_perf_mon_cycle_cnt0_field_info
+},
+{"C2H_MM_PERF_MON_CYCLE_CNT1", 0x10c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_cycle_cnt1_field_info),
+	c2h_mm_perf_mon_cycle_cnt1_field_info
+},
+{"C2H_MM_PERF_MON_DATA_CNT0", 0x10cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_data_cnt0_field_info),
+	c2h_mm_perf_mon_data_cnt0_field_info
+},
+{"C2H_MM_PERF_MON_DATA_CNT1", 0x10d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_data_cnt1_field_info),
+	c2h_mm_perf_mon_data_cnt1_field_info
+},
+{"C2H_MM_DBG", 0x10e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_dbg_field_info),
+	c2h_mm_dbg_field_info
+},
+{"H2C_MM_CTL", 0x1204,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_ctl_field_info),
+	h2c_mm_ctl_field_info
+},
+{"H2C_MM_STATUS", 0x1240,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_status_field_info),
+	h2c_mm_status_field_info
+},
+{"H2C_MM_CMPL_DESC_CNT", 0x1248,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_cmpl_desc_cnt_field_info),
+	h2c_mm_cmpl_desc_cnt_field_info
+},
+{"H2C_MM_ERR_CODE_ENABLE_MASK", 0x1254,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_code_enable_mask_field_info),
+	h2c_mm_err_code_enable_mask_field_info
+},
+{"H2C_MM_ERR_CODE", 0x1258,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_code_field_info),
+	h2c_mm_err_code_field_info
+},
+{"H2C_MM_ERR_INFO", 0x125c,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_info_field_info),
+	h2c_mm_err_info_field_info
+},
+{"H2C_MM_PERF_MON_CTL", 0x12c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_ctl_field_info),
+	h2c_mm_perf_mon_ctl_field_info
+},
+{"H2C_MM_PERF_MON_CYCLE_CNT0", 0x12c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_cycle_cnt0_field_info),
+	h2c_mm_perf_mon_cycle_cnt0_field_info
+},
+{"H2C_MM_PERF_MON_CYCLE_CNT1", 0x12c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_cycle_cnt1_field_info),
+	h2c_mm_perf_mon_cycle_cnt1_field_info
+},
+{"H2C_MM_PERF_MON_DATA_CNT0", 0x12cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_data_cnt0_field_info),
+	h2c_mm_perf_mon_data_cnt0_field_info
+},
+{"H2C_MM_PERF_MON_DATA_CNT1", 0x12d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_data_cnt1_field_info),
+	h2c_mm_perf_mon_data_cnt1_field_info
+},
+{"H2C_MM_DBG", 0x12e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_dbg_field_info),
+	h2c_mm_dbg_field_info
+},
+{"C2H_CRDT_COAL_CFG_1", 0x1400,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_crdt_coal_cfg_1_field_info),
+	c2h_crdt_coal_cfg_1_field_info
+},
+{"C2H_CRDT_COAL_CFG_2", 0x1404,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_crdt_coal_cfg_2_field_info),
+	c2h_crdt_coal_cfg_2_field_info
+},
+{"C2H_PFCH_BYP_QID", 0x1408,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_byp_qid_field_info),
+	c2h_pfch_byp_qid_field_info
+},
+{"C2H_PFCH_BYP_TAG", 0x140c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_byp_tag_field_info),
+	c2h_pfch_byp_tag_field_info
+},
+{"C2H_WATER_MARK", 0x1500,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_water_mark_field_info),
+	c2h_water_mark_field_info
+},
+
+};
+
+uint32_t eqdma_config_num_regs_get(void)
+{
+	return (sizeof(eqdma_config_regs) /
+		sizeof(eqdma_config_regs[0]));
+}
+
+struct xreg_info *eqdma_config_regs_get(void)
+{
+	return eqdma_config_regs;
+}
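+
+/*
+ * Illustrative pairing of the two accessors above (a sketch, not part
+ * of the driver flow; .addr is the offset column of the table above):
+ *
+ *	struct xreg_info *regs = eqdma_config_regs_get();
+ *	uint32_t i, num_regs = eqdma_config_num_regs_get();
+ *
+ *	for (i = 0; i < num_regs; i++)
+ *		(void)qdma_reg_read(dev_hndl, regs[i].addr);
+ */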
diff --git a/drivers/net/qdma/qdma_access/qdma_access_common.c b/drivers/net/qdma/qdma_access/qdma_access_common.c
new file mode 100644
index 0000000000..a86ef14651
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_access_common.c
@@ -0,0 +1,1271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_access_common.h"
+#include "qdma_platform.h"
+#include "qdma_soft_reg.h"
+#include "qdma_soft_access.h"
+#include "qdma_s80_hard_access.h"
+#include "eqdma_soft_access.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "qdma_access_common.tmh"
+#endif
+
+/* qdma version info */
+#define RTL_BASE_VERSION                        2
+#define RTL_PATCH_VERSION                       3
+
+/**
+ * enum qdma_ip - To hold ip type
+ */
+enum qdma_ip {
+	QDMA_OR_VERSAL_IP,
+	EQDMA_IP
+};
+
+
+/*
+ * hw_monitor_reg() - poll a register repeatedly until
+ *	(the register value & mask) == val or the timeout expires
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the register value never matched,
+ * 0 otherwise
+ */
+int hw_monitor_reg(void *dev_hndl, uint32_t reg, uint32_t mask,
+		uint32_t val, uint32_t interval_us, uint32_t timeout_us)
+{
+	int count;
+	uint32_t v;
+
+	if (!interval_us)
+		interval_us = QDMA_REG_POLL_DFLT_INTERVAL_US;
+	if (!timeout_us)
+		timeout_us = QDMA_REG_POLL_DFLT_TIMEOUT_US;
+
+	count = timeout_us / interval_us;
+
+	do {
+		v = qdma_reg_read(dev_hndl, reg);
+		if ((v & mask) == val)
+			return QDMA_SUCCESS;
+		qdma_udelay(interval_us);
+	} while (--count);
+
+	v = qdma_reg_read(dev_hndl, reg);
+	if ((v & mask) == val)
+		return QDMA_SUCCESS;
+
+	qdma_log_error("%s: Reg read=%u Expected=%u, err:%d\n",
+				   __func__, v, val,
+				   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+	return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+}
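+
+/*
+ * Illustrative use (a sketch only; the constants are the defaults
+ * referenced in this file): wait for the FLR status bit to clear,
+ * polling at the default interval:
+ *
+ *	rv = hw_monitor_reg(dev_hndl, QDMA_OFFSET_PF_REG_FLR_STATUS,
+ *			QDMA_FLR_STATUS_MASK, 0,
+ *			QDMA_REG_POLL_DFLT_INTERVAL_US,
+ *			QDMA_REG_POLL_DFLT_TIMEOUT_US);
+ */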
+
+/*****************************************************************************/
+/**
+ * qdma_get_rtl_version() - Function to get the rtl_version in
+ * string format
+ *
+ * @rtl_version: RTL version
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+static const char *qdma_get_rtl_version(enum qdma_rtl_version rtl_version)
+{
+	switch (rtl_version) {
+	case QDMA_RTL_PATCH:
+		return "RTL Patch";
+	case QDMA_RTL_BASE:
+		return "RTL Base";
+	default:
+		qdma_log_error("%s: invalid rtl_version(%d), err:%d\n",
+				__func__, rtl_version, -QDMA_ERR_INV_PARAM);
+		return NULL;
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_ip_type() - Function to get the ip type in string format
+ *
+ * @ip_type: IP Type
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+static const char *qdma_get_ip_type(enum qdma_ip_type ip_type)
+{
+	switch (ip_type) {
+	case QDMA_VERSAL_HARD_IP:
+		return "Versal Hard IP";
+	case QDMA_VERSAL_SOFT_IP:
+		return "Versal Soft IP";
+	case QDMA_SOFT_IP:
+		return "QDMA Soft IP";
+	case EQDMA_SOFT_IP:
+		return "EQDMA Soft IP";
+	default:
+		qdma_log_error("%s: invalid ip type(%d), err:%d\n",
+				__func__, ip_type, -QDMA_ERR_INV_PARAM);
+		return NULL;
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_device_type() - Function to get the device type in
+ * string format
+ *
+ * @device_type: Device Type
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+static const char *qdma_get_device_type(enum qdma_device_type device_type)
+{
+	switch (device_type) {
+	case QDMA_DEVICE_SOFT:
+		return "Soft IP";
+	case QDMA_DEVICE_VERSAL:
+		return "Versal S80 Hard IP";
+	default:
+		qdma_log_error("%s: invalid device type(%d), err:%d\n",
+				__func__, device_type, -QDMA_ERR_INV_PARAM);
+		return NULL;
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_vivado_release_id() - Function to get the vivado release id in
+ * string format
+ *
+ * @vivado_release_id: Vivado release ID
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+static const char *qdma_get_vivado_release_id
+				(enum qdma_vivado_release_id vivado_release_id)
+{
+	switch (vivado_release_id) {
+	case QDMA_VIVADO_2018_3:
+		return "vivado 2018.3";
+	case QDMA_VIVADO_2019_1:
+		return "vivado 2019.1";
+	case QDMA_VIVADO_2019_2:
+		return "vivado 2019.2";
+	case QDMA_VIVADO_2020_1:
+		return "vivado 2020.1";
+	case QDMA_VIVADO_2020_2:
+		return "vivado 2020.2";
+	default:
+		qdma_log_error("%s: invalid vivado_release_id(%d), err:%d\n",
+				__func__,
+				vivado_release_id,
+				-QDMA_ERR_INV_PARAM);
+		return NULL;
+	}
+}
+
+
+void qdma_write_csr_values(void *dev_hndl, uint32_t reg_offst,
+		uint32_t idx, uint32_t cnt, const uint32_t *values)
+{
+	uint32_t index, reg_addr;
+
+	for (index = idx; index < (idx + cnt); index++) {
+		reg_addr = reg_offst + (index * sizeof(uint32_t));
+		qdma_reg_write(dev_hndl, reg_addr, values[index - idx]);
+	}
+}
+
+void qdma_read_csr_values(void *dev_hndl, uint32_t reg_offst,
+		uint32_t idx, uint32_t cnt, uint32_t *values)
+{
+	uint32_t index, reg_addr;
+
+	reg_addr = reg_offst + (idx * sizeof(uint32_t));
+	for (index = 0; index < cnt; index++) {
+		values[index] = qdma_reg_read(dev_hndl, reg_addr +
+					      (index * sizeof(uint32_t)));
+	}
+}
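+
+/*
+ * The two CSR helpers above walk a block of consecutive 32-bit
+ * registers starting at reg_offst + idx * sizeof(uint32_t).
+ * Illustrative sketch (values are hypothetical):
+ *
+ *	uint32_t csr[4];
+ *
+ *	qdma_read_csr_values(dev_hndl, reg_offst, 2, 4, csr);
+ *	// csr[0..3] now hold the registers at indexes 2..5
+ */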
+
+void qdma_fetch_version_details(uint8_t is_vf, uint32_t version_reg_val,
+		struct qdma_hw_version_info *version_info)
+{
+	uint32_t rtl_version, vivado_release_id, ip_type, device_type;
+	const char *version_str;
+
+	if (!is_vf) {
+		rtl_version = FIELD_GET(QDMA_GLBL2_RTL_VERSION_MASK,
+				version_reg_val);
+		vivado_release_id =
+			FIELD_GET(QDMA_GLBL2_VIVADO_RELEASE_MASK,
+					version_reg_val);
+		device_type = FIELD_GET(QDMA_GLBL2_DEVICE_ID_MASK,
+				version_reg_val);
+		ip_type = FIELD_GET(QDMA_GLBL2_VERSAL_IP_MASK,
+				version_reg_val);
+	} else {
+		rtl_version =
+			FIELD_GET(QDMA_GLBL2_VF_RTL_VERSION_MASK,
+					version_reg_val);
+		vivado_release_id =
+			FIELD_GET(QDMA_GLBL2_VF_VIVADO_RELEASE_MASK,
+					version_reg_val);
+		device_type = FIELD_GET(QDMA_GLBL2_VF_DEVICE_ID_MASK,
+				version_reg_val);
+		ip_type =
+			FIELD_GET(QDMA_GLBL2_VF_VERSAL_IP_MASK,
+					version_reg_val);
+	}
+
+	switch (rtl_version) {
+	case 0:
+		version_info->rtl_version = QDMA_RTL_BASE;
+		break;
+	case 1:
+		version_info->rtl_version = QDMA_RTL_PATCH;
+		break;
+	default:
+		version_info->rtl_version = QDMA_RTL_NONE;
+		break;
+	}
+
+	version_str = qdma_get_rtl_version(version_info->rtl_version);
+	if (version_str != NULL)
+		qdma_strncpy(version_info->qdma_rtl_version_str,
+				version_str,
+				QDMA_HW_VERSION_STRING_LEN);
+
+	switch (device_type) {
+	case 0:
+		version_info->device_type = QDMA_DEVICE_SOFT;
+		break;
+	case 1:
+		version_info->device_type = QDMA_DEVICE_VERSAL;
+		break;
+	default:
+		version_info->device_type = QDMA_DEVICE_NONE;
+		break;
+	}
+
+	version_str = qdma_get_device_type(version_info->device_type);
+	if (version_str != NULL)
+		qdma_strncpy(version_info->qdma_device_type_str,
+				version_str,
+				QDMA_HW_VERSION_STRING_LEN);
+
+
+	if (version_info->device_type == QDMA_DEVICE_SOFT) {
+		switch (ip_type) {
+		case 0:
+			version_info->ip_type = QDMA_SOFT_IP;
+			break;
+		case 1:
+			version_info->ip_type = EQDMA_SOFT_IP;
+			break;
+		default:
+			version_info->ip_type = QDMA_NONE_IP;
+		}
+	} else {
+		switch (ip_type) {
+		case 0:
+			version_info->ip_type = QDMA_VERSAL_HARD_IP;
+			break;
+		case 1:
+			version_info->ip_type = QDMA_VERSAL_SOFT_IP;
+			break;
+		default:
+			version_info->ip_type = QDMA_NONE_IP;
+		}
+	}
+
+	version_str = qdma_get_ip_type(version_info->ip_type);
+	if (version_str != NULL)
+		qdma_strncpy(version_info->qdma_ip_type_str,
+			version_str,
+			QDMA_HW_VERSION_STRING_LEN);
+
+	if (version_info->ip_type == QDMA_SOFT_IP) {
+		switch (vivado_release_id) {
+		case 0:
+			version_info->vivado_release = QDMA_VIVADO_2018_3;
+			break;
+		case 1:
+			version_info->vivado_release = QDMA_VIVADO_2019_1;
+			break;
+		case 2:
+			version_info->vivado_release = QDMA_VIVADO_2019_2;
+			break;
+		default:
+			version_info->vivado_release = QDMA_VIVADO_NONE;
+			break;
+		}
+	} else if (version_info->ip_type == EQDMA_SOFT_IP) {
+		switch (vivado_release_id) {
+		case 0:
+			version_info->vivado_release = QDMA_VIVADO_2020_1;
+			break;
+		case 1:
+			version_info->vivado_release = QDMA_VIVADO_2020_2;
+			break;
+		default:
+			version_info->vivado_release = QDMA_VIVADO_NONE;
+			break;
+		}
+	} else { /* Versal case */
+		switch (vivado_release_id) {
+		case 0:
+			version_info->vivado_release = QDMA_VIVADO_2019_2;
+			break;
+		default:
+			version_info->vivado_release = QDMA_VIVADO_NONE;
+			break;
+		}
+	}
+
+	version_str = qdma_get_vivado_release_id
+			(version_info->vivado_release);
+	if (version_str != NULL)
+		qdma_strncpy(version_info->qdma_vivado_release_id_str,
+				version_str,
+				QDMA_HW_VERSION_STRING_LEN);
+}
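+
+/*
+ * Example mapping (taken from the switches above): a PF whose version
+ * register decodes to rtl_version 0, device_type 0 and ip_type 1 is
+ * reported as QDMA_RTL_BASE / QDMA_DEVICE_SOFT / EQDMA_SOFT_IP
+ * ("EQDMA Soft IP").
+ */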
+
+
+/*
+ * dump_reg() - Helper function to dump register value into string
+ *
+ * return len - length of the string copied into buffer
+ */
+int dump_reg(char *buf, int buf_sz, uint32_t raddr,
+		const char *rname, uint32_t rval)
+{
+	/* Each line should be a minimum of 80 chars.
+	 * If the print pattern below is changed, re-check
+	 * the buffer size requirement.
+	 */
+	if (buf_sz < DEBGFS_LINE_SZ) {
+		qdma_log_error("%s: buf_sz(%d) < expected(%d): err: %d\n",
+						__func__,
+						buf_sz, DEBGFS_LINE_SZ,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return QDMA_SNPRINTF_S(buf, buf_sz, DEBGFS_LINE_SZ,
+			"[%#7x] %-47s %#-10x %u\n",
+			raddr, rname, rval, rval);
+}
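+
+/*
+ * With the format string above, a dumped line looks like the following
+ * (illustrative values):
+ *
+ *	[  0x248] GLBL_ERR_STAT                                   0x0        0
+ */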
+
+void qdma_memset(void *to, uint8_t val, uint32_t size)
+{
+	uint32_t i;
+	uint8_t *_to = (uint8_t *)to;
+
+	for (i = 0; i < size; i++)
+		_to[i] = val;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_queue_cmpt_cidx_read() - function to read the CMPT CIDX register
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @reg_info:	pointer to array to hold the values read
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_queue_cmpt_cidx_read(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, struct qdma_q_cmpt_cidx_reg_info *reg_info)
+{
+	uint32_t reg_val = 0;
+	uint32_t reg_addr = (is_vf) ? QDMA_OFFSET_VF_DMAP_SEL_CMPT_CIDX :
+			QDMA_OFFSET_DMAP_SEL_CMPT_CIDX;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+
+	reg_addr += qid * QDMA_CMPT_CIDX_STEP;
+
+	reg_val = qdma_reg_read(dev_hndl, reg_addr);
+
+	reg_info->wrb_cidx =
+		FIELD_GET(QDMA_DMAP_SEL_CMPT_WRB_CIDX_MASK, reg_val);
+	reg_info->counter_idx =
+		(uint8_t)(FIELD_GET(QDMA_DMAP_SEL_CMPT_CNT_THRESH_MASK,
+			reg_val));
+	reg_info->wrb_en =
+		(uint8_t)(FIELD_GET(QDMA_DMAP_SEL_CMPT_STS_DESC_EN_MASK,
+			reg_val));
+	reg_info->irq_en =
+		(uint8_t)(FIELD_GET(QDMA_DMAP_SEL_CMPT_IRQ_EN_MASK, reg_val));
+	reg_info->timer_idx =
+		(uint8_t)(FIELD_GET(QDMA_DMAP_SEL_CMPT_TMR_CNT_MASK, reg_val));
+	reg_info->trig_mode =
+		(uint8_t)(FIELD_GET(QDMA_DMAP_SEL_CMPT_TRG_MODE_MASK, reg_val));
+
+	return QDMA_SUCCESS;
+}
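+
+/*
+ * Note: the write-side counterpart, qdma_queue_cmpt_cidx_update(),
+ * packs the same fields back into this register; both handlers are
+ * exported through the qdma_hw_access table set up in
+ * qdma_hw_access_init() below.
+ */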
+
+
+/*****************************************************************************/
+/**
+ * qdma_initiate_flr() - function to initiate Function Level Reset
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_initiate_flr(void *dev_hndl, uint8_t is_vf)
+{
+	uint32_t reg_addr = (is_vf) ?  QDMA_OFFSET_VF_REG_FLR_STATUS :
+			QDMA_OFFSET_PF_REG_FLR_STATUS;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_reg_write(dev_hndl, reg_addr, 1);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_is_flr_done() - function to check whether the FLR is done or not
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @done:	set to 1 if the FLR process has completed, 0 otherwise
+ *
+ * Return:   0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_is_flr_done(void *dev_hndl, uint8_t is_vf, uint8_t *done)
+{
+	int rv;
+	uint32_t reg_addr = (is_vf) ?  QDMA_OFFSET_VF_REG_FLR_STATUS :
+			QDMA_OFFSET_PF_REG_FLR_STATUS;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!done) {
+		qdma_log_error("%s: done is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	/* wait for it to become zero */
+	rv = hw_monitor_reg(dev_hndl, reg_addr, QDMA_FLR_STATUS_MASK,
+			0, 5 * QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US);
+	if (rv < 0)
+		*done = 0;
+	else
+		*done = 1;
+
+	return QDMA_SUCCESS;
+}
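+
+/*
+ * Typical FLR sequence (a sketch; both helpers are exported via the
+ * qdma_hw_access table below):
+ *
+ *	uint8_t done = 0;
+ *
+ *	qdma_initiate_flr(dev_hndl, is_vf);
+ *	qdma_is_flr_done(dev_hndl, is_vf, &done);
+ *	if (!done)
+ *		... FLR did not complete within the poll timeout ...
+ */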
+
+/*****************************************************************************/
+/**
+ * qdma_is_config_bar() - function for the config bar verification
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_is_config_bar(void *dev_hndl, uint8_t is_vf, enum qdma_ip *ip)
+{
+	uint32_t reg_val = 0;
+	uint32_t reg_addr = (is_vf) ? QDMA_OFFSET_VF_VERSION :
+			QDMA_OFFSET_CONFIG_BLOCK_ID;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, reg_addr);
+
+	/** TODO: The version register for VFs is 0x5014 for EQDMA and
+	 *  0x1014 for QDMA/Versal. Read 0x5014 first for all devices
+	 *  and, based on the upper 16-bit value (i.e. 0x1fd3), determine
+	 *  whether the device is an EQDMA VF or a QDMA/Versal VF.
+	 *  This logic needs to be reworked once the hardware team
+	 *  provides a common register for VFs.
+	 */
+	if (is_vf) {
+		if (FIELD_GET(QDMA_GLBL2_VF_UNIQUE_ID_MASK, reg_val)
+				!= QDMA_MAGIC_NUMBER) {
+			/* Magic number not found: EQDMA VF */
+			*ip = EQDMA_IP;
+			reg_addr = EQDMA_OFFSET_VF_VERSION;
+			reg_val = qdma_reg_read(dev_hndl, reg_addr);
+		} else {
+			*ip = QDMA_OR_VERSAL_IP;
+			return QDMA_SUCCESS;
+		}
+	}
+
+	if (FIELD_GET(QDMA_CONFIG_BLOCK_ID_MASK, reg_val)
+			!= QDMA_MAGIC_NUMBER) {
+		qdma_log_error("%s: Invalid config bar, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_INV_CONFIG_BAR);
+		return -QDMA_ERR_HWACC_INV_CONFIG_BAR;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_acc_reg_dump_buf_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, int *buflen)
+{
+	uint32_t len = 0;
+	int rv = 0;
+
+	*buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		len = qdma_soft_reg_dump_buf_len();
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		len = qdma_s80_hard_reg_dump_buf_len();
+		break;
+	case EQDMA_SOFT_IP:
+		len = eqdma_reg_dump_buf_len();
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	*buflen = (int)len;
+	return rv;
+}
+
+int qdma_acc_reg_info_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, int *buflen, int *num_regs)
+{
+	uint32_t len = 0;
+	int rv = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buflen) {
+		qdma_log_error("%s: buflen is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!num_regs) {
+		qdma_log_error("%s: num_regs is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	*buflen = 0;
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		len = 0;
+		*num_regs = 0;
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		len = qdma_s80_hard_reg_dump_buf_len();
+		*num_regs = (int)((len / REG_DUMP_SIZE_PER_LINE) - 1);
+		break;
+	case EQDMA_SOFT_IP:
+		len = eqdma_reg_dump_buf_len();
+		*num_regs = (int)((len / REG_DUMP_SIZE_PER_LINE) - 1);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	*buflen = (int)len;
+	return rv;
+}
+
+int qdma_acc_context_buf_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen)
+{
+	int rv = 0;
+
+	*buflen = 0;
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv = qdma_soft_context_buf_len(st, q_type, buflen);
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_context_buf_len(st, q_type, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_context_buf_len(st, q_type, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
+
+int qdma_acc_get_num_config_regs(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint32_t *num_regs)
+{
+	int rv = 0;
+
+	*num_regs = 0;
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv = qdma_get_config_num_regs();
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_get_config_num_regs();
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_get_config_num_regs();
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	*num_regs = rv;
+
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_acc_get_config_regs() - Function to get qdma config registers.
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @ip_type:	QDMA IP Type
+ * @reg_data:   pointer to register data to be filled
+ *
+ * Return:	0 - success and < 0 - failure
+ *****************************************************************************/
+int qdma_acc_get_config_regs(void *dev_hndl, uint8_t is_vf,
+		enum qdma_ip_type ip_type,
+		uint32_t *reg_data)
+{
+	struct xreg_info *reg_info;
+	uint32_t count = 0;
+	uint32_t num_regs;
+	int rv = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (is_vf) {
+		qdma_log_error("%s: Get Config regs not valid for VF, err:%d\n",
+			__func__,
+			-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (reg_data == NULL) {
+		qdma_log_error("%s: reg_data is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		num_regs = qdma_get_config_num_regs();
+		reg_info = qdma_get_config_regs();
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		num_regs = qdma_s80_hard_get_config_num_regs();
+		reg_info = qdma_s80_hard_get_config_regs();
+		break;
+	case EQDMA_SOFT_IP:
+		num_regs = eqdma_get_config_num_regs();
+		reg_info = eqdma_get_config_regs();
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	for (count = 0; count < num_regs - 1; count++) {
+		reg_data[count] = qdma_reg_read(dev_hndl,
+				reg_info[count].addr);
+	}
+
+	return rv;
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_config_regs() - Function to get qdma config register dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @ip_type:	QDMA IP Type
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written to the buffer on success and < 0 on failure
+ *****************************************************************************/
+int qdma_acc_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		enum qdma_ip_type ip_type,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv =  qdma_soft_dump_config_regs(dev_hndl, is_vf,
+				buf, buflen);
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_dump_config_regs(dev_hndl, is_vf,
+				buf, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_dump_config_regs(dev_hndl, is_vf,
+				buf, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
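+
+/*
+ * Illustrative call sequence (a sketch; the buffer is caller-owned and
+ * qdma_calloc() is assumed to be the allocator from qdma_platform.h):
+ *
+ *	int len = 0;
+ *	char *buf;
+ *
+ *	qdma_acc_reg_dump_buf_len(dev_hndl, ip_type, &len);
+ *	buf = qdma_calloc(1, len);
+ *	qdma_acc_dump_config_regs(dev_hndl, 0, ip_type, buf, len);
+ */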
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_reg_info() - Function to dump the fields of
+ * a specified register.
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP Type
+ * @reg_addr:   Register address
+ * @num_regs:   Number of registers to dump
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written to the buffer on success and < 0 on failure
+ *****************************************************************************/
+int qdma_acc_dump_reg_info(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint32_t reg_addr,
+		uint32_t num_regs, char *buf, uint32_t buflen)
+{
+	int rv = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf || !buflen) {
+		qdma_log_error("%s: Invalid input buffer, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		QDMA_SNPRINTF_S(buf, buflen, DEBGFS_LINE_SZ,
+		"QDMA reg field info not supported for QDMA_SOFT_IP\n");
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_dump_reg_info(dev_hndl, reg_addr,
+				num_regs, buf, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_dump_reg_info(dev_hndl, reg_addr,
+				num_regs, buf, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_queue_context() - Function to get qdma queue context dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP Type
+ * @st:		Queue Mode (ST or MM)
+ * @q_type:	Queue Type
+ * @ctxt_data:  Context Data
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written to the buffer on success and < 0 on failure
+ *****************************************************************************/
+int qdma_acc_dump_queue_context(void *dev_hndl,
+		enum qdma_ip_type ip_type,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv = qdma_soft_dump_queue_context(dev_hndl,
+				st, q_type, ctxt_data, buf, buflen);
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_dump_queue_context(dev_hndl,
+				st, q_type, ctxt_data, buf, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_dump_queue_context(dev_hndl,
+				st, q_type, ctxt_data, buf, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
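+
+/*
+ * Illustrative pairing with qdma_acc_context_buf_len() above (a
+ * sketch): size the buffer for the chosen queue mode and type first,
+ * then dump the previously read context:
+ *
+ *	uint32_t len = 0;
+ *
+ *	qdma_acc_context_buf_len(dev_hndl, ip_type, st, q_type, &len);
+ *	... allocate len bytes into buf ...
+ *	qdma_acc_dump_queue_context(dev_hndl, ip_type, st, q_type,
+ *			&ctxt_data, buf, len);
+ */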
+
+/*****************************************************************************/
+/**
+ * qdma_acc_read_dump_queue_context() - Function to read and dump the queue
+ * context in the user-provided buffer. This API is valid only for PF and
+ * should not be used for VFs. For VFs, use the qdma_dump_queue_context() API
+ * after reading the context through the mailbox.
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP type
+ * @qid_hw:     queue id
+ * @st:		Queue Mode (ST or MM)
+ * @q_type:	Queue type (H2C/C2H/CMPT)
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written to the buffer on success and < 0 on failure
+ *****************************************************************************/
+int qdma_acc_read_dump_queue_context(void *dev_hndl,
+				enum qdma_ip_type ip_type,
+				uint16_t qid_hw,
+				uint8_t st,
+				enum qdma_dev_q_type q_type,
+				char *buf, uint32_t buflen)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv = qdma_soft_read_dump_queue_context(dev_hndl,
+				qid_hw, st, q_type, buf, buflen);
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_read_dump_queue_context(dev_hndl,
+				qid_hw, st, q_type, buf, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_read_dump_queue_context(dev_hndl,
+				qid_hw, st, q_type, buf, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_config_reg_list() - Dump the registers
+ *
+ * @dev_hndl:		device handle
+ * @ip_type:		QDMA ip type
+ * @num_regs :		Max registers to read
+ * @reg_list :		array of reg addr and reg values
+ * @buf :		pointer to buffer to be filled
+ * @buflen :		Length of the buffer
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int qdma_acc_dump_config_reg_list(void *dev_hndl,
+		enum qdma_ip_type ip_type,
+		uint32_t num_regs,
+		struct qdma_reg_data *reg_list,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+
+	switch (ip_type) {
+	case QDMA_SOFT_IP:
+		rv = qdma_soft_dump_config_reg_list(dev_hndl,
+				num_regs,
+				reg_list, buf, buflen);
+		break;
+	case QDMA_VERSAL_HARD_IP:
+		rv = qdma_s80_hard_dump_config_reg_list(dev_hndl,
+				num_regs,
+				reg_list, buf, buflen);
+		break;
+	case EQDMA_SOFT_IP:
+		rv = eqdma_dump_config_reg_list(dev_hndl,
+				num_regs,
+				reg_list, buf, buflen);
+		break;
+	default:
+		qdma_log_error("%s: Invalid version number, err = %d",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return rv;
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_get_function_number() - Function to get the function number
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	pointer to hold the function id
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_get_function_number(void *dev_hndl, uint8_t *func_id)
+{
+	if (!dev_hndl || !func_id) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	*func_id = (uint8_t)qdma_reg_read(dev_hndl,
+			QDMA_OFFSET_GLBL2_CHANNEL_FUNC_RET);
+
+	return QDMA_SUCCESS;
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_hw_error_intr_setup() - Function to set up the qdma error
+ * interrupt
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	Function id
+ * @err_intr_index:	Interrupt vector
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_hw_error_intr_setup(void *dev_hndl, uint16_t func_id,
+		uint8_t err_intr_index)
+{
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val =
+		FIELD_SET(QDMA_GLBL_ERR_FUNC_MASK, func_id) |
+		FIELD_SET(QDMA_GLBL_ERR_VEC_MASK, err_intr_index);
+
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_ERR_INT, reg_val);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_error_intr_rearm() - Function to re-arm the error interrupt
+ *
+ * @dev_hndl: device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_hw_error_intr_rearm(void *dev_hndl)
+{
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_ERR_INT);
+	reg_val |= FIELD_SET(QDMA_GLBL_ERR_ARM_MASK, 1);
+
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_ERR_INT, reg_val);
+
+	return QDMA_SUCCESS;
+}
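+
+/*
+ * Note: qdma_hw_error_intr_setup() programs the function/vector pair
+ * once, while qdma_hw_error_intr_rearm() sets the ARM bit and is
+ * expected to be called again after each serviced error interrupt
+ * (behaviour assumed from the register usage above).
+ */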
+
+/*****************************************************************************/
+/**
+ * qdma_get_error_code() - function to get the qdma access mapped
+ *				error code
+ *
+ * @acc_err_code: qdma access error code
+ *
+ * Return:   returns the platform specific error code
+ *****************************************************************************/
+int qdma_get_error_code(int acc_err_code)
+{
+	return qdma_get_err_code(acc_err_code);
+}
+
+int qdma_hw_access_init(void *dev_hndl, uint8_t is_vf,
+				struct qdma_hw_access *hw_access)
+{
+	int rv = QDMA_SUCCESS;
+	enum qdma_ip ip = EQDMA_IP;
+
+	struct qdma_hw_version_info version_info;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!hw_access) {
+		qdma_log_error("%s: hw_access is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_is_config_bar(dev_hndl, is_vf, &ip);
+	if (rv != QDMA_SUCCESS) {
+		qdma_log_error("%s: config bar passed is INVALID, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return rv;
+	}
+
+	qdma_memset(hw_access, 0, sizeof(struct qdma_hw_access));
+
+	if (ip == EQDMA_IP)
+		hw_access->qdma_get_version = &eqdma_get_version;
+	else
+		hw_access->qdma_get_version = &qdma_get_version;
+	hw_access->qdma_init_ctxt_memory = &qdma_init_ctxt_memory;
+	hw_access->qdma_fmap_conf = &qdma_fmap_conf;
+	hw_access->qdma_sw_ctx_conf = &qdma_sw_ctx_conf;
+	hw_access->qdma_pfetch_ctx_conf = &qdma_pfetch_ctx_conf;
+	hw_access->qdma_cmpt_ctx_conf = &qdma_cmpt_ctx_conf;
+	hw_access->qdma_hw_ctx_conf = &qdma_hw_ctx_conf;
+	hw_access->qdma_credit_ctx_conf = &qdma_credit_ctx_conf;
+	hw_access->qdma_indirect_intr_ctx_conf = &qdma_indirect_intr_ctx_conf;
+	hw_access->qdma_set_default_global_csr = &qdma_set_default_global_csr;
+	hw_access->qdma_global_csr_conf = &qdma_global_csr_conf;
+	hw_access->qdma_global_writeback_interval_conf =
+					&qdma_global_writeback_interval_conf;
+	hw_access->qdma_queue_pidx_update = &qdma_queue_pidx_update;
+	hw_access->qdma_queue_cmpt_cidx_read = &qdma_queue_cmpt_cidx_read;
+	hw_access->qdma_queue_cmpt_cidx_update = &qdma_queue_cmpt_cidx_update;
+	hw_access->qdma_queue_intr_cidx_update = &qdma_queue_intr_cidx_update;
+	hw_access->qdma_mm_channel_conf = &qdma_mm_channel_conf;
+	hw_access->qdma_get_user_bar = &qdma_get_user_bar;
+	hw_access->qdma_get_function_number = &qdma_get_function_number;
+	hw_access->qdma_get_device_attributes = &qdma_get_device_attributes;
+	hw_access->qdma_hw_error_intr_setup = &qdma_hw_error_intr_setup;
+	hw_access->qdma_hw_error_intr_rearm = &qdma_hw_error_intr_rearm;
+	hw_access->qdma_hw_error_enable = &qdma_hw_error_enable;
+	hw_access->qdma_hw_get_error_name = &qdma_hw_get_error_name;
+	hw_access->qdma_hw_error_process = &qdma_hw_error_process;
+	hw_access->qdma_dump_config_regs = &qdma_soft_dump_config_regs;
+	hw_access->qdma_dump_queue_context = &qdma_soft_dump_queue_context;
+	hw_access->qdma_read_dump_queue_context =
+					&qdma_soft_read_dump_queue_context;
+	hw_access->qdma_dump_intr_context = &qdma_dump_intr_context;
+	hw_access->qdma_is_legacy_intr_pend = &qdma_is_legacy_intr_pend;
+	hw_access->qdma_clear_pend_legacy_intr = &qdma_clear_pend_legacy_intr;
+	hw_access->qdma_legacy_intr_conf = &qdma_legacy_intr_conf;
+	hw_access->qdma_initiate_flr = &qdma_initiate_flr;
+	hw_access->qdma_is_flr_done = &qdma_is_flr_done;
+	hw_access->qdma_get_error_code = &qdma_get_error_code;
+	hw_access->qdma_read_reg_list = &qdma_read_reg_list;
+	hw_access->qdma_dump_config_reg_list =
+			&qdma_soft_dump_config_reg_list;
+	hw_access->qdma_dump_reg_info = &qdma_dump_reg_info;
+	hw_access->mbox_base_pf = QDMA_OFFSET_MBOX_BASE_PF;
+	hw_access->mbox_base_vf = QDMA_OFFSET_MBOX_BASE_VF;
+	hw_access->qdma_max_errors = QDMA_ERRS_ALL;
+
+	rv = hw_access->qdma_get_version(dev_hndl, is_vf, &version_info);
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	qdma_log_info("Device Type: %s\n",
+			qdma_get_device_type(version_info.device_type));
+
+	qdma_log_info("IP Type: %s\n",
+			qdma_get_ip_type(version_info.ip_type));
+
+	qdma_log_info("Vivado Release: %s\n",
+		qdma_get_vivado_release_id(version_info.vivado_release));
+
+	if (version_info.ip_type == QDMA_VERSAL_HARD_IP) {
+		hw_access->qdma_init_ctxt_memory =
+				&qdma_s80_hard_init_ctxt_memory;
+		hw_access->qdma_qid2vec_conf = &qdma_s80_hard_qid2vec_conf;
+		hw_access->qdma_fmap_conf = &qdma_s80_hard_fmap_conf;
+		hw_access->qdma_sw_ctx_conf = &qdma_s80_hard_sw_ctx_conf;
+		hw_access->qdma_pfetch_ctx_conf =
+				&qdma_s80_hard_pfetch_ctx_conf;
+		hw_access->qdma_cmpt_ctx_conf = &qdma_s80_hard_cmpt_ctx_conf;
+		hw_access->qdma_hw_ctx_conf = &qdma_s80_hard_hw_ctx_conf;
+		hw_access->qdma_credit_ctx_conf =
+				&qdma_s80_hard_credit_ctx_conf;
+		hw_access->qdma_indirect_intr_ctx_conf =
+				&qdma_s80_hard_indirect_intr_ctx_conf;
+		hw_access->qdma_set_default_global_csr =
+					&qdma_s80_hard_set_default_global_csr;
+		hw_access->qdma_queue_pidx_update =
+				&qdma_s80_hard_queue_pidx_update;
+		hw_access->qdma_queue_cmpt_cidx_update =
+				&qdma_s80_hard_queue_cmpt_cidx_update;
+		hw_access->qdma_queue_intr_cidx_update =
+				&qdma_s80_hard_queue_intr_cidx_update;
+		hw_access->qdma_get_user_bar = &qdma_cmp_get_user_bar;
+		hw_access->qdma_get_device_attributes =
+				&qdma_s80_hard_get_device_attributes;
+		hw_access->qdma_dump_config_regs =
+				&qdma_s80_hard_dump_config_regs;
+		hw_access->qdma_dump_intr_context =
+				&qdma_s80_hard_dump_intr_context;
+		hw_access->qdma_hw_error_enable =
+				&qdma_s80_hard_hw_error_enable;
+		hw_access->qdma_hw_error_process =
+				&qdma_s80_hard_hw_error_process;
+		hw_access->qdma_hw_get_error_name =
+				&qdma_s80_hard_hw_get_error_name;
+		hw_access->qdma_legacy_intr_conf = NULL;
+		hw_access->qdma_read_reg_list = &qdma_s80_hard_read_reg_list;
+		hw_access->qdma_dump_config_reg_list =
+				&qdma_s80_hard_dump_config_reg_list;
+		hw_access->qdma_dump_queue_context =
+				&qdma_s80_hard_dump_queue_context;
+		hw_access->qdma_read_dump_queue_context =
+				&qdma_s80_hard_read_dump_queue_context;
+		hw_access->qdma_dump_reg_info = &qdma_s80_hard_dump_reg_info;
+		hw_access->qdma_max_errors = QDMA_S80_HARD_ERRS_ALL;
+	}
+
+	if (version_info.ip_type == EQDMA_SOFT_IP) {
+		hw_access->qdma_init_ctxt_memory = &eqdma_init_ctxt_memory;
+		hw_access->qdma_sw_ctx_conf = &eqdma_sw_ctx_conf;
+		hw_access->qdma_pfetch_ctx_conf = &eqdma_pfetch_ctx_conf;
+		hw_access->qdma_cmpt_ctx_conf = &eqdma_cmpt_ctx_conf;
+		hw_access->qdma_indirect_intr_ctx_conf =
+				&eqdma_indirect_intr_ctx_conf;
+		hw_access->qdma_dump_config_regs = &eqdma_dump_config_regs;
+		hw_access->qdma_dump_intr_context = &eqdma_dump_intr_context;
+		hw_access->qdma_hw_error_enable = &eqdma_hw_error_enable;
+		hw_access->qdma_hw_error_process = &eqdma_hw_error_process;
+		hw_access->qdma_hw_get_error_name = &eqdma_hw_get_error_name;
+		hw_access->qdma_hw_ctx_conf = &eqdma_hw_ctx_conf;
+		hw_access->qdma_credit_ctx_conf = &eqdma_credit_ctx_conf;
+		hw_access->qdma_set_default_global_csr =
+				&eqdma_set_default_global_csr;
+		hw_access->qdma_get_device_attributes =
+				&eqdma_get_device_attributes;
+		hw_access->qdma_get_user_bar = &eqdma_get_user_bar;
+		hw_access->qdma_read_reg_list = &eqdma_read_reg_list;
+		hw_access->qdma_dump_config_reg_list =
+				&eqdma_dump_config_reg_list;
+		hw_access->qdma_dump_queue_context =
+				&eqdma_dump_queue_context;
+		hw_access->qdma_read_dump_queue_context =
+				&eqdma_read_dump_queue_context;
+		hw_access->qdma_dump_reg_info = &eqdma_dump_reg_info;
+		/* All CSR and queue space registers belong to Window 0,
+		 * while the mailbox and MSI-X registers belong to Window 1.
+		 * The mailbox offsets are therefore different for EQDMA:
+		 * Mailbox offset for PF : 128K + original address
+		 * Mailbox offset for VF : 16K + original address
+		 */
+		hw_access->mbox_base_pf = EQDMA_OFFSET_MBOX_BASE_PF;
+		hw_access->mbox_base_vf = EQDMA_OFFSET_MBOX_BASE_VF;
+		hw_access->qdma_max_errors = EQDMA_ERRS_ALL;
+	}
+
+	return QDMA_SUCCESS;
+}
diff --git a/drivers/net/qdma/qdma_access/qdma_access_common.h b/drivers/net/qdma/qdma_access/qdma_access_common.h
new file mode 100644
index 0000000000..a2ca188c65
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_access_common.h
@@ -0,0 +1,888 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_ACCESS_COMMON_H_
+#define __QDMA_ACCESS_COMMON_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_access_export.h"
+#include "qdma_access_errors.h"
+
+/* QDMA HW version string array length */
+#define QDMA_HW_VERSION_STRING_LEN			32
+
+#define ENABLE_INIT_CTXT_MEMORY			1
+
+#ifdef GCC_COMPILER
+static inline uint32_t get_trailing_zeros(uint64_t x)
+{
+	uint32_t rv =
+		__builtin_ffsll(x) - 1;
+	return rv;
+}
+#else
+static inline uint32_t get_trailing_zeros(uint64_t value)
+{
+	uint32_t pos = 0;
+
+	if ((value & 0xffffffff) == 0) {
+		pos += 32;
+		value >>= 32;
+	}
+	if ((value & 0xffff) == 0) {
+		pos += 16;
+		value >>= 16;
+	}
+	if ((value & 0xff) == 0) {
+		pos += 8;
+		value >>= 8;
+	}
+	if ((value & 0xf) == 0) {
+		pos += 4;
+		value >>= 4;
+	}
+	if ((value & 0x3) == 0) {
+		pos += 2;
+		value >>= 2;
+	}
+	if ((value & 0x1) == 0)
+		pos += 1;
+
+	return pos;
+}
+#endif
+
+#define FIELD_SHIFT(mask)       get_trailing_zeros(mask)
+#define FIELD_SET(mask, val) (__extension__ ({typeof(mask) (_mask) = (mask); \
+				 (((val) << FIELD_SHIFT(_mask)) & (_mask)); }))
+#define FIELD_GET(mask, reg) (__extension__ ({typeof(mask) (_mask) = (mask); \
+				 (((reg) & (_mask)) >> FIELD_SHIFT(_mask)); }))
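+
+/* Worked example (illustrative only): for a hypothetical field mask of
+ * 0xF0, FIELD_SHIFT(0xF0) is 4, so FIELD_SET(0xF0, 0x3) yields 0x30 and
+ * FIELD_GET(0xF0, 0x35) yields 0x3.
+ */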
+
+
+/* CSR Default values */
+#define DEFAULT_MAX_DSC_FETCH               6
+#define DEFAULT_WRB_INT                     QDMA_WRB_INTERVAL_128
+#define DEFAULT_PFCH_STOP_THRESH            256
+#define DEFAULT_PFCH_NUM_ENTRIES_PER_Q      8
+#define DEFAULT_PFCH_MAX_Q_CNT              16
+#define DEFAULT_C2H_INTR_TIMER_TICK         25
+#define DEFAULT_CMPT_COAL_TIMER_CNT         5
+#define DEFAULT_CMPT_COAL_TIMER_TICK        25
+#define DEFAULT_CMPT_COAL_MAX_BUF_SZ        32
+
+#define QDMA_BAR_NUM                        6
+
+/** Maximum number of data vectors to be used for each function.
+ * TODO: note that for 2018.2 only one vector is used per PF
+ * and only one ring is created for this vector.
+ * It is also assumed that all functions have the same number of data
+ * vectors; a different number of vectors per PF is currently not supported.
+ */
+#define QDMA_NUM_DATA_VEC_FOR_INTR_CXT  1
+
+enum ind_ctxt_cmd_op {
+	QDMA_CTXT_CMD_CLR,
+	QDMA_CTXT_CMD_WR,
+	QDMA_CTXT_CMD_RD,
+	QDMA_CTXT_CMD_INV
+};
+
+enum ind_ctxt_cmd_sel {
+	QDMA_CTXT_SEL_SW_C2H,
+	QDMA_CTXT_SEL_SW_H2C,
+	QDMA_CTXT_SEL_HW_C2H,
+	QDMA_CTXT_SEL_HW_H2C,
+	QDMA_CTXT_SEL_CR_C2H,
+	QDMA_CTXT_SEL_CR_H2C,
+	QDMA_CTXT_SEL_CMPT,
+	QDMA_CTXT_SEL_PFTCH,
+	QDMA_CTXT_SEL_INT_COAL,
+	QDMA_CTXT_SEL_PASID_RAM_LOW,
+	QDMA_CTXT_SEL_PASID_RAM_HIGH,
+	QDMA_CTXT_SEL_TIMER,
+	QDMA_CTXT_SEL_FMAP,
+};
+
+/* polling a register */
+#define	QDMA_REG_POLL_DFLT_INTERVAL_US	10		    /* 10us per poll */
+#define	QDMA_REG_POLL_DFLT_TIMEOUT_US	(500 * 1000)	/* 500ms */
+
+/** Constants */
+#define QDMA_NUM_RING_SIZES                                 16
+#define QDMA_NUM_C2H_TIMERS                                 16
+#define QDMA_NUM_C2H_BUFFER_SIZES                           16
+#define QDMA_NUM_C2H_COUNTERS                               16
+#define QDMA_MM_CONTROL_RUN                                 0x1
+#define QDMA_MM_CONTROL_STEP                                0x100
+#define QDMA_MAGIC_NUMBER                                   0x1fd3
+#define QDMA_PIDX_STEP                                      0x10
+#define QDMA_CMPT_CIDX_STEP                                 0x10
+#define QDMA_INT_CIDX_STEP                                  0x10
+
+
+/** QDMA_IND_REG_SEL_PFTCH */
+#define QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK                  GENMASK(15, 3)
+#define QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK                  GENMASK(2, 0)
+
+/** QDMA_IND_REG_SEL_CMPT */
+#define QDMA_COMPL_CTXT_BADDR_GET_H_MASK                    GENMASK_ULL(63, 38)
+#define QDMA_COMPL_CTXT_BADDR_GET_L_MASK                    GENMASK_ULL(37, 12)
+#define QDMA_COMPL_CTXT_PIDX_GET_H_MASK                     GENMASK(15, 4)
+#define QDMA_COMPL_CTXT_PIDX_GET_L_MASK                     GENMASK(3, 0)
+
+#define QDMA_INTR_CTXT_BADDR_GET_H_MASK                     GENMASK_ULL(63, 61)
+#define QDMA_INTR_CTXT_BADDR_GET_M_MASK                     GENMASK_ULL(60, 29)
+#define QDMA_INTR_CTXT_BADDR_GET_L_MASK                     GENMASK_ULL(28, 12)
+
+#define     QDMA_GLBL2_MM_CMPT_EN_MASK                      BIT(2)
+#define     QDMA_GLBL2_FLR_PRESENT_MASK                     BIT(1)
+#define     QDMA_GLBL2_MAILBOX_EN_MASK                      BIT(0)
+
+#define QDMA_REG_IND_CTXT_REG_COUNT                         8
+
+/* ------------------------ indirect register context fields -----------*/
+union qdma_ind_ctxt_cmd {
+	uint32_t word;
+	struct {
+		uint32_t busy:1;
+		uint32_t sel:4;
+		uint32_t op:2;
+		uint32_t qid:11;
+		uint32_t rsvd:14;
+	} bits;
+};
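+
+/* Illustrative sketch (not part of this patch): composing an indirect
+ * context command word for a hypothetical queue, following the bit-field
+ * layout above. Hardware sets the busy bit while the command is in flight.
+ *
+ *	union qdma_ind_ctxt_cmd cmd;
+ *
+ *	cmd.word = 0;
+ *	cmd.bits.qid = hw_qid;
+ *	cmd.bits.op = QDMA_CTXT_CMD_RD;
+ *	cmd.bits.sel = QDMA_CTXT_SEL_SW_C2H;
+ */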
+
+#define QDMA_IND_CTXT_DATA_NUM_REGS                         8
+
+/**
+ * struct qdma_indirect_ctxt_regs - Indirect context programming registers
+ */
+struct qdma_indirect_ctxt_regs {
+	uint32_t qdma_ind_ctxt_data[QDMA_IND_CTXT_DATA_NUM_REGS];
+	uint32_t qdma_ind_ctxt_mask[QDMA_IND_CTXT_DATA_NUM_REGS];
+	union qdma_ind_ctxt_cmd cmd;
+};
+
+/**
+ * struct qdma_fmap_cfg - fmap config data structure
+ */
+struct qdma_fmap_cfg {
+	/** @qbase - queue base for the function */
+	uint16_t qbase;
+	/** @qmax - maximum queues in the function */
+	uint16_t qmax;
+};
+
+/**
+ * struct qdma_qid2vec - qid to vector mapping data structure
+ */
+struct qdma_qid2vec {
+	/** @c2h_vector - For direct interrupt, it is the interrupt
+	 * vector index of msix table;
+	 * for indirect interrupt, it is the ring index
+	 */
+	uint8_t c2h_vector;
+	/** @c2h_en_coal - C2H Interrupt aggregation enable */
+	uint8_t c2h_en_coal;
+	/** @h2c_vector - For direct interrupt, it is the interrupt
+	 * vector index of msix table;
+	 * for indirect interrupt, it is the ring index
+	 */
+	uint8_t h2c_vector;
+	/** @h2c_en_coal - H2C Interrupt aggregation enable */
+	uint8_t h2c_en_coal;
+};
+
+/**
+ * struct qdma_descq_sw_ctxt - descq SW context config data structure
+ */
+struct qdma_descq_sw_ctxt {
+	/** @ring_bs_addr - ring base address */
+	uint64_t ring_bs_addr;
+	/** @vec - vector number */
+	uint16_t vec;
+	/** @pidx - initial producer index */
+	uint16_t pidx;
+	/** @irq_arm - Interrupt Arm */
+	uint8_t irq_arm;
+	/** @fnc_id - Function ID */
+	uint8_t fnc_id;
+	/** @qen - Indicates that the queue is enabled */
+	uint8_t qen;
+	/** @frcd_en - Enable fetch credit */
+	uint8_t frcd_en;
+	/** @wbi_chk - Writeback/Interrupt after pending check */
+	uint8_t wbi_chk;
+	/** @wbi_intvl_en - Writeback/Interrupt interval */
+	uint8_t wbi_intvl_en;
+	/** @at - Address translation */
+	uint8_t at;
+	/** @fetch_max - Maximum number of descriptor fetches outstanding */
+	uint8_t fetch_max;
+	/** @rngsz_idx - Descriptor ring size index */
+	uint8_t rngsz_idx;
+	/** @desc_sz - Descriptor fetch size */
+	uint8_t desc_sz;
+	/** @bypass - bypass enable */
+	uint8_t bypass;
+	/** @mm_chn - MM channel */
+	uint8_t mm_chn;
+	/** @wbk_en - Writeback enable */
+	uint8_t wbk_en;
+	/** @irq_en - Interrupt enable */
+	uint8_t irq_en;
+	/** @port_id - Port ID */
+	uint8_t port_id;
+	/** @irq_no_last - No interrupt was sent */
+	uint8_t irq_no_last;
+	/** @err - Error status */
+	uint8_t err;
+	/** @err_wb_sent - writeback/interrupt was sent for an error */
+	uint8_t err_wb_sent;
+	/** @irq_req - Interrupt due to error waiting to be sent */
+	uint8_t irq_req;
+	/** @mrkr_dis - Marker disable */
+	uint8_t mrkr_dis;
+	/** @is_mm - MM mode */
+	uint8_t is_mm;
+	/** @intr_aggr - interrupt aggregation enable */
+	uint8_t intr_aggr;
+	/** @pasid_en - PASID Enable */
+	uint8_t pasid_en;
+	/** @dis_intr_on_vf - Disable interrupt with VF */
+	uint8_t dis_intr_on_vf;
+	/** @virtio_en - Queue is in Virtio Mode */
+	uint8_t virtio_en;
+	/** @pack_byp_out - descs on desc output interface can be packed */
+	uint8_t pack_byp_out;
+	/** @irq_byp - IRQ Bypass mode */
+	uint8_t irq_byp;
+	/** @host_id - Host ID */
+	uint8_t host_id;
+	/** @pasid - PASID */
+	uint32_t pasid;
+	/** @virtio_dsc_base - Virtio Desc Base Address */
+	uint64_t virtio_dsc_base;
+};
+
+/**
+ * struct qdma_descq_hw_ctxt - descq hw context config data structure
+ */
+struct qdma_descq_hw_ctxt {
+	/** @cidx - consumer index */
+	uint16_t cidx;
+	/** @crd_use - credits consumed */
+	uint16_t crd_use;
+	/** @dsc_pend - descriptors pending */
+	uint8_t dsc_pend;
+	/** @idl_stp_b - Queue invalid and no descriptors pending */
+	uint8_t idl_stp_b;
+	/** @evt_pnd - Event pending */
+	uint8_t evt_pnd;
+	/** @fetch_pnd - Descriptor fetch pending */
+	uint8_t fetch_pnd;
+};
+
+/**
+ * struct qdma_descq_credit_ctxt - descq credit context config data structure
+ */
+struct qdma_descq_credit_ctxt {
+	/** @credit - Fetch credits received. */
+	uint32_t credit;
+};
+
+/**
+ * struct qdma_descq_prefetch_ctxt - descq pfetch context config data structure
+ */
+struct qdma_descq_prefetch_ctxt {
+	/** @sw_crdt - Software credit */
+	uint16_t sw_crdt;
+	/** @bypass - bypass enable */
+	uint8_t bypass;
+	/** @bufsz_idx - c2h buffer size index */
+	uint8_t bufsz_idx;
+	/** @port_id - port ID */
+	uint8_t port_id;
+	/** @var_desc - Variable Descriptor */
+	uint8_t var_desc;
+	/** @num_pftch - Number of descs prefetched */
+	uint16_t num_pftch;
+	/** @err - Error detected on this queue */
+	uint8_t err;
+	/** @pfch_en - Enable prefetch */
+	uint8_t pfch_en;
+	/** @pfch - Queue is in prefetch */
+	uint8_t pfch;
+	/** @valid - context is valid */
+	uint8_t valid;
+};
+
+/**
+ * struct qdma_descq_cmpt_ctxt - descq completion context config data structure
+ */
+struct qdma_descq_cmpt_ctxt {
+	/** @bs_addr - completion ring base address */
+	uint64_t bs_addr;
+	/** @vec - Interrupt Vector */
+	uint16_t vec;
+	/** @pidx - producer index */
+	uint16_t pidx;
+	/** @cidx - consumer index */
+	uint16_t cidx;
+	/** @en_stat_desc - Enable Completion Status writes */
+	uint8_t en_stat_desc;
+	/** @en_int - Enable Completion interrupts */
+	uint8_t en_int;
+	/** @trig_mode - Interrupt and Completion Status Write Trigger Mode */
+	uint8_t trig_mode;
+	/** @fnc_id - Function ID */
+	uint8_t fnc_id;
+	/** @counter_idx - Index to counter register */
+	uint8_t counter_idx;
+	/** @timer_idx - Index to timer register */
+	uint8_t timer_idx;
+	/** @in_st - Interrupt State */
+	uint8_t in_st;
+	/** @color - initial color bit to be used on Completion */
+	uint8_t color;
+	/** @ringsz_idx - Completion ring size index to ring size registers */
+	uint8_t ringsz_idx;
+	/** @desc_sz - descriptor size */
+	uint8_t desc_sz;
+	/** @valid - context valid */
+	uint8_t valid;
+	/** @err - error status */
+	uint8_t err;
+	/**
+	 * @user_trig_pend - user logic initiated interrupt is
+	 * pending to be generated
+	 */
+	uint8_t user_trig_pend;
+	/** @timer_running - timer is running on this queue */
+	uint8_t timer_running;
+	/** @full_upd - Full update */
+	uint8_t full_upd;
+	/** @ovf_chk_dis - Completion Ring Overflow Check Disable */
+	uint8_t ovf_chk_dis;
+	/** @at - Address Translation */
+	uint8_t at;
+	/** @int_aggr - Interrupt Aggregation */
+	uint8_t int_aggr;
+	/** @dis_intr_on_vf - Disable interrupt with VF */
+	uint8_t dis_intr_on_vf;
+	/** @vio - queue is in VirtIO mode */
+	uint8_t vio;
+	/** @dir_c2h - DMA direction is C2H */
+	uint8_t dir_c2h;
+	/** @host_id - Host ID */
+	uint8_t host_id;
+	/** @pasid - PASID */
+	uint32_t pasid;
+	/** @pasid_en - PASID Enable */
+	uint8_t pasid_en;
+	/** @vio_eop - Virtio End-of-packet */
+	uint8_t vio_eop;
+	/** @sh_cmpt - Shared Completion Queue */
+	uint8_t sh_cmpt;
+};
+
+/**
+ * struct qdma_indirect_intr_ctxt - indirect interrupt context config data
+ * structure
+ */
+struct qdma_indirect_intr_ctxt {
+	/** @baddr_4k - Base address of Interrupt Aggregation Ring */
+	uint64_t baddr_4k;
+	/** @vec - Interrupt vector index in msix table */
+	uint16_t vec;
+	/** @pidx - Producer Index */
+	uint16_t pidx;
+	/** @valid - context valid */
+	uint8_t valid;
+	/** @int_st - Interrupt State */
+	uint8_t int_st;
+	/** @color - Color bit */
+	uint8_t color;
+	/** @page_size - Interrupt Aggregation Ring size */
+	uint8_t page_size;
+	/** @at - Address translation */
+	uint8_t at;
+	/** @host_id - Host ID */
+	uint8_t host_id;
+	/** @pasid - PASID */
+	uint32_t pasid;
+	/** @pasid_en - PASID Enable */
+	uint8_t pasid_en;
+	/** @func_id - Function ID */
+	uint16_t func_id;
+};
+
+struct qdma_hw_version_info {
+	/** @rtl_version - RTL Version */
+	enum qdma_rtl_version rtl_version;
+	/** @vivado_release - Vivado Release id */
+	enum qdma_vivado_release_id vivado_release;
+	/** @ip_type - QDMA IP type */
+	enum qdma_ip_type ip_type;
+	/** @device_type - Device Type */
+	enum qdma_device_type device_type;
+	/** @qdma_rtl_version_str - RTL Version string */
+	char qdma_rtl_version_str[QDMA_HW_VERSION_STRING_LEN];
+	/** @qdma_vivado_release_id_str - Vivado Release id string */
+	char qdma_vivado_release_id_str[QDMA_HW_VERSION_STRING_LEN];
+	/** @qdma_device_type_str - QDMA device type string */
+	char qdma_device_type_str[QDMA_HW_VERSION_STRING_LEN];
+	/** @qdma_ip_type_str - QDMA IP type string */
+	char qdma_ip_type_str[QDMA_HW_VERSION_STRING_LEN];
+};
+
+#define CTXT_ENTRY_NAME_SZ        64
+struct qctx_entry {
+	char		name[CTXT_ENTRY_NAME_SZ];
+	uint32_t	value;
+};
+
+/**
+ * @struct - qdma_descq_context
+ * @brief	queue context information
+ */
+struct qdma_descq_context {
+	struct qdma_qid2vec qid2vec;
+	struct qdma_fmap_cfg fmap;
+	struct qdma_descq_sw_ctxt sw_ctxt;
+	struct qdma_descq_hw_ctxt hw_ctxt;
+	struct qdma_descq_credit_ctxt cr_ctxt;
+	struct qdma_descq_prefetch_ctxt pfetch_ctxt;
+	struct qdma_descq_cmpt_ctxt cmpt_ctxt;
+};
+
+/**
+ * struct qdma_q_pidx_reg_info - Software PIDX register fields
+ */
+struct qdma_q_pidx_reg_info {
+	/** @pidx - Producer Index */
+	uint16_t pidx;
+	/** @irq_en - Interrupt enable */
+	uint8_t irq_en;
+};
+
+/**
+ * struct qdma_intr_cidx_reg_info - Interrupt Ring CIDX register fields
+ */
+struct qdma_intr_cidx_reg_info {
+	/** @sw_cidx - Software Consumer Index */
+	uint16_t sw_cidx;
+	/** @rng_idx - Ring Index of the Interrupt Aggregation ring */
+	uint8_t rng_idx;
+};
+
+/**
+ * struct qdma_q_cmpt_cidx_reg_info - CMPT CIDX register fields
+ */
+struct qdma_q_cmpt_cidx_reg_info {
+	/** @wrb_cidx - CMPT Consumer Index */
+	uint16_t wrb_cidx;
+	/** @counter_idx - Counter Threshold Index */
+	uint8_t counter_idx;
+	/** @timer_idx - Timer Count Index */
+	uint8_t timer_idx;
+	/** @trig_mode - Trigger mode */
+	uint8_t trig_mode;
+	/** @wrb_en - Enable status descriptor for CMPT */
+	uint8_t wrb_en;
+	/** @irq_en - Enable Interrupt for CMPT */
+	uint8_t irq_en;
+};
+
+
+/**
+ * struct qdma_csr_info - Global CSR info data structure
+ */
+struct qdma_csr_info {
+	/** @ringsz: ring size values */
+	uint16_t ringsz[QDMA_GLOBAL_CSR_ARRAY_SZ];
+	/** @bufsz: buffer size values */
+	uint16_t bufsz[QDMA_GLOBAL_CSR_ARRAY_SZ];
+	/** @timer_cnt: timer threshold values */
+	uint8_t timer_cnt[QDMA_GLOBAL_CSR_ARRAY_SZ];
+	/** @cnt_thres: counter threshold values */
+	uint8_t cnt_thres[QDMA_GLOBAL_CSR_ARRAY_SZ];
+	/** @wb_intvl: writeback interval */
+	uint8_t wb_intvl;
+};
+
+#define QDMA_MAX_REGISTER_DUMP	14
+
+/**
+ * struct qdma_reg_data - Structure to
+ * hold a register address/value pair
+ */
+struct qdma_reg_data {
+	/** @reg_addr: register address */
+	uint32_t reg_addr;
+	/** @reg_val: register value */
+	uint32_t reg_val;
+};
+
+/**
+ * enum qdma_hw_access_type - To hold hw access type
+ */
+enum qdma_hw_access_type {
+	QDMA_HW_ACCESS_READ,
+	QDMA_HW_ACCESS_WRITE,
+	QDMA_HW_ACCESS_CLEAR,
+	QDMA_HW_ACCESS_INVALIDATE,
+	QDMA_HW_ACCESS_MAX
+};
+
+/**
+ * enum qdma_global_csr_type - To hold global csr type
+ */
+enum qdma_global_csr_type {
+	QDMA_CSR_RING_SZ,
+	QDMA_CSR_TIMER_CNT,
+	QDMA_CSR_CNT_TH,
+	QDMA_CSR_BUF_SZ,
+	QDMA_CSR_MAX
+};
+
+/**
+ * enum status_type - To hold enable/disable status type
+ */
+enum status_type {
+	DISABLE = 0,
+	ENABLE = 1,
+};
+
+/**
+ * enum qdma_reg_read_type - Indicates reg read type
+ */
+enum qdma_reg_read_type {
+	/** @QDMA_REG_READ_PF_ONLY: Read the register for PFs only */
+	QDMA_REG_READ_PF_ONLY,
+	/** @QDMA_REG_READ_VF_ONLY: Read the register for VFs only */
+	QDMA_REG_READ_VF_ONLY,
+	/** @QDMA_REG_READ_PF_VF: Read the register for both PF and VF */
+	QDMA_REG_READ_PF_VF,
+	/** @QDMA_REG_READ_MAX: Reg read enum max */
+	QDMA_REG_READ_MAX
+};
+
+/**
+ * enum qdma_reg_read_groups - Indicates reg read groups
+ */
+enum qdma_reg_read_groups {
+	/** @QDMA_REG_READ_GROUP_1: Read the registers from 0x000 to 0x288 */
+	QDMA_REG_READ_GROUP_1,
+	/** @QDMA_REG_READ_GROUP_2: Read the registers from 0x400 to 0xAFC */
+	QDMA_REG_READ_GROUP_2,
+	/** @QDMA_REG_READ_GROUP_3: Read the registers from 0xB00 to 0xE28 */
+	QDMA_REG_READ_GROUP_3,
+	/** @QDMA_REG_READ_GROUP_4: Read the Mailbox registers */
+	QDMA_REG_READ_GROUP_4,
+	/** @QDMA_REG_READ_GROUP_MAX: Reg read max groups */
+	QDMA_REG_READ_GROUP_MAX
+};
+
+void qdma_write_csr_values(void *dev_hndl, uint32_t reg_offst,
+		uint32_t idx, uint32_t cnt, const uint32_t *values);
+
+void qdma_read_csr_values(void *dev_hndl, uint32_t reg_offst,
+		uint32_t idx, uint32_t cnt, uint32_t *values);
+
+int dump_reg(char *buf, int buf_sz, uint32_t raddr,
+		const char *rname, uint32_t rval);
+
+int hw_monitor_reg(void *dev_hndl, uint32_t reg, uint32_t mask,
+		uint32_t val, uint32_t interval_us,
+		uint32_t timeout_us);
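+
+/* Illustrative use (a sketch, not part of this patch): poll a register
+ * until a hypothetical busy bit clears, using the default interval and
+ * timeout defined above.
+ *
+ *	rv = hw_monitor_reg(dev_hndl, reg_off, BIT(0), 0,
+ *			QDMA_REG_POLL_DFLT_INTERVAL_US,
+ *			QDMA_REG_POLL_DFLT_TIMEOUT_US);
+ *	if (rv < 0)
+ *		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+ */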
+
+void qdma_memset(void *to, uint8_t val, uint32_t size);
+
+int qdma_acc_reg_dump_buf_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, int *buflen);
+
+int qdma_acc_reg_info_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, int *buflen, int *num_regs);
+
+int qdma_acc_context_buf_len(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen);
+
+int qdma_acc_get_num_config_regs(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint32_t *num_regs);
+
+/*
+ * struct qdma_hw_access - Structure to hold HW access function pointers
+ */
+struct qdma_hw_access {
+	int (*qdma_set_default_global_csr)(void *dev_hndl);
+	int (*qdma_global_csr_conf)(void *dev_hndl, uint8_t index,
+					uint8_t count, uint32_t *csr_val,
+					enum qdma_global_csr_type csr_type,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_global_writeback_interval_conf)(void *dev_hndl,
+					enum qdma_wrb_interval *wb_int,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_init_ctxt_memory)(void *dev_hndl);
+	int (*qdma_qid2vec_conf)(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				 struct qdma_qid2vec *ctxt,
+				 enum qdma_hw_access_type access_type);
+	int (*qdma_fmap_conf)(void *dev_hndl, uint16_t func_id,
+					struct qdma_fmap_cfg *config,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_sw_ctx_conf)(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+					struct qdma_descq_sw_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_pfetch_ctx_conf)(void *dev_hndl, uint16_t hw_qid,
+					struct qdma_descq_prefetch_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_cmpt_ctx_conf)(void *dev_hndl, uint16_t hw_qid,
+					struct qdma_descq_cmpt_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_hw_ctx_conf)(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+					struct qdma_descq_hw_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_credit_ctx_conf)(void *dev_hndl, uint8_t c2h,
+					uint16_t hw_qid,
+					struct qdma_descq_credit_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_indirect_intr_ctx_conf)(void *dev_hndl, uint16_t ring_index,
+					struct qdma_indirect_intr_ctxt *ctxt,
+					enum qdma_hw_access_type access_type);
+	int (*qdma_queue_pidx_update)(void *dev_hndl, uint8_t is_vf,
+				uint16_t qid,
+				uint8_t is_c2h,
+				const struct qdma_q_pidx_reg_info *reg_info);
+	int (*qdma_queue_cmpt_cidx_read)(void *dev_hndl, uint8_t is_vf,
+				uint16_t qid,
+				struct qdma_q_cmpt_cidx_reg_info *reg_info);
+	int (*qdma_queue_cmpt_cidx_update)(void *dev_hndl, uint8_t is_vf,
+			uint16_t qid,
+			const struct qdma_q_cmpt_cidx_reg_info *reg_info);
+	int (*qdma_queue_intr_cidx_update)(void *dev_hndl, uint8_t is_vf,
+				uint16_t qid,
+				const struct qdma_intr_cidx_reg_info *reg_info);
+	int (*qdma_mm_channel_conf)(void *dev_hndl, uint8_t channel,
+				uint8_t is_c2h, uint8_t enable);
+	int (*qdma_get_user_bar)(void *dev_hndl, uint8_t is_vf,
+				uint8_t func_id, uint8_t *user_bar);
+	int (*qdma_get_function_number)(void *dev_hndl, uint8_t *func_id);
+	int (*qdma_get_version)(void *dev_hndl, uint8_t is_vf,
+				struct qdma_hw_version_info *version_info);
+	int (*qdma_get_device_attributes)(void *dev_hndl,
+					struct qdma_dev_attributes *dev_info);
+	int (*qdma_hw_error_intr_setup)(void *dev_hndl, uint16_t func_id,
+					uint8_t err_intr_index);
+	int (*qdma_hw_error_intr_rearm)(void *dev_hndl);
+	int (*qdma_hw_error_enable)(void *dev_hndl,
+			uint32_t err_idx);
+	const char *(*qdma_hw_get_error_name)(uint32_t err_idx);
+	int (*qdma_hw_error_process)(void *dev_hndl);
+	int (*qdma_dump_config_regs)(void *dev_hndl, uint8_t is_vf, char *buf,
+					uint32_t buflen);
+	int (*qdma_dump_reg_info)(void *dev_hndl, uint32_t reg_addr,
+				  uint32_t num_regs,
+				  char *buf,
+				  uint32_t buflen);
+	int (*qdma_dump_queue_context)(void *dev_hndl,
+			uint8_t st,
+			enum qdma_dev_q_type q_type,
+			struct qdma_descq_context *ctxt_data,
+			char *buf, uint32_t buflen);
+	int (*qdma_read_dump_queue_context)(void *dev_hndl,
+			uint16_t qid_hw,
+			uint8_t st,
+			enum qdma_dev_q_type q_type,
+			char *buf, uint32_t buflen);
+	int (*qdma_dump_intr_context)(void *dev_hndl,
+			struct qdma_indirect_intr_ctxt *intr_ctx,
+			int ring_index,
+			char *buf, uint32_t buflen);
+	int (*qdma_is_legacy_intr_pend)(void *dev_hndl);
+	int (*qdma_clear_pend_legacy_intr)(void *dev_hndl);
+	int (*qdma_legacy_intr_conf)(void *dev_hndl, enum status_type enable);
+	int (*qdma_initiate_flr)(void *dev_hndl, uint8_t is_vf);
+	int (*qdma_is_flr_done)(void *dev_hndl, uint8_t is_vf, uint8_t *done);
+	int (*qdma_get_error_code)(int acc_err_code);
+	int (*qdma_read_reg_list)(void *dev_hndl, uint8_t is_vf,
+			uint16_t reg_rd_group,
+			uint16_t *total_regs,
+			struct qdma_reg_data *reg_list);
+	int (*qdma_dump_config_reg_list)(void *dev_hndl,
+			uint32_t num_regs,
+			struct qdma_reg_data *reg_list,
+			char *buf, uint32_t buflen);
+	uint32_t mbox_base_pf;
+	uint32_t mbox_base_vf;
+	uint32_t qdma_max_errors;
+};
+
+/*****************************************************************************/
+/**
+ * qdma_hw_access_init() - Function to get the QDMA hardware
+ *			access function pointers
+ *	This function should be called once per device from
+ *	device_open()/probe(). Caller shall allocate memory for
+ *	qdma_hw_access structure and store pointer to it in their
+ *	per device structure. Config BAR validation will be done
+ *	inside this function
+ *
+ * @dev_hndl: device handle
+ * @is_vf: Whether PF or VF
+ * @hw_access: qdma_hw_access structure pointer.
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_hw_access_init(void *dev_hndl, uint8_t is_vf,
+				struct qdma_hw_access *hw_access);
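+
+/* Illustrative call flow (a sketch only, not part of this patch; the
+ * DPDK allocator shown is one possible choice):
+ *
+ *	struct qdma_hw_access *hw_access;
+ *
+ *	hw_access = rte_zmalloc("qdma_hw_access",
+ *			sizeof(struct qdma_hw_access), 0);
+ *	if (!hw_access)
+ *		return -QDMA_ERR_NO_MEM;
+ *	rv = qdma_hw_access_init(dev_hndl, is_vf, hw_access);
+ *	if (rv != QDMA_SUCCESS)
+ *		return rv;
+ *	rv = hw_access->qdma_get_device_attributes(dev_hndl, &dev_attr);
+ */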
+
+/*****************************************************************************/
+/**
+ * qdma_acc_get_config_regs() - Function to get qdma config registers
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @ip_type:	QDMA IP Type
+ * @reg_data:  pointer to register data to be filled
+ *
+ * Return:	Length up to which the buffer is filled - success, < 0 - failure
+ *****************************************************************************/
+int qdma_acc_get_config_regs(void *dev_hndl, uint8_t is_vf,
+		enum qdma_ip_type ip_type,
+		uint32_t *reg_data);
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_config_regs() - Function to get qdma config register dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @ip_type:	QDMA IP Type
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled - success, < 0 - failure
+ *****************************************************************************/
+int qdma_acc_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		enum qdma_ip_type ip_type,
+		char *buf, uint32_t buflen);
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_reg_info() - Function to get qdma reg info in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP Type
+ * @reg_addr:   Register Address
+ * @num_regs:   Number of Registers
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled - success, < 0 - failure
+ *****************************************************************************/
+int qdma_acc_dump_reg_info(void *dev_hndl,
+		enum qdma_ip_type ip_type, uint32_t reg_addr,
+		uint32_t num_regs, char *buf, uint32_t buflen);
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_queue_context() - Function to dump qdma queue context data
+ * into a buffer, using context information already available in the
+ * 'ctxt_data' structure
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP Type
+ * @st:		ST or MM
+ * @q_type:	Queue Type
+ * @ctxt_data:	Context Data
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled - success, < 0 - failure
+ *****************************************************************************/
+int qdma_acc_dump_queue_context(void *dev_hndl,
+		enum qdma_ip_type ip_type,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen);
+
+/*****************************************************************************/
+/**
+ * qdma_acc_read_dump_queue_context() - Function to read and dump the queue
+ * context in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @ip_type:	QDMA IP Type
+ * @qid_hw:     queue id
+ * @st:		ST or MM
+ * @q_type:	Queue Type
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled - success, < 0 - failure
+ *****************************************************************************/
+int qdma_acc_read_dump_queue_context(void *dev_hndl,
+				enum qdma_ip_type ip_type,
+				uint16_t qid_hw,
+				uint8_t st,
+				enum qdma_dev_q_type q_type,
+				char *buf, uint32_t buflen);
+
+
+/*****************************************************************************/
+/**
+ * qdma_acc_dump_config_reg_list() - Dump the registers
+ *
+ * @dev_hndl:		device handle
+ * @ip_type:		QDMA IP Type
+ * @num_regs :		Max registers to read
+ * @reg_list :		array of reg addr and reg values
+ * @buf :		pointer to buffer to be filled
+ * @buflen :		Length of the buffer
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int qdma_acc_dump_config_reg_list(void *dev_hndl,
+		enum qdma_ip_type ip_type,
+		uint32_t num_regs,
+		struct qdma_reg_data *reg_list,
+		char *buf, uint32_t buflen);
+
+/*****************************************************************************/
+/**
+ * qdma_get_error_code() - function to get the qdma access mapped
+ *				error code
+ *
+ * @acc_err_code: qdma access error code
+ *
+ * Return:   returns the platform specific error code
+ *****************************************************************************/
+int qdma_get_error_code(int acc_err_code);
+
+/*****************************************************************************/
+/**
+ * qdma_fetch_version_details() - Function to fetch the version details from the
+ *  version register value
+ *
+ * @is_vf           :    Whether PF or VF
+ * @version_reg_val :    Value of the version register
+ * @version_info :       Pointer to store the version details.
+ *
+ * Return:	Nothing
+ *****************************************************************************/
+void qdma_fetch_version_details(uint8_t is_vf, uint32_t version_reg_val,
+		struct qdma_hw_version_info *version_info);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_ACCESS_COMMON_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_access_errors.h b/drivers/net/qdma/qdma_access/qdma_access_errors.h
new file mode 100644
index 0000000000..a103c3a7fb
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_access_errors.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_ACCESS_ERRORS_H_
+#define __QDMA_ACCESS_ERRORS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA common library error codes definitions
+ *
+ * Header file *qdma_access_errors.h* defines error codes for the
+ * common library
+ */
+
+struct err_code_map {
+	int acc_err_code;
+	int err_code;
+};
+
+#define QDMA_HW_ERR_NOT_DETECTED		1
+
+enum qdma_access_error_codes {
+	QDMA_SUCCESS = 0,
+	QDMA_ERR_INV_PARAM,
+	QDMA_ERR_NO_MEM,
+	QDMA_ERR_HWACC_BUSY_TIMEOUT,
+	QDMA_ERR_HWACC_INV_CONFIG_BAR,
+	QDMA_ERR_HWACC_NO_PEND_LEGCY_INTR,
+	QDMA_ERR_HWACC_BAR_NOT_FOUND,
+	QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED,   /* 7 */
+
+	QDMA_ERR_RM_RES_EXISTS,				/* 8 */
+	QDMA_ERR_RM_RES_NOT_EXISTS,
+	QDMA_ERR_RM_DEV_EXISTS,
+	QDMA_ERR_RM_DEV_NOT_EXISTS,
+	QDMA_ERR_RM_NO_QUEUES_LEFT,
+	QDMA_ERR_RM_QMAX_CONF_REJECTED,		/* 13 */
+
+	QDMA_ERR_MBOX_FMAP_WR_FAILED,		/* 14 */
+	QDMA_ERR_MBOX_NUM_QUEUES,
+	QDMA_ERR_MBOX_INV_QID,
+	QDMA_ERR_MBOX_INV_RINGSZ,
+	QDMA_ERR_MBOX_INV_BUFSZ,
+	QDMA_ERR_MBOX_INV_CNTR_TH,
+	QDMA_ERR_MBOX_INV_TMR_TH,
+	QDMA_ERR_MBOX_INV_MSG,
+	QDMA_ERR_MBOX_SEND_BUSY,
+	QDMA_ERR_MBOX_NO_MSG_IN,
+	QDMA_ERR_MBOX_REG_READ_FAILED,
+	QDMA_ERR_MBOX_ALL_ZERO_MSG,			/* 25 */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_ACCESS_ERRORS_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_access_export.h b/drivers/net/qdma/qdma_access/qdma_access_export.h
new file mode 100644
index 0000000000..37eaa4cd5e
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_access_export.h
@@ -0,0 +1,243 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_ACCESS_EXPORT_H_
+#define __QDMA_ACCESS_EXPORT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform_env.h"
+
+/** QDMA Global CSR array size */
+#define QDMA_GLOBAL_CSR_ARRAY_SZ        16
+
+/**
+ * struct qdma_dev_attributes - QDMA device attributes
+ */
+struct qdma_dev_attributes {
+	/** @num_pfs - Num of PFs*/
+	uint8_t num_pfs;
+	/** @num_qs - Num of Queues */
+	uint16_t num_qs;
+	/** @flr_present - FLR present or not? */
+	uint8_t flr_present:1;
+	/** @st_en - ST mode supported or not? */
+	uint8_t st_en:1;
+	/** @mm_en - MM mode supported or not? */
+	uint8_t mm_en:1;
+	/** @mm_cmpt_en - MM with Completions supported or not? */
+	uint8_t mm_cmpt_en:1;
+	/** @mailbox_en - Mailbox supported or not? */
+	uint8_t mailbox_en:1;
+	/** @debug_mode - Debug mode is enabled/disabled for IP */
+	uint8_t debug_mode:1;
+	/** @desc_eng_mode - Descriptor Engine mode:
+	 * Internal only/Bypass only/Internal & Bypass
+	 */
+	uint8_t desc_eng_mode:2;
+	/** @mm_channel_max - Num of MM channels */
+	uint8_t mm_channel_max;
+
+	/** Below is the list of HW features populated by qdma_access
+	 * based on the RTL version
+	 */
+	/** @qid2vec_ctx - To indicate support of qid2vec context */
+	uint8_t qid2vec_ctx:1;
+	/** @cmpt_ovf_chk_dis - To indicate support of overflow check
+	 * disable in CMPT ring
+	 */
+	uint8_t cmpt_ovf_chk_dis:1;
+	/** @mailbox_intr - To indicate support of mailbox interrupt */
+	uint8_t mailbox_intr:1;
+	/** @sw_desc_64b - To indicate support of 64 bytes C2H/H2C
+	 * descriptor format
+	 */
+	uint8_t sw_desc_64b:1;
+	/** @cmpt_desc_64b - To indicate support of 64 bytes CMPT
+	 * descriptor format
+	 */
+	uint8_t cmpt_desc_64b:1;
+	/** @dynamic_bar - To indicate support of dynamic bar detection */
+	uint8_t dynamic_bar:1;
+	/** @legacy_intr - To indicate support of legacy interrupt */
+	uint8_t legacy_intr:1;
+	/** @cmpt_trig_count_timer - To indicate support of counter + timer
+	 * trigger mode
+	 */
+	uint8_t cmpt_trig_count_timer:1;
+};
+
+/** qdma_dev_attributes structure size */
+#define QDMA_DEV_ATTR_STRUCT_SIZE	(sizeof(struct qdma_dev_attributes))
+
+/** global_csr_conf structure size */
+#define QDMA_DEV_GLOBAL_CSR_STRUCT_SIZE	(sizeof(struct global_csr_conf))
+
+/**
+ * enum qdma_dev_type - To hold qdma device type
+ */
+enum qdma_dev_type {
+	QDMA_DEV_PF,
+	QDMA_DEV_VF
+};
+
+/**
+ * enum qdma_dev_q_type: Q type
+ */
+enum qdma_dev_q_type {
+	/** @QDMA_DEV_Q_TYPE_H2C: H2C Q */
+	QDMA_DEV_Q_TYPE_H2C,
+	/** @QDMA_DEV_Q_TYPE_C2H: C2H Q */
+	QDMA_DEV_Q_TYPE_C2H,
+	/** @QDMA_DEV_Q_TYPE_CMPT: CMPT Q */
+	QDMA_DEV_Q_TYPE_CMPT,
+	/** @QDMA_DEV_Q_TYPE_MAX: Total Q types */
+	QDMA_DEV_Q_TYPE_MAX
+};
+
+/**
+ * @enum qdma_desc_size - QDMA queue descriptor size
+ */
+enum qdma_desc_size {
+	/** @QDMA_DESC_SIZE_8B - 8 byte descriptor */
+	QDMA_DESC_SIZE_8B,
+	/** @QDMA_DESC_SIZE_16B - 16 byte descriptor */
+	QDMA_DESC_SIZE_16B,
+	/** @QDMA_DESC_SIZE_32B - 32 byte descriptor */
+	QDMA_DESC_SIZE_32B,
+	/** @QDMA_DESC_SIZE_64B - 64 byte descriptor */
+	QDMA_DESC_SIZE_64B
+};
+
+/**
+ * @enum qdma_cmpt_update_trig_mode - Interrupt and Completion status write
+ * trigger mode
+ */
+enum qdma_cmpt_update_trig_mode {
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_DIS - disabled */
+	QDMA_CMPT_UPDATE_TRIG_MODE_DIS,
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_EVERY - every */
+	QDMA_CMPT_UPDATE_TRIG_MODE_EVERY,
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_USR_CNT - user counter */
+	QDMA_CMPT_UPDATE_TRIG_MODE_USR_CNT,
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_USR - user */
+	QDMA_CMPT_UPDATE_TRIG_MODE_USR,
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_USR_TMR - user timer */
+	QDMA_CMPT_UPDATE_TRIG_MODE_USR_TMR,
+	/** @QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR - timer + counter combo */
+	QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR
+};
+
+
+/**
+ * @enum qdma_indirect_intr_ring_size - Indirect interrupt ring size
+ */
+enum qdma_indirect_intr_ring_size {
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_4KB - Accommodates 512 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_4KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_8KB - Accommodates 1024 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_8KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_12KB - Accommodates 1536 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_12KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_16KB - Accommodates 2048 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_16KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_20KB - Accommodates 2560 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_20KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_24KB - Accommodates 3072 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_24KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_28KB - Accommodates 3584 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_28KB,
+	/** @QDMA_INDIRECT_INTR_RING_SIZE_32KB - Accommodates 4096 entries */
+	QDMA_INDIRECT_INTR_RING_SIZE_32KB
+};
+
+/**
+ * @enum qdma_wrb_interval - writeback update interval
+ */
+enum qdma_wrb_interval {
+	/** @QDMA_WRB_INTERVAL_4 - writeback update interval of 4 */
+	QDMA_WRB_INTERVAL_4,
+	/** @QDMA_WRB_INTERVAL_8 - writeback update interval of 8 */
+	QDMA_WRB_INTERVAL_8,
+	/** @QDMA_WRB_INTERVAL_16 - writeback update interval of 16 */
+	QDMA_WRB_INTERVAL_16,
+	/** @QDMA_WRB_INTERVAL_32 - writeback update interval of 32 */
+	QDMA_WRB_INTERVAL_32,
+	/** @QDMA_WRB_INTERVAL_64 - writeback update interval of 64 */
+	QDMA_WRB_INTERVAL_64,
+	/** @QDMA_WRB_INTERVAL_128 - writeback update interval of 128 */
+	QDMA_WRB_INTERVAL_128,
+	/** @QDMA_WRB_INTERVAL_256 - writeback update interval of 256 */
+	QDMA_WRB_INTERVAL_256,
+	/** @QDMA_WRB_INTERVAL_512 - writeback update interval of 512 */
+	QDMA_WRB_INTERVAL_512,
+	/** @QDMA_NUM_WRB_INTERVALS - total number of writeback intervals */
+	QDMA_NUM_WRB_INTERVALS
+};
+
+enum qdma_rtl_version {
+	/** @QDMA_RTL_BASE - RTL Base  */
+	QDMA_RTL_BASE,
+	/** @QDMA_RTL_PATCH - RTL Patch  */
+	QDMA_RTL_PATCH,
+	/** @QDMA_RTL_NONE - Not a valid RTL version */
+	QDMA_RTL_NONE,
+};
+
+enum qdma_vivado_release_id {
+	/** @QDMA_VIVADO_2018_3 - Vivado version 2018.3  */
+	QDMA_VIVADO_2018_3,
+	/** @QDMA_VIVADO_2019_1 - Vivado version 2019.1  */
+	QDMA_VIVADO_2019_1,
+	/** @QDMA_VIVADO_2019_2 - Vivado version 2019.2  */
+	QDMA_VIVADO_2019_2,
+	/** @QDMA_VIVADO_2020_1 - Vivado version 2020.1  */
+	QDMA_VIVADO_2020_1,
+	/** @QDMA_VIVADO_2020_2 - Vivado version 2020.2  */
+	QDMA_VIVADO_2020_2,
+	/** @QDMA_VIVADO_NONE - Not a valid Vivado version*/
+	QDMA_VIVADO_NONE
+};
+
+enum qdma_ip_type {
+	/** @QDMA_VERSAL_HARD_IP - Versal Hard IP  */
+	QDMA_VERSAL_HARD_IP,
+	/** @QDMA_VERSAL_SOFT_IP - Versal Soft IP  */
+	QDMA_VERSAL_SOFT_IP,
+	/** @QDMA_SOFT_IP - Soft IP  */
+	QDMA_SOFT_IP,
+	/** @EQDMA_SOFT_IP - EQDMA Soft IP  */
+	EQDMA_SOFT_IP,
+	/** @QDMA_NONE_IP - Not a valid IP  */
+	QDMA_NONE_IP
+};
+
+
+enum qdma_device_type {
+	/** @QDMA_DEVICE_SOFT - UltraScale+ IPs  */
+	QDMA_DEVICE_SOFT,
+	/** @QDMA_DEVICE_VERSAL - Versal IP  */
+	QDMA_DEVICE_VERSAL,
+	/** @QDMA_DEVICE_NONE - Not a valid device  */
+	QDMA_DEVICE_NONE
+};
+
+enum qdma_desc_eng_mode {
+	/** @QDMA_DESC_ENG_INTERNAL_BYPASS - Internal and Bypass mode */
+	QDMA_DESC_ENG_INTERNAL_BYPASS,
+	/** @QDMA_DESC_ENG_BYPASS_ONLY - Only Bypass mode  */
+	QDMA_DESC_ENG_BYPASS_ONLY,
+	/** @QDMA_DESC_ENG_INTERNAL_ONLY - Only Internal mode  */
+	QDMA_DESC_ENG_INTERNAL_ONLY,
+	/** @QDMA_DESC_ENG_MODE_MAX - Max of desc engine modes  */
+	QDMA_DESC_ENG_MODE_MAX
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_ACCESS_EXPORT_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_access_version.h b/drivers/net/qdma/qdma_access/qdma_access_version.h
new file mode 100644
index 0000000000..d016a2a980
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_access_version.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_ACCESS_VERSION_H_
+#define __QDMA_ACCESS_VERSION_H_
+
+
+#define QDMA_VERSION_MAJOR	2020
+#define QDMA_VERSION_MINOR	2
+#define QDMA_VERSION_PATCH	0
+
+#define QDMA_VERSION_STR	\
+	__stringify(QDMA_VERSION_MAJOR) "." \
+	__stringify(QDMA_VERSION_MINOR) "." \
+	__stringify(QDMA_VERSION_PATCH)
+
+#define QDMA_VERSION  \
+	((QDMA_VERSION_MAJOR) * 1000 + \
+	 (QDMA_VERSION_MINOR) * 100 + \
+	  QDMA_VERSION_PATCH)
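+
+/* For the values above (2020.2.0), QDMA_VERSION evaluates to
+ * 2020 * 1000 + 2 * 100 + 0 = 2020200.
+ */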
+
+
+#endif /* __QDMA_ACCESS_VERSION_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_list.c b/drivers/net/qdma/qdma_access/qdma_list.c
new file mode 100644
index 0000000000..f53fce20cb
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_list.c
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_list.h"
+
+void qdma_list_init_head(struct qdma_list_head *head)
+{
+	if (head) {
+		head->prev = head;
+		head->next = head;
+	}
+}
+
+void qdma_list_add_tail(struct qdma_list_head *node,
+			  struct qdma_list_head *head)
+{
+	head->prev->next = node;
+	node->next = head;
+	node->prev = head->prev;
+	head->prev = node;
+}
+
+void qdma_list_insert_before(struct qdma_list_head *new_node,
+				    struct qdma_list_head *node)
+{
+	node->prev->next = new_node;
+	new_node->prev = node->prev;
+	new_node->next = node;
+	node->prev = new_node;
+}
+
+void qdma_list_insert_after(struct qdma_list_head *new_node,
+				   struct qdma_list_head *node)
+{
+	new_node->prev = node;
+	new_node->next = node->next;
+	node->next->prev = new_node;
+	node->next = new_node;
+}
+
+
+void qdma_list_del(struct qdma_list_head *node)
+{
+	if (node) {
+		if (node->prev)
+			node->prev->next = node->next;
+		if (node->next)
+			node->next->prev = node->prev;
+	}
+}
diff --git a/drivers/net/qdma/qdma_access/qdma_list.h b/drivers/net/qdma/qdma_access/qdma_list.h
new file mode 100644
index 0000000000..0f2789b6b1
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_list.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_LIST_H_
+#define __QDMA_LIST_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA common library provided list implementation definitions
+ *
+ * Header file *qdma_list.h* defines APIs for creating and managing lists.
+ */
+
+/**
+ * struct qdma_list_head - data type for creating a list node
+ */
+struct qdma_list_head {
+	struct qdma_list_head *prev;
+	struct qdma_list_head *next;
+	void *priv;
+};
+
+#define QDMA_LIST_GET_DATA(node) ((node)->priv)
+#define QDMA_LIST_SET_DATA(node, data) ((node)->priv = data)
+
+#define qdma_list_for_each_safe(pos, n, head) \
+	for (pos = (head)->next, n = pos->next; pos != (head); \
+		pos = n, n = pos->next)
+
+#define qdma_list_is_last_entry(entry, head) ((entry)->next == (head))
+
+static inline int qdma_list_is_empty(struct qdma_list_head *head)
+{
+	return (head->next == head);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_list_init_head(): Init the list head
+ *
+ * @head:     head of the list
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_list_init_head(struct qdma_list_head *head);
+
+/*****************************************************************************/
+/**
+ * qdma_list_add_tail(): add the given @node at the end of the list with @head
+ *
+ * @node:     new entry which has to be added at the end of the list with @head
+ * @head:     head of the list
+ *
+ * This API needs to be called while holding the lock on the list
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_list_add_tail(struct qdma_list_head *node,
+			  struct qdma_list_head *head);
+
+/*****************************************************************************/
+/**
+ * qdma_list_insert_before(): insert the given @new_node before @node
+ *
+ * @new_node:     new entry which has to be added before @node
+ * @node:         reference node in the list
+ *
+ * This API needs to be called while holding the lock on the list
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_list_insert_before(struct qdma_list_head *new_node,
+				    struct qdma_list_head *node);
+
+/*****************************************************************************/
+/**
+ * qdma_list_insert_after(): insert the given @new_node after @node
+ *
+ * @new_node:     new entry which has to be added after @node
+ * @node:         reference node in the list
+ *
+ * This API needs to be called while holding the lock on the list
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_list_insert_after(struct qdma_list_head *new_node,
+				   struct qdma_list_head *node);
+
+/*****************************************************************************/
+/**
+ * qdma_list_del(): delete a node from the list
+ *
+ * @node:     node in a list
+ *
+ * This API needs to be called while holding the lock on the list
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_list_del(struct qdma_list_head *node);
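+
+/* Illustrative usage (a sketch, not part of this patch): build a list and
+ * walk it safely while deleting nodes; the caller provides any locking.
+ *
+ *	struct qdma_list_head head, node_a, node_b;
+ *	struct qdma_list_head *pos, *tmp;
+ *
+ *	qdma_list_init_head(&head);
+ *	qdma_list_add_tail(&node_a, &head);
+ *	qdma_list_add_tail(&node_b, &head);
+ *	qdma_list_for_each_safe(pos, tmp, &head)
+ *		qdma_list_del(pos);
+ */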
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_LIST_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_mbox_protocol.c b/drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
new file mode 100644
index 0000000000..fb797ca380
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
@@ -0,0 +1,2107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_mbox_protocol.h"
+
+/** mailbox function status */
+#define MBOX_FN_STATUS			0x0
+/** shift value for mailbox function status in msg */
+#define		S_MBOX_FN_STATUS_IN_MSG	0
+/** mask value for mailbox function status in msg*/
+#define		M_MBOX_FN_STATUS_IN_MSG	0x1
+/** face value for mailbox function status in msg */
+#define		F_MBOX_FN_STATUS_IN_MSG	0x1
+
+/** shift value for out msg */
+#define		S_MBOX_FN_STATUS_OUT_MSG	1
+/** mask value for out msg */
+#define		M_MBOX_FN_STATUS_OUT_MSG	0x1
+/** face value for out msg */
+#define		F_MBOX_FN_STATUS_OUT_MSG	(1 << S_MBOX_FN_STATUS_OUT_MSG)
+/** shift value for status ack */
+#define		S_MBOX_FN_STATUS_ACK	2	/* PF only, ack status */
+/** mask value for status ack */
+#define		M_MBOX_FN_STATUS_ACK	0x1
+/** face value for status ack */
+#define		F_MBOX_FN_STATUS_ACK	(1 << S_MBOX_FN_STATUS_ACK)
+/** shift value for status src */
+#define		S_MBOX_FN_STATUS_SRC	4	/* PF only, source func.*/
+/** mask value for status src */
+#define		M_MBOX_FN_STATUS_SRC	0xFFF
+/** face value for status src */
+#define		G_MBOX_FN_STATUS_SRC(x)	\
+		(((x) >> S_MBOX_FN_STATUS_SRC) & M_MBOX_FN_STATUS_SRC)
+/** face value for mailbox function status */
+#define MBOX_FN_STATUS_MASK \
+		(F_MBOX_FN_STATUS_IN_MSG | \
+		 F_MBOX_FN_STATUS_OUT_MSG | \
+		 F_MBOX_FN_STATUS_ACK)
+
+/** mailbox function commands register */
+#define MBOX_FN_CMD			0x4
+/** shift value for send command */
+#define		S_MBOX_FN_CMD_SND	0
+/** mask value for send command */
+#define		M_MBOX_FN_CMD_SND	0x1
+/** face value for send command */
+#define		F_MBOX_FN_CMD_SND	(1 << S_MBOX_FN_CMD_SND)
+/** shift value for receive command */
+#define		S_MBOX_FN_CMD_RCV	1
+/** mask value for receive command */
+#define		M_MBOX_FN_CMD_RCV	0x1
+/** face value for receive command */
+#define		F_MBOX_FN_CMD_RCV	(1 << S_MBOX_FN_CMD_RCV)
+/** shift value for vf reset */
+#define		S_MBOX_FN_CMD_VF_RESET	3	/* TBD PF only: reset VF */
+/** mask value for vf reset */
+#define		M_MBOX_FN_CMD_VF_RESET	0x1
+/** mailbox isr vector register */
+#define MBOX_ISR_VEC			0x8
+/** shift value for isr vector */
+#define		S_MBOX_ISR_VEC		0
+/** mask value for isr vector */
+#define		M_MBOX_ISR_VEC		0x1F
+/** face value for isr vector */
+#define		V_MBOX_ISR_VEC(x)	((x) & M_MBOX_ISR_VEC)
+/** mailbox FN target register */
+#define MBOX_FN_TARGET			0xC
+/** shift value for FN target id */
+#define		S_MBOX_FN_TARGET_ID	0
+/** mask value for FN target id */
+#define		M_MBOX_FN_TARGET_ID	0xFFF
+/** face value for FN target id */
+#define		V_MBOX_FN_TARGET_ID(x)	((x) & M_MBOX_FN_TARGET_ID)
+/** mailbox isr enable register */
+#define MBOX_ISR_EN			0x10
+/** shift value for isr enable */
+#define		S_MBOX_ISR_EN		0
+/** mask value for isr enable */
+#define		M_MBOX_ISR_EN		0x1
+/** face value for isr enable */
+#define		F_MBOX_ISR_EN		0x1
+/** pf acknowledge base */
+#define MBOX_PF_ACK_BASE		0x20
+/** pf acknowledge step */
+#define MBOX_PF_ACK_STEP		4
+/** pf acknowledge count */
+#define MBOX_PF_ACK_COUNT		8
+/** mailbox incoming msg base */
+#define MBOX_IN_MSG_BASE		0x800
+/** mailbox outgoing msg base */
+#define MBOX_OUT_MSG_BASE		0xc00
+/** mailbox msg step */
+#define MBOX_MSG_STEP			4
+/** mailbox register max */
+#define MBOX_MSG_REG_MAX		32
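+
+/* Illustrative decode of the function status register (a sketch, not part
+ * of this patch; assumes the platform's qdma_reg_read() accessor): check
+ * for an incoming message and, on the PF side, extract its source function.
+ *
+ *	uint32_t v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+ *
+ *	if (v & F_MBOX_FN_STATUS_IN_MSG)
+ *		src_func_id = G_MBOX_FN_STATUS_SRC(v);
+ */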
+
+/**
+ * enum mbox_msg_op - mailbox messages opcode
+ */
+#define MBOX_MSG_OP_RSP_OFFSET	0x80
+enum mbox_msg_op {
+	/** @MBOX_OP_VF_BYE: vf offline, response not required */
+	MBOX_OP_VF_BYE,
+	/** @MBOX_OP_HELLO: vf online */
+	MBOX_OP_HELLO,
+	/** @MBOX_OP_FMAP: FMAP programming request */
+	MBOX_OP_FMAP,
+	/** @MBOX_OP_CSR: global CSR registers request */
+	MBOX_OP_CSR,
+	/** @MBOX_OP_QREQ: request queues */
+	MBOX_OP_QREQ,
+	/** @MBOX_OP_QNOTIFY_ADD: notify of queue addition */
+	MBOX_OP_QNOTIFY_ADD,
+	/** @MBOX_OP_QNOTIFY_DEL: notify of queue deletion */
+	MBOX_OP_QNOTIFY_DEL,
+	/** @MBOX_OP_GET_QACTIVE_CNT: get active q count */
+	MBOX_OP_GET_QACTIVE_CNT,
+	/** @MBOX_OP_QCTXT_WRT: queue context write */
+	MBOX_OP_QCTXT_WRT,
+	/** @MBOX_OP_QCTXT_RD: queue context read */
+	MBOX_OP_QCTXT_RD,
+	/** @MBOX_OP_QCTXT_CLR: queue context clear */
+	MBOX_OP_QCTXT_CLR,
+	/** @MBOX_OP_QCTXT_INV: queue context invalidate */
+	MBOX_OP_QCTXT_INV,
+	/** @MBOX_OP_INTR_CTXT_WRT: interrupt context write */
+	MBOX_OP_INTR_CTXT_WRT,
+	/** @MBOX_OP_INTR_CTXT_RD: interrupt context read */
+	MBOX_OP_INTR_CTXT_RD,
+	/** @MBOX_OP_INTR_CTXT_CLR: interrupt context clear */
+	MBOX_OP_INTR_CTXT_CLR,
+	/** @MBOX_OP_INTR_CTXT_INV: interrupt context invalidate */
+	MBOX_OP_INTR_CTXT_INV,
+	/** @MBOX_OP_RESET_PREPARE: PF to VF message for VF reset*/
+	MBOX_OP_RESET_PREPARE,
+	/** @MBOX_OP_RESET_DONE: PF reset done */
+	MBOX_OP_RESET_DONE,
+	/** @MBOX_OP_REG_LIST_READ: Read the register list */
+	MBOX_OP_REG_LIST_READ,
+	/** @MBOX_OP_PF_BYE: pf offline, response required */
+	MBOX_OP_PF_BYE,
+	/** @MBOX_OP_PF_RESET_VF_BYE: VF reset BYE, response required*/
+	MBOX_OP_PF_RESET_VF_BYE,
+
+	/** @MBOX_OP_HELLO_RESP: response to @MBOX_OP_HELLO */
+	MBOX_OP_HELLO_RESP = 0x81,
+	/** @MBOX_OP_FMAP_RESP: response to @MBOX_OP_FMAP */
+	MBOX_OP_FMAP_RESP,
+	/** @MBOX_OP_CSR_RESP: response to @MBOX_OP_CSR */
+	MBOX_OP_CSR_RESP,
+	/** @MBOX_OP_QREQ_RESP: response to @MBOX_OP_QREQ */
+	MBOX_OP_QREQ_RESP,
+	/** @MBOX_OP_QNOTIFY_ADD_RESP: response to @MBOX_OP_QNOTIFY_ADD */
+	MBOX_OP_QNOTIFY_ADD_RESP,
+	/** @MBOX_OP_QNOTIFY_DEL_RESP: response to @MBOX_OP_QNOTIFY_DEL */
+	MBOX_OP_QNOTIFY_DEL_RESP,
+	/** @MBOX_OP_GET_QACTIVE_CNT_RESP: response to @MBOX_OP_GET_QACTIVE_CNT */
+	MBOX_OP_GET_QACTIVE_CNT_RESP,
+	/** @MBOX_OP_QCTXT_WRT_RESP: response to @MBOX_OP_QCTXT_WRT */
+	MBOX_OP_QCTXT_WRT_RESP,
+	/** @MBOX_OP_QCTXT_RD_RESP: response to @MBOX_OP_QCTXT_RD */
+	MBOX_OP_QCTXT_RD_RESP,
+	/** @MBOX_OP_QCTXT_CLR_RESP: response to @MBOX_OP_QCTXT_CLR */
+	MBOX_OP_QCTXT_CLR_RESP,
+	/** @MBOX_OP_QCTXT_INV_RESP: response to @MBOX_OP_QCTXT_INV */
+	MBOX_OP_QCTXT_INV_RESP,
+	/** @MBOX_OP_INTR_CTXT_WRT_RESP: response to @MBOX_OP_INTR_CTXT_WRT */
+	MBOX_OP_INTR_CTXT_WRT_RESP,
+	/** @MBOX_OP_INTR_CTXT_RD_RESP: response to @MBOX_OP_INTR_CTXT_RD */
+	MBOX_OP_INTR_CTXT_RD_RESP,
+	/** @MBOX_OP_INTR_CTXT_CLR_RESP: response to @MBOX_OP_INTR_CTXT_CLR */
+	MBOX_OP_INTR_CTXT_CLR_RESP,
+	/** @MBOX_OP_INTR_CTXT_INV_RESP: response to @MBOX_OP_INTR_CTXT_INV */
+	MBOX_OP_INTR_CTXT_INV_RESP,
+	/** @MBOX_OP_RESET_PREPARE_RESP: response to @MBOX_OP_RESET_PREPARE */
+	MBOX_OP_RESET_PREPARE_RESP,
+	/** @MBOX_OP_RESET_DONE_RESP: response to @MBOX_OP_RESET_DONE */
+	MBOX_OP_RESET_DONE_RESP,
+	/** @MBOX_OP_REG_LIST_READ_RESP: response to @MBOX_OP_REG_LIST_READ */
+	MBOX_OP_REG_LIST_READ_RESP,
+	/** @MBOX_OP_PF_BYE_RESP: response to @MBOX_OP_PF_BYE */
+	MBOX_OP_PF_BYE_RESP,
+	/** @MBOX_OP_PF_RESET_VF_BYE_RESP:
+	 * response to @MBOX_OP_PF_RESET_VF_BYE
+	 */
+	MBOX_OP_PF_RESET_VF_BYE_RESP,
+	/** @MBOX_OP_MAX: total mbox opcodes*/
+	MBOX_OP_MAX
+};
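+
+/* Note: response opcodes are the request opcode plus
+ * MBOX_MSG_OP_RSP_OFFSET (0x80), e.g. MBOX_OP_HELLO (0x01) maps to
+ * MBOX_OP_HELLO_RESP (0x81).
+ */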
+
+/**
+ * struct mbox_msg_hdr - mailbox message header
+ */
+struct mbox_msg_hdr {
+	/** @op: opcode */
+	uint8_t op;
+	/** @status: execution status */
+	char status;
+	/** @src_func_id: src function */
+	uint16_t src_func_id;
+	/** @dst_func_id: dst function */
+	uint16_t dst_func_id;
+};
+
+/**
+ * struct mbox_msg_hello - hello message (VF online notification)
+ */
+struct mbox_msg_hello {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @qbase: start queue number in the queue range */
+	uint32_t qbase;
+	/** @qmax: max queue number in the queue range(0-2k) */
+	uint32_t qmax;
+	/** @dev_cap: device capability */
+	struct qdma_dev_attributes dev_cap;
+	/** @dma_device_index: dma_device_index */
+	uint32_t dma_device_index;
+};
+
+/**
+ * struct mbox_msg_active_qcnt - get active queue count command
+ */
+struct mbox_msg_active_qcnt {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @h2c_queues: number of h2c queues */
+	uint32_t h2c_queues;
+	/** @c2h_queues: number of c2h queues */
+	uint32_t c2h_queues;
+	/** @cmpt_queues: number of cmpt queues */
+	uint32_t cmpt_queues;
+};
+
+/**
+ * struct mbox_msg_fmap - FMAP programming command
+ */
+struct mbox_msg_fmap {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @qbase: start queue number in the queue range */
+	int qbase;
+	/** @qmax: max queue number in the queue range(0-2k) */
+	uint32_t qmax;
+};
+
+/**
+ * struct mbox_msg_csr - mailbox csr reading message
+ */
+struct mbox_msg_csr {
+	/** @hdr - mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @csr_info: csr info data structure */
+	struct qdma_csr_info csr_info;
+};
+
+/**
+ * struct mbox_msg_q_nitfy - queue add/del notify message
+ */
+struct mbox_msg_q_nitfy {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @qid_hw: queue ID */
+	uint16_t qid_hw;
+	/** @q_type: type of q */
+	enum qdma_dev_q_type q_type;
+};
+
+/**
+ * @struct - mbox_msg_qctxt
+ * @brief queue context mailbox message header
+ */
+struct mbox_msg_qctxt {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @qid_hw: queue ID */
+	uint16_t qid_hw;
+	/** @st: streaming mode */
+	uint8_t st:1;
+	/** @c2h: c2h direction */
+	uint8_t c2h:1;
+	/** @cmpt_ctxt_type: completion context type */
+	enum mbox_cmpt_ctxt_type cmpt_ctxt_type:2;
+	/** @rsvd: reserved */
+	uint8_t rsvd:4;
+	/** union compiled_message - complete hw configuration */
+	union {
+		/** @descq_conf: mailbox message for queue context write */
+		struct mbox_descq_conf descq_conf;
+		/** @descq_ctxt: mailbox message for queue context read */
+		struct qdma_descq_context descq_ctxt;
+	};
+};
+
+/**
+ * @struct - mbox_intr_ctxt
+ * @brief	interrupt context mailbox message
+ */
+struct mbox_intr_ctxt {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @ctxt: interrupt context mailbox message */
+	struct mbox_msg_intr_ctxt ctxt;
+};
+
+/**
+ * @struct - mbox_read_reg_list
+ * @brief read register mailbox message header
+ */
+struct mbox_read_reg_list {
+	/** @hdr: mailbox message header */
+	struct mbox_msg_hdr hdr;
+	/** @group_num: reg group to read */
+	uint16_t group_num;
+	/** @num_regs: number of registers to read */
+	uint16_t num_regs;
+	/** @reg_list: register list */
+	struct qdma_reg_data reg_list[QDMA_MAX_REGISTER_DUMP];
+};
+
+union qdma_mbox_txrx {
+		/** mailbox message header */
+		struct mbox_msg_hdr hdr;
+		/** hello mailbox message */
+		struct mbox_msg_hello hello;
+		/** fmap mailbox message */
+		struct mbox_msg_fmap fmap;
+		/** interrupt context mailbox message */
+		struct mbox_intr_ctxt intr_ctxt;
+		/** queue context mailbox message */
+		struct mbox_msg_qctxt qctxt;
+		/** global csr mailbox message */
+		struct mbox_msg_csr csr;
+		/** active queue count message */
+		struct mbox_msg_active_qcnt qcnt;
+		/** q add/del notify message */
+		struct mbox_msg_q_nitfy q_notify;
+		/** reg list mailbox message */
+		struct mbox_read_reg_list reg_read_list;
+		/** buffer to hold raw data between pf and vf */
+		uint32_t raw[MBOX_MSG_REG_MAX];
+};
+
+
+static inline uint32_t get_mbox_offset(void *dev_hndl, uint8_t is_vf)
+{
+	uint32_t mbox_base;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+	mbox_base = (is_vf) ?
+		hw->mbox_base_vf : hw->mbox_base_pf;
+
+	return mbox_base;
+}
+
+static inline void mbox_pf_hw_clear_func_ack(void *dev_hndl, uint16_t func_id)
+{
+	int idx = func_id / 32; /* bitmask, uint32_t reg */
+	int bit = func_id % 32;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, 0);
+
+	/* clear the function's ack status */
+	qdma_reg_write(dev_hndl,
+			mbox_base + MBOX_PF_ACK_BASE + idx * MBOX_PF_ACK_STEP,
+			(1 << bit));
+}
+
+static void qdma_mbox_memcpy(void *to, void *from, uint8_t size)
+{
+	uint8_t i;
+	uint8_t *_to = (uint8_t *)to;
+	uint8_t *_from = (uint8_t *)from;
+
+	for (i = 0; i < size; i++)
+		_to[i] = _from[i];
+}
+
+static void qdma_mbox_memset(void *to, uint8_t val, uint8_t size)
+{
+	uint8_t i;
+	uint8_t *_to = (uint8_t *)to;
+
+	for (i = 0; i < size; i++)
+		_to[i] = val;
+}
+
+static int get_ring_idx(void *dev_hndl, uint16_t ring_sz, uint16_t *rng_idx)
+{
+	uint32_t rng_sz[QDMA_GLOBAL_CSR_ARRAY_SZ] = { 0 };
+	int i, rv;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+	rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+			QDMA_GLOBAL_CSR_ARRAY_SZ, rng_sz,
+			QDMA_CSR_RING_SZ, QDMA_HW_ACCESS_READ);
+
+	if (rv)
+		return rv;
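+	/*
+	 * One descriptor of each ring is reserved for the status/write-back
+	 * entry, so the usable ring size matches a global CSR entry minus
+	 * one.
+	 */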
+	for (i = 0; i < QDMA_GLOBAL_CSR_ARRAY_SZ; i++) {
+		if (ring_sz == (rng_sz[i] - 1)) {
+			*rng_idx = i;
+			return QDMA_SUCCESS;
+		}
+	}
+
+	qdma_log_error("%s: Ring size not found, err:%d\n",
+				   __func__, -QDMA_ERR_MBOX_INV_RINGSZ);
+	return -QDMA_ERR_MBOX_INV_RINGSZ;
+}
+
+static int get_buf_idx(void *dev_hndl,  uint16_t buf_sz, uint16_t *buf_idx)
+{
+	uint32_t c2h_buf_sz[QDMA_GLOBAL_CSR_ARRAY_SZ] = { 0 };
+	int i, rv;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+			QDMA_GLOBAL_CSR_ARRAY_SZ, c2h_buf_sz,
+			QDMA_CSR_BUF_SZ, QDMA_HW_ACCESS_READ);
+	if (rv)
+		return rv;
+	for (i = 0; i < QDMA_GLOBAL_CSR_ARRAY_SZ; i++) {
+		if (c2h_buf_sz[i] == buf_sz) {
+			*buf_idx = i;
+			return QDMA_SUCCESS;
+		}
+	}
+
+	qdma_log_error("%s: Buf index not found, err:%d\n",
+				   __func__, -QDMA_ERR_MBOX_INV_BUFSZ);
+	return -QDMA_ERR_MBOX_INV_BUFSZ;
+}
+
+static int get_cntr_idx(void *dev_hndl, uint8_t cntr_val, uint8_t *cntr_idx)
+{
+	uint32_t cntr_th[QDMA_GLOBAL_CSR_ARRAY_SZ] = { 0 };
+	int i, rv;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+			QDMA_GLOBAL_CSR_ARRAY_SZ, cntr_th,
+			QDMA_CSR_CNT_TH, QDMA_HW_ACCESS_READ);
+
+	if (rv)
+		return rv;
+	for (i = 0; i < QDMA_GLOBAL_CSR_ARRAY_SZ; i++) {
+		if (cntr_th[i] == cntr_val) {
+			*cntr_idx = i;
+			return QDMA_SUCCESS;
+		}
+	}
+
+	qdma_log_error("%s: Counter val not found, err:%d\n",
+				   __func__, -QDMA_ERR_MBOX_INV_CNTR_TH);
+	return -QDMA_ERR_MBOX_INV_CNTR_TH;
+}
+
+static int get_tmr_idx(void *dev_hndl, uint8_t tmr_val, uint8_t *tmr_idx)
+{
+	uint32_t tmr_th[QDMA_GLOBAL_CSR_ARRAY_SZ] = { 0 };
+	int i, rv;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+			QDMA_GLOBAL_CSR_ARRAY_SZ, tmr_th,
+			QDMA_CSR_TIMER_CNT, QDMA_HW_ACCESS_READ);
+	if (rv)
+		return rv;
+	for (i = 0; i < QDMA_GLOBAL_CSR_ARRAY_SZ; i++) {
+		if (tmr_th[i] == tmr_val) {
+			*tmr_idx = i;
+			return QDMA_SUCCESS;
+		}
+	}
+
+	qdma_log_error("%s: Timer val not found, err:%d\n",
+				   __func__, -QDMA_ERR_MBOX_INV_TMR_TH);
+	return -QDMA_ERR_MBOX_INV_TMR_TH;
+}
+
+static int mbox_compose_sw_context(void *dev_hndl,
+				   struct mbox_msg_qctxt *qctxt,
+				   struct qdma_descq_sw_ctxt *sw_ctxt)
+{
+	uint16_t rng_idx = 0;
+	int rv = QDMA_SUCCESS;
+
+	if (!qctxt || !sw_ctxt) {
+		qdma_log_error("%s: qctxt=%p sw_ctxt=%p, err:%d\n",
+						__func__,
+						qctxt, sw_ctxt,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = get_ring_idx(dev_hndl, qctxt->descq_conf.ringsz, &rng_idx);
+	if (rv < 0) {
+		qdma_log_error("%s: failed to get ring index, err:%d\n",
+						__func__, rv);
+		return rv;
+	}
+	/* compose sw context */
+	sw_ctxt->vec = qctxt->descq_conf.intr_id;
+	sw_ctxt->intr_aggr = qctxt->descq_conf.intr_aggr;
+
+	sw_ctxt->ring_bs_addr = qctxt->descq_conf.ring_bs_addr;
+	sw_ctxt->wbi_chk = qctxt->descq_conf.wbi_chk;
+	sw_ctxt->wbi_intvl_en = qctxt->descq_conf.wbi_intvl_en;
+	sw_ctxt->rngsz_idx = rng_idx;
+	sw_ctxt->bypass = qctxt->descq_conf.en_bypass;
+	sw_ctxt->wbk_en = qctxt->descq_conf.wbk_en;
+	sw_ctxt->irq_en = qctxt->descq_conf.irq_en;
+	sw_ctxt->is_mm = ~qctxt->st;
+	sw_ctxt->mm_chn = 0;
+	sw_ctxt->qen = 1;
+	sw_ctxt->frcd_en = qctxt->descq_conf.forced_en;
+
+	sw_ctxt->desc_sz = qctxt->descq_conf.desc_sz;
+
+	/* pidx = 0; irq_ack = 0 */
+	sw_ctxt->fnc_id = qctxt->descq_conf.func_id;
+	sw_ctxt->irq_arm =  qctxt->descq_conf.irq_arm;
+
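+	/*
+	 * For ST C2H queues the completion ring carries the write-back
+	 * status and interrupts, so the descriptor-ring equivalents are
+	 * disabled here.
+	 */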
+	if (qctxt->st && qctxt->c2h) {
+		sw_ctxt->irq_en = 0;
+		sw_ctxt->irq_arm = 0;
+		sw_ctxt->wbk_en = 0;
+		sw_ctxt->wbi_chk = 0;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+static int mbox_compose_prefetch_context(void *dev_hndl,
+					 struct mbox_msg_qctxt *qctxt,
+				 struct qdma_descq_prefetch_ctxt *pfetch_ctxt)
+{
+	uint16_t buf_idx = 0;
+	int rv = QDMA_SUCCESS;
+
+	if (!qctxt || !pfetch_ctxt) {
+		qdma_log_error("%s: qctxt=%p pfetch_ctxt=%p, err:%d\n",
+					   __func__,
+					   qctxt,
+					   pfetch_ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	rv = get_buf_idx(dev_hndl, qctxt->descq_conf.bufsz, &buf_idx);
+	if (rv < 0) {
+		qdma_log_error("%s: failed to get buf index, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return rv;
+	}
+	/* prefetch context */
+	pfetch_ctxt->valid = 1;
+	pfetch_ctxt->bypass = qctxt->descq_conf.en_bypass_prefetch;
+	pfetch_ctxt->bufsz_idx = buf_idx;
+	pfetch_ctxt->pfch_en = qctxt->descq_conf.pfch_en;
+
+	return QDMA_SUCCESS;
+}
+
+static int mbox_compose_cmpt_context(void *dev_hndl,
+				     struct mbox_msg_qctxt *qctxt,
+				     struct qdma_descq_cmpt_ctxt *cmpt_ctxt)
+{
+	uint16_t rng_idx = 0;
+	uint8_t cntr_idx = 0, tmr_idx = 0;
+	int rv = QDMA_SUCCESS;
+
+	if (!qctxt || !cmpt_ctxt) {
+		qdma_log_error("%s: qctxt=%p cmpt_ctxt=%p, err:%d\n",
+					   __func__, qctxt, cmpt_ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	rv = get_cntr_idx(dev_hndl, qctxt->descq_conf.cnt_thres, &cntr_idx);
+	if (rv < 0)
+		return rv;
+	rv = get_tmr_idx(dev_hndl, qctxt->descq_conf.timer_thres, &tmr_idx);
+	if (rv < 0)
+		return rv;
+	rv = get_ring_idx(dev_hndl, qctxt->descq_conf.cmpt_ringsz, &rng_idx);
+	if (rv < 0)
+		return rv;
+	/* writeback context */
+
+	cmpt_ctxt->bs_addr = qctxt->descq_conf.cmpt_ring_bs_addr;
+	cmpt_ctxt->en_stat_desc = qctxt->descq_conf.cmpl_stat_en;
+	cmpt_ctxt->en_int = qctxt->descq_conf.cmpt_int_en;
+	cmpt_ctxt->trig_mode = qctxt->descq_conf.triggermode;
+	cmpt_ctxt->fnc_id = qctxt->descq_conf.func_id;
+	cmpt_ctxt->timer_idx = tmr_idx;
+	cmpt_ctxt->counter_idx = cntr_idx;
+	cmpt_ctxt->color = 1;
+	cmpt_ctxt->ringsz_idx = rng_idx;
+
+	cmpt_ctxt->desc_sz = qctxt->descq_conf.cmpt_desc_sz;
+
+	cmpt_ctxt->valid = 1;
+
+	cmpt_ctxt->ovf_chk_dis = qctxt->descq_conf.dis_overflow_check;
+	cmpt_ctxt->vec = qctxt->descq_conf.intr_id;
+	cmpt_ctxt->int_aggr = qctxt->descq_conf.intr_aggr;
+
+	return QDMA_SUCCESS;
+}
+
+static int mbox_clear_queue_contexts(void *dev_hndl, uint8_t dma_device_index,
+			      uint16_t func_id, uint16_t qid_hw, uint8_t st,
+			      uint8_t c2h,
+			      enum mbox_cmpt_ctxt_type cmpt_ctxt_type)
+{
+	int rv;
+	int qbase;
+	uint32_t qmax;
+	enum qdma_dev_q_range q_range;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
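+	/*
+	 * A CMPT-only request clears just the completion context; any other
+	 * request is validated against the function's queue range and then
+	 * clears the SW, HW and credit contexts, plus the prefetch context
+	 * for ST C2H and the completion context when paired with MM/ST.
+	 */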
+	if (cmpt_ctxt_type == QDMA_MBOX_CMPT_CTXT_ONLY) {
+		rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+					    NULL, QDMA_HW_ACCESS_CLEAR);
+		if (rv < 0) {
+			qdma_log_error("%s: clear cmpt ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+	} else {
+		rv = qdma_dev_qinfo_get(dma_device_index,
+				func_id, &qbase, &qmax);
+		if (rv < 0) {
+			qdma_log_error("%s: failed to get qinfo, err:%d\n",
+					__func__, rv);
+			return rv;
+		}
+
+		q_range = qdma_dev_is_queue_in_range(dma_device_index,
+						func_id, qid_hw);
+		if (q_range != QDMA_DEV_Q_IN_RANGE) {
+			qdma_log_error("%s: q_range invalid, err:%d\n",
+					__func__, -QDMA_ERR_MBOX_INV_QID);
+			return -QDMA_ERR_MBOX_INV_QID;
+		}
+
+		rv = hw->qdma_sw_ctx_conf(dev_hndl, c2h, qid_hw,
+					  NULL, QDMA_HW_ACCESS_CLEAR);
+		if (rv < 0) {
+			qdma_log_error("%s: clear sw_ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		rv = hw->qdma_hw_ctx_conf(dev_hndl, c2h, qid_hw, NULL,
+					       QDMA_HW_ACCESS_CLEAR);
+		if (rv < 0) {
+			qdma_log_error("%s: clear hw_ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		rv = hw->qdma_credit_ctx_conf(dev_hndl, c2h, qid_hw, NULL,
+					       QDMA_HW_ACCESS_CLEAR);
+		if (rv < 0) {
+			qdma_log_error("%s: clear cr_ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		if (st && c2h) {
+			rv = hw->qdma_pfetch_ctx_conf(dev_hndl, qid_hw,
+						       NULL,
+						       QDMA_HW_ACCESS_CLEAR);
+			if (rv < 0) {
+				qdma_log_error("%s:clear pfetch ctxt, err:%d\n",
+						__func__, rv);
+				return rv;
+			}
+		}
+
+		if (cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_MM ||
+		    cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_ST) {
+			rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+						     NULL,
+						     QDMA_HW_ACCESS_CLEAR);
+			if (rv < 0) {
+				qdma_log_error("%s: clear cmpt ctxt, err:%d\n",
+							__func__, rv);
+				return rv;
+			}
+		}
+	}
+
+	return QDMA_SUCCESS;
+}
+
+static int mbox_invalidate_queue_contexts(void *dev_hndl,
+		uint8_t dma_device_index, uint16_t func_id,
+		uint16_t qid_hw, uint8_t st,
+		uint8_t c2h, enum mbox_cmpt_ctxt_type cmpt_ctxt_type)
+{
+	int rv;
+	int qbase;
+	uint32_t qmax;
+	enum qdma_dev_q_range q_range;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	if (cmpt_ctxt_type == QDMA_MBOX_CMPT_CTXT_ONLY) {
+		rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw, NULL,
+					    QDMA_HW_ACCESS_INVALIDATE);
+		if (rv < 0) {
+			qdma_log_error("%s: inv cmpt ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+	} else {
+		rv = qdma_dev_qinfo_get(dma_device_index, func_id,
+				&qbase, &qmax);
+		if (rv < 0) {
+			qdma_log_error("%s: failed to get qinfo, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		q_range = qdma_dev_is_queue_in_range(dma_device_index,
+						func_id, qid_hw);
+		if (q_range != QDMA_DEV_Q_IN_RANGE) {
+			qdma_log_error("%s: Invalid qrange, err:%d\n",
+					__func__, -QDMA_ERR_MBOX_INV_QID);
+			return -QDMA_ERR_MBOX_INV_QID;
+		}
+
+		rv = hw->qdma_sw_ctx_conf(dev_hndl, c2h, qid_hw,
+					  NULL, QDMA_HW_ACCESS_INVALIDATE);
+		if (rv < 0) {
+			qdma_log_error("%s: inv sw ctxt, err:%d\n",
+							__func__, rv);
+			return rv;
+		}
+
+		rv = hw->qdma_hw_ctx_conf(dev_hndl, c2h, qid_hw, NULL,
+				QDMA_HW_ACCESS_INVALIDATE);
+		if (rv < 0) {
+			qdma_log_error("%s: inv hw ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		rv = hw->qdma_credit_ctx_conf(dev_hndl, c2h, qid_hw, NULL,
+				QDMA_HW_ACCESS_INVALIDATE);
+		if (rv < 0) {
+			qdma_log_error("%s: inv credit ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		if (st && c2h) {
+			rv = hw->qdma_pfetch_ctx_conf(dev_hndl, qid_hw,
+						NULL,
+						QDMA_HW_ACCESS_INVALIDATE);
+			if (rv < 0) {
+				qdma_log_error("%s: inv pfetch ctxt, err:%d\n",
+						__func__, rv);
+				return rv;
+			}
+		}
+
+		if (cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_MM ||
+		    cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_ST) {
+			rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+						NULL,
+						QDMA_HW_ACCESS_INVALIDATE);
+			if (rv < 0) {
+				qdma_log_error("%s: inv cmpt ctxt, err:%d\n",
+						__func__, rv);
+				return rv;
+			}
+		}
+	}
+
+	return QDMA_SUCCESS;
+}
+
+static int mbox_write_queue_contexts(void *dev_hndl, uint8_t dma_device_index,
+				     struct mbox_msg_qctxt *qctxt)
+{
+	int rv;
+	int qbase;
+	uint32_t qmax;
+	enum qdma_dev_q_range q_range;
+	struct qdma_descq_context descq_ctxt;
+	uint16_t qid_hw = qctxt->qid_hw;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	rv = qdma_dev_qinfo_get(dma_device_index, qctxt->descq_conf.func_id,
+				&qbase, &qmax);
+	if (rv < 0)
+		return rv;
+
+	q_range = qdma_dev_is_queue_in_range(dma_device_index,
+					     qctxt->descq_conf.func_id,
+					     qctxt->qid_hw);
+	if (q_range != QDMA_DEV_Q_IN_RANGE) {
+		qdma_log_error("%s: Invalid qrange, err:%d\n",
+				__func__, -QDMA_ERR_MBOX_INV_QID);
+		return -QDMA_ERR_MBOX_INV_QID;
+	}
+
+	qdma_mbox_memset(&descq_ctxt, 0, sizeof(struct qdma_descq_context));
+
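+	/* contexts are always cleared before being (re)programmed */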
+	if (qctxt->cmpt_ctxt_type == QDMA_MBOX_CMPT_CTXT_ONLY) {
+		rv = mbox_compose_cmpt_context(dev_hndl, qctxt,
+			       &descq_ctxt.cmpt_ctxt);
+		if (rv < 0)
+			return rv;
+
+		rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+					    NULL, QDMA_HW_ACCESS_CLEAR);
+		if (rv < 0) {
+			qdma_log_error("%s: clear cmpt ctxt, err:%d\n",
+								__func__, rv);
+			return rv;
+		}
+
+		rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+			     &descq_ctxt.cmpt_ctxt, QDMA_HW_ACCESS_WRITE);
+		if (rv < 0) {
+			qdma_log_error("%s: write cmpt ctxt, err:%d\n",
+								__func__, rv);
+			return rv;
+		}
+
+	} else {
+		rv = mbox_compose_sw_context(dev_hndl, qctxt,
+				&descq_ctxt.sw_ctxt);
+		if (rv < 0)
+			return rv;
+
+		if (qctxt->st && qctxt->c2h) {
+			rv = mbox_compose_prefetch_context(dev_hndl, qctxt,
+						&descq_ctxt.pfetch_ctxt);
+			if (rv < 0)
+				return rv;
+		}
+
+		if (qctxt->cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_MM ||
+		    qctxt->cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_ST) {
+			rv = mbox_compose_cmpt_context(dev_hndl, qctxt,
+							&descq_ctxt.cmpt_ctxt);
+			if (rv < 0)
+				return rv;
+		}
+
+		rv = mbox_clear_queue_contexts(dev_hndl, dma_device_index,
+					qctxt->descq_conf.func_id,
+					qctxt->qid_hw,
+					qctxt->st,
+					qctxt->c2h,
+					qctxt->cmpt_ctxt_type);
+		if (rv < 0)
+			return rv;
+		rv = hw->qdma_sw_ctx_conf(dev_hndl, qctxt->c2h, qid_hw,
+					   &descq_ctxt.sw_ctxt,
+					   QDMA_HW_ACCESS_WRITE);
+		if (rv < 0) {
+			qdma_log_error("%s: write sw ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+
+		if (qctxt->st && qctxt->c2h) {
+			rv = hw->qdma_pfetch_ctx_conf(dev_hndl, qid_hw,
+						       &descq_ctxt.pfetch_ctxt,
+						       QDMA_HW_ACCESS_WRITE);
+			if (rv < 0) {
+				qdma_log_error("%s:write pfetch ctxt, err:%d\n",
+						__func__, rv);
+				return rv;
+			}
+		}
+
+		if (qctxt->cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_MM ||
+		    qctxt->cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_ST) {
+			rv = hw->qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+						     &descq_ctxt.cmpt_ctxt,
+						     QDMA_HW_ACCESS_WRITE);
+			if (rv < 0) {
+				qdma_log_error("%s: write cmpt ctxt, err:%d\n",
+						__func__, rv);
+				return rv;
+			}
+		}
+	}
+	return QDMA_SUCCESS;
+}
+
+static int mbox_read_queue_contexts(void *dev_hndl, uint16_t qid_hw,
+			uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			struct qdma_descq_context *ctxt)
+{
+	int rv;
+	struct qdma_hw_access *hw = NULL;
+
+	qdma_get_hw_access(dev_hndl, &hw);
+
+	rv = hw->qdma_sw_ctx_conf(dev_hndl, c2h, qid_hw, &ctxt->sw_ctxt,
+				  QDMA_HW_ACCESS_READ);
+	if (rv < 0) {
+		qdma_log_error("%s: read sw ctxt, err:%d\n",
+					__func__, rv);
+		return rv;
+	}
+
+	rv = hw->qdma_hw_ctx_conf(dev_hndl, c2h, qid_hw, &ctxt->hw_ctxt,
+				  QDMA_HW_ACCESS_READ);
+	if (rv < 0) {
+		qdma_log_error("%s: read hw ctxt, err:%d\n",
+					__func__, rv);
+		return rv;
+	}
+
+	rv = hw->qdma_credit_ctx_conf(dev_hndl, c2h, qid_hw, &ctxt->cr_ctxt,
+				      QDMA_HW_ACCESS_READ);
+	if (rv < 0) {
+		qdma_log_error("%s: read credit ctxt, err:%d\n",
+					__func__, rv);
+		return rv;
+	}
+
+	if (st && c2h) {
+		rv = hw->qdma_pfetch_ctx_conf(dev_hndl,
+					qid_hw, &ctxt->pfetch_ctxt,
+					QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s: read pfetch ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+	}
+
+	if (cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_MM ||
+	    cmpt_ctxt_type == QDMA_MBOX_CMPT_WITH_ST) {
+		rv = hw->qdma_cmpt_ctx_conf(dev_hndl,
+					qid_hw, &ctxt->cmpt_ctxt,
+					QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s: read cmpt ctxt, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+	}
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_pf_rcv_msg_handler(void *dev_hndl, uint8_t dma_device_index,
+				 uint16_t func_id, uint32_t *rcv_msg,
+				 uint32_t *resp_msg)
+{
+	union qdma_mbox_txrx *rcv =  (union qdma_mbox_txrx *)rcv_msg;
+	union qdma_mbox_txrx *resp =  (union qdma_mbox_txrx *)resp_msg;
+	struct mbox_msg_hdr *hdr = &rcv->hdr;
+	struct qdma_hw_access *hw = NULL;
+	int rv = QDMA_SUCCESS;
+	int ret = 0;
+
+	if (!rcv) {
+		qdma_log_error("%s: rcv_msg=%p failure:%d\n",
+						__func__, rcv,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	qdma_get_hw_access(dev_hndl, &hw);
+
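+	/*
+	 * rv carries the execution status returned to the VF in the
+	 * response header; ret reports mailbox events (e.g.
+	 * QDMA_MBOX_VF_ONLINE/OFFLINE) back to the PF driver.
+	 */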
+	switch (rcv->hdr.op) {
+	case MBOX_OP_VF_BYE:
+	{
+		struct qdma_fmap_cfg fmap;
+
+		fmap.qbase = 0;
+		fmap.qmax = 0;
+		rv = hw->qdma_fmap_conf(dev_hndl, hdr->src_func_id, &fmap,
+					QDMA_HW_ACCESS_WRITE);
+
+		qdma_dev_entry_destroy(dma_device_index, hdr->src_func_id);
+
+		ret = QDMA_MBOX_VF_OFFLINE;
+	}
+	break;
+	case MBOX_OP_PF_RESET_VF_BYE:
+	{
+		struct qdma_fmap_cfg fmap;
+
+		fmap.qbase = 0;
+		fmap.qmax = 0;
+		rv = hw->qdma_fmap_conf(dev_hndl, hdr->src_func_id, &fmap,
+					QDMA_HW_ACCESS_WRITE);
+
+		qdma_dev_entry_destroy(dma_device_index, hdr->src_func_id);
+
+		ret = QDMA_MBOX_VF_RESET_BYE;
+	}
+	break;
+	case MBOX_OP_HELLO:
+	{
+		struct mbox_msg_fmap *fmap = &rcv->fmap;
+		struct qdma_fmap_cfg fmap_cfg;
+		struct mbox_msg_hello *rsp_hello = &resp->hello;
+
+		rv = qdma_dev_qinfo_get(dma_device_index, hdr->src_func_id,
+				&fmap->qbase, &fmap->qmax);
+		if (rv < 0)
+			rv = qdma_dev_entry_create(dma_device_index,
+					hdr->src_func_id);
+
+		if (!rv) {
+			rsp_hello->qbase = fmap->qbase;
+			rsp_hello->qmax = fmap->qmax;
+			rsp_hello->dma_device_index = dma_device_index;
+			hw->qdma_get_device_attributes(dev_hndl,
+						       &rsp_hello->dev_cap);
+		}
+		qdma_mbox_memset(&fmap_cfg, 0,
+				 sizeof(struct qdma_fmap_cfg));
+		hw->qdma_fmap_conf(dev_hndl, hdr->src_func_id, &fmap_cfg,
+				   QDMA_HW_ACCESS_WRITE);
+
+		ret = QDMA_MBOX_VF_ONLINE;
+	}
+	break;
+	case MBOX_OP_FMAP:
+	{
+		struct mbox_msg_fmap *fmap = &rcv->fmap;
+		struct qdma_fmap_cfg fmap_cfg;
+
+		fmap_cfg.qbase = fmap->qbase;
+		fmap_cfg.qmax = fmap->qmax;
+
+		rv = hw->qdma_fmap_conf(dev_hndl, hdr->src_func_id,
+				     &fmap_cfg, QDMA_HW_ACCESS_WRITE);
+		if (rv < 0) {
+			qdma_log_error("%s: failed to write fmap, err:%d\n",
+						__func__, rv);
+			return rv;
+		}
+	}
+	break;
+	case MBOX_OP_CSR:
+	{
+		struct mbox_msg_csr *rsp_csr = &resp->csr;
+		struct qdma_dev_attributes dev_cap;
+
+		uint32_t ringsz[QDMA_GLOBAL_CSR_ARRAY_SZ] = {0};
+		uint32_t bufsz[QDMA_GLOBAL_CSR_ARRAY_SZ] = {0};
+		uint32_t tmr_th[QDMA_GLOBAL_CSR_ARRAY_SZ] = {0};
+		uint32_t cntr_th[QDMA_GLOBAL_CSR_ARRAY_SZ] = {0};
+		int i;
+
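+		/*
+		 * Read each global CSR class; a class the device does not
+		 * support is tolerated so a partially filled response can
+		 * still be returned to the VF.
+		 */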
+		rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+				QDMA_GLOBAL_CSR_ARRAY_SZ, ringsz,
+				QDMA_CSR_RING_SZ, QDMA_HW_ACCESS_READ);
+		if (rv < 0)
+			goto exit_func;
+
+		hw->qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+		if (dev_cap.st_en) {
+			rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+				QDMA_GLOBAL_CSR_ARRAY_SZ, bufsz,
+				QDMA_CSR_BUF_SZ, QDMA_HW_ACCESS_READ);
+			if (rv < 0 &&
+				(rv != -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED))
+				goto exit_func;
+		}
+
+		if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+			rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+				QDMA_GLOBAL_CSR_ARRAY_SZ, tmr_th,
+				QDMA_CSR_TIMER_CNT, QDMA_HW_ACCESS_READ);
+			if (rv < 0 &&
+				(rv != -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED))
+				goto exit_func;
+
+			rv = hw->qdma_global_csr_conf(dev_hndl, 0,
+				QDMA_GLOBAL_CSR_ARRAY_SZ, cntr_th,
+				QDMA_CSR_CNT_TH, QDMA_HW_ACCESS_READ);
+			if (rv < 0 &&
+				(rv != -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED))
+				goto exit_func;
+		}
+
+		for (i = 0; i < QDMA_GLOBAL_CSR_ARRAY_SZ; i++) {
+			rsp_csr->csr_info.ringsz[i] = ringsz[i] &
+					0xFFFF;
+			if (!rv) {
+				rsp_csr->csr_info.bufsz[i] = bufsz[i] & 0xFFFF;
+				rsp_csr->csr_info.timer_cnt[i] = tmr_th[i] &
+						0xFF;
+				rsp_csr->csr_info.cnt_thres[i] = cntr_th[i] &
+						0xFF;
+			}
+		}
+
+		if (rv == -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED)
+			rv = QDMA_SUCCESS;
+	}
+	break;
+	case MBOX_OP_QREQ:
+	{
+		struct mbox_msg_fmap *fmap = &rcv->fmap;
+
+		rv = qdma_dev_update(dma_device_index,
+					  hdr->src_func_id,
+					  fmap->qmax, &fmap->qbase);
+		if (rv == 0) {
+			rv = qdma_dev_qinfo_get(dma_device_index,
+						hdr->src_func_id,
+						&resp->fmap.qbase,
+						&resp->fmap.qmax);
+		}
+		if (rv < 0) {
+			rv = -QDMA_ERR_MBOX_NUM_QUEUES;
+		} else {
+			struct qdma_fmap_cfg fmap_cfg;
+
+			qdma_mbox_memset(&fmap_cfg, 0,
+					 sizeof(struct qdma_fmap_cfg));
+			hw->qdma_fmap_conf(dev_hndl, hdr->src_func_id,
+					&fmap_cfg, QDMA_HW_ACCESS_WRITE);
+		}
+	}
+	break;
+	case MBOX_OP_QNOTIFY_ADD:
+	{
+		struct mbox_msg_q_nitfy *q_notify = &rcv->q_notify;
+		enum qdma_dev_q_range q_range;
+
+		q_range = qdma_dev_is_queue_in_range(dma_device_index,
+				q_notify->hdr.src_func_id,
+				q_notify->qid_hw);
+		if (q_range != QDMA_DEV_Q_IN_RANGE)
+			rv = -QDMA_ERR_MBOX_INV_QID;
+		else
+			rv = qdma_dev_increment_active_queue(dma_device_index,
+					q_notify->hdr.src_func_id,
+					q_notify->q_type);
+	}
+	break;
+	case MBOX_OP_QNOTIFY_DEL:
+	{
+		struct mbox_msg_q_nitfy *q_notify = &rcv->q_notify;
+		enum qdma_dev_q_range q_range;
+
+		q_range = qdma_dev_is_queue_in_range(dma_device_index,
+				q_notify->hdr.src_func_id,
+				q_notify->qid_hw);
+		if (q_range != QDMA_DEV_Q_IN_RANGE)
+			rv = -QDMA_ERR_MBOX_INV_QID;
+		else
+			rv = qdma_dev_decrement_active_queue(dma_device_index,
+					q_notify->hdr.src_func_id,
+					q_notify->q_type);
+	}
+	break;
+	case MBOX_OP_GET_QACTIVE_CNT:
+	{
+		rv = qdma_get_device_active_queue_count(dma_device_index,
+				rcv->hdr.src_func_id,
+				QDMA_DEV_Q_TYPE_H2C);
+
+		resp->qcnt.h2c_queues = rv;
+
+		rv = qdma_get_device_active_queue_count(dma_device_index,
+				rcv->hdr.src_func_id,
+				QDMA_DEV_Q_TYPE_C2H);
+
+		resp->qcnt.c2h_queues = rv;
+
+		rv = qdma_get_device_active_queue_count(dma_device_index,
+				rcv->hdr.src_func_id,
+				QDMA_DEV_Q_TYPE_CMPT);
+
+		resp->qcnt.cmpt_queues = rv;
+	}
+	break;
+	case MBOX_OP_INTR_CTXT_WRT:
+	{
+		struct mbox_msg_intr_ctxt *ictxt = &rcv->intr_ctxt.ctxt;
+		struct qdma_indirect_intr_ctxt *ctxt;
+		uint8_t i;
+		uint32_t ring_index;
+
+		for (i = 0; i < ictxt->num_rings; i++) {
+			ring_index = ictxt->ring_index_list[i];
+
+			ctxt = &ictxt->ictxt[i];
+			rv = hw->qdma_indirect_intr_ctx_conf(dev_hndl,
+						      ring_index,
+						      NULL,
+						      QDMA_HW_ACCESS_CLEAR);
+			if (rv < 0)
+				resp->hdr.status = rv;
+			rv = hw->qdma_indirect_intr_ctx_conf(dev_hndl,
+						      ring_index, ctxt,
+						      QDMA_HW_ACCESS_WRITE);
+			if (rv < 0)
+				resp->hdr.status = rv;
+		}
+	}
+	break;
+	case MBOX_OP_INTR_CTXT_RD:
+	{
+		struct mbox_msg_intr_ctxt *rcv_ictxt = &rcv->intr_ctxt.ctxt;
+		struct mbox_msg_intr_ctxt *rsp_ictxt = &resp->intr_ctxt.ctxt;
+		uint8_t i;
+		uint32_t ring_index;
+
+		for (i = 0; i < rcv_ictxt->num_rings; i++) {
+			ring_index = rcv_ictxt->ring_index_list[i];
+
+			rv = hw->qdma_indirect_intr_ctx_conf(dev_hndl,
+						      ring_index,
+						      &rsp_ictxt->ictxt[i],
+						      QDMA_HW_ACCESS_READ);
+			if (rv < 0)
+				resp->hdr.status = rv;
+		}
+	}
+	break;
+	case MBOX_OP_INTR_CTXT_CLR:
+	{
+		int i;
+		struct mbox_msg_intr_ctxt *ictxt = &rcv->intr_ctxt.ctxt;
+
+		for (i = 0; i < ictxt->num_rings; i++) {
+			rv = hw->qdma_indirect_intr_ctx_conf(dev_hndl,
+					ictxt->ring_index_list[i],
+					NULL, QDMA_HW_ACCESS_CLEAR);
+			if (rv < 0)
+				resp->hdr.status = rv;
+		}
+	}
+	break;
+	case MBOX_OP_INTR_CTXT_INV:
+	{
+		struct mbox_msg_intr_ctxt *ictxt = &rcv->intr_ctxt.ctxt;
+		int i;
+
+		for (i = 0; i < ictxt->num_rings; i++) {
+			rv = hw->qdma_indirect_intr_ctx_conf(dev_hndl,
+					ictxt->ring_index_list[i],
+					NULL, QDMA_HW_ACCESS_INVALIDATE);
+			if (rv < 0)
+				resp->hdr.status = rv;
+		}
+	}
+	break;
+	case MBOX_OP_QCTXT_INV:
+	{
+		struct mbox_msg_qctxt *qctxt = &rcv->qctxt;
+
+		rv = mbox_invalidate_queue_contexts(dev_hndl,
+							dma_device_index,
+							hdr->src_func_id,
+							qctxt->qid_hw,
+							qctxt->st,
+							qctxt->c2h,
+							qctxt->cmpt_ctxt_type);
+	}
+	break;
+	case MBOX_OP_QCTXT_CLR:
+	{
+		struct mbox_msg_qctxt *qctxt = &rcv->qctxt;
+
+		rv = mbox_clear_queue_contexts(dev_hndl,
+						dma_device_index,
+						hdr->src_func_id,
+						qctxt->qid_hw,
+						qctxt->st,
+						qctxt->c2h,
+						qctxt->cmpt_ctxt_type);
+	}
+	break;
+	case MBOX_OP_QCTXT_RD:
+	{
+		struct mbox_msg_qctxt *qctxt = &rcv->qctxt;
+
+		rv = mbox_read_queue_contexts(dev_hndl, qctxt->qid_hw,
+						qctxt->st,
+						qctxt->c2h,
+						qctxt->cmpt_ctxt_type,
+						&resp->qctxt.descq_ctxt);
+	}
+	break;
+	case MBOX_OP_QCTXT_WRT:
+	{
+		struct mbox_msg_qctxt *qctxt = &rcv->qctxt;
+
+		qctxt->descq_conf.func_id = hdr->src_func_id;
+		rv = mbox_write_queue_contexts(dev_hndl,
+				dma_device_index, qctxt);
+	}
+	break;
+	case MBOX_OP_RESET_PREPARE_RESP:
+		return QDMA_MBOX_VF_RESET;
+	case MBOX_OP_RESET_DONE_RESP:
+		return QDMA_MBOX_PF_RESET_DONE;
+	case MBOX_OP_REG_LIST_READ:
+	{
+		struct mbox_read_reg_list *rcv_read_reg_list =
+						&rcv->reg_read_list;
+		struct mbox_read_reg_list *rsp_read_reg_list =
+						&resp->reg_read_list;
+
+		rv = hw->qdma_read_reg_list((void *)dev_hndl, 1,
+				 rcv_read_reg_list->group_num,
+				&rsp_read_reg_list->num_regs,
+				rsp_read_reg_list->reg_list);
+
+		if (rv < 0 || rsp_read_reg_list->num_regs == 0) {
+			rv = -QDMA_ERR_MBOX_REG_READ_FAILED;
+			goto exit_func;
+		}
+	}
+	break;
+	case MBOX_OP_PF_BYE_RESP:
+		return QDMA_MBOX_PF_BYE;
+	default:
+		qdma_log_error("%s: op=%d invalid, err:%d\n",
+						__func__,
+						rcv->hdr.op,
+						-QDMA_ERR_MBOX_INV_MSG);
+		return -QDMA_ERR_MBOX_INV_MSG;
+	}
+
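+	/*
+	 * Turn the request around: add the response opcode offset, swap
+	 * the function ids and report the execution status.
+	 */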
+exit_func:
+	resp->hdr.op = rcv->hdr.op + MBOX_MSG_OP_RSP_OFFSET;
+	resp->hdr.dst_func_id = rcv->hdr.src_func_id;
+	resp->hdr.src_func_id = func_id;
+
+	resp->hdr.status = rv;
+
+	return ret;
+}
+
+int qmda_mbox_compose_vf_online(uint16_t func_id,
+				uint16_t qmax, int *qbase, uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_HELLO;
+	msg->hdr.src_func_id = func_id;
+	msg->fmap.qbase = (uint32_t)*qbase;
+	msg->fmap.qmax = qmax;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_offline(uint16_t func_id,
+				 uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_VF_BYE;
+	msg->hdr.src_func_id = func_id;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_reset_offline(uint16_t func_id,
+				 uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_PF_RESET_VF_BYE;
+	msg->hdr.src_func_id = func_id;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_qreq(uint16_t func_id,
+			      uint16_t qmax, int qbase, uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QREQ;
+	msg->hdr.src_func_id = func_id;
+	msg->fmap.qbase = qbase;
+	msg->fmap.qmax = qmax;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_notify_qadd(uint16_t func_id,
+				     uint16_t qid_hw,
+				     enum qdma_dev_q_type q_type,
+				     uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QNOTIFY_ADD;
+	msg->hdr.src_func_id = func_id;
+	msg->q_notify.qid_hw = qid_hw;
+	msg->q_notify.q_type = q_type;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_get_device_active_qcnt(uint16_t func_id,
+		uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_GET_QACTIVE_CNT;
+	msg->hdr.src_func_id = func_id;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_notify_qdel(uint16_t func_id,
+				     uint16_t qid_hw,
+				     enum qdma_dev_q_type q_type,
+				    uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QNOTIFY_DEL;
+	msg->hdr.src_func_id = func_id;
+	msg->q_notify.qid_hw = qid_hw;
+	msg->q_notify.q_type = q_type;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_fmap_prog(uint16_t func_id,
+				   uint16_t qmax, int qbase,
+				   uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+					__func__, raw_data,
+					-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_FMAP;
+	msg->hdr.src_func_id = func_id;
+	msg->fmap.qbase = (uint32_t)qbase;
+	msg->fmap.qmax = qmax;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_qctxt_write(uint16_t func_id,
+			uint16_t qid_hw, uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			struct mbox_descq_conf *descq_conf,
+			uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QCTXT_WRT;
+	msg->hdr.src_func_id = func_id;
+	msg->qctxt.qid_hw = qid_hw;
+	msg->qctxt.c2h = c2h;
+	msg->qctxt.st = st;
+	msg->qctxt.cmpt_ctxt_type = cmpt_ctxt_type;
+
+	qdma_mbox_memcpy(&msg->qctxt.descq_conf, descq_conf,
+	       sizeof(struct mbox_descq_conf));
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_qctxt_read(uint16_t func_id,
+				uint16_t qid_hw, uint8_t st, uint8_t c2h,
+				enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+				uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QCTXT_RD;
+	msg->hdr.src_func_id = func_id;
+	msg->qctxt.qid_hw = qid_hw;
+	msg->qctxt.c2h = c2h;
+	msg->qctxt.st = st;
+	msg->qctxt.cmpt_ctxt_type = cmpt_ctxt_type;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_qctxt_invalidate(uint16_t func_id,
+				uint16_t qid_hw, uint8_t st, uint8_t c2h,
+				enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+				uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QCTXT_INV;
+	msg->hdr.src_func_id = func_id;
+	msg->qctxt.qid_hw = qid_hw;
+	msg->qctxt.c2h = c2h;
+	msg->qctxt.st = st;
+	msg->qctxt.cmpt_ctxt_type = cmpt_ctxt_type;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_qctxt_clear(uint16_t func_id,
+				uint16_t qid_hw, uint8_t st, uint8_t c2h,
+				enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+				uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_QCTXT_CLR;
+	msg->hdr.src_func_id = func_id;
+	msg->qctxt.qid_hw = qid_hw;
+	msg->qctxt.c2h = c2h;
+	msg->qctxt.st = st;
+	msg->qctxt.cmpt_ctxt_type = cmpt_ctxt_type;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_csr_read(uint16_t func_id,
+			       uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_CSR;
+	msg->hdr.src_func_id = func_id;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_reg_read(uint16_t func_id,
+					uint16_t group_num,
+					uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_REG_LIST_READ;
+	msg->hdr.src_func_id = func_id;
+	msg->reg_read_list.group_num = group_num;
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_intr_ctxt_write(uint16_t func_id,
+					 struct mbox_msg_intr_ctxt *intr_ctxt,
+					 uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_INTR_CTXT_WRT;
+	msg->hdr.src_func_id = func_id;
+	qdma_mbox_memcpy(&msg->intr_ctxt.ctxt, intr_ctxt,
+	       sizeof(struct mbox_msg_intr_ctxt));
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_intr_ctxt_read(uint16_t func_id,
+					struct mbox_msg_intr_ctxt *intr_ctxt,
+					uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_INTR_CTXT_RD;
+	msg->hdr.src_func_id = func_id;
+	qdma_mbox_memcpy(&msg->intr_ctxt.ctxt, intr_ctxt,
+	       sizeof(struct mbox_msg_intr_ctxt));
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_intr_ctxt_clear(uint16_t func_id,
+					 struct mbox_msg_intr_ctxt *intr_ctxt,
+					 uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_INTR_CTXT_CLR;
+	msg->hdr.src_func_id = func_id;
+	qdma_mbox_memcpy(&msg->intr_ctxt.ctxt, intr_ctxt,
+	       sizeof(struct mbox_msg_intr_ctxt));
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_compose_vf_intr_ctxt_invalidate(uint16_t func_id,
+				      struct mbox_msg_intr_ctxt *intr_ctxt,
+				      uint32_t *raw_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data) {
+		qdma_log_error("%s: raw_data=%p, err:%d\n",
+						__func__, raw_data,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_INTR_CTXT_INV;
+	msg->hdr.src_func_id = func_id;
+	qdma_mbox_memcpy(&msg->intr_ctxt.ctxt, intr_ctxt,
+	       sizeof(struct mbox_msg_intr_ctxt));
+
+	return QDMA_SUCCESS;
+}
+
+uint8_t qdma_mbox_is_msg_response(uint32_t *send_data, uint32_t *rcv_data)
+{
+	union qdma_mbox_txrx *tx_msg = (union qdma_mbox_txrx *)send_data;
+	union qdma_mbox_txrx *rx_msg = (union qdma_mbox_txrx *)rcv_data;
+
+	return ((tx_msg->hdr.op + MBOX_MSG_OP_RSP_OFFSET) == rx_msg->hdr.op) ?
+			1 : 0;
+}
+
+int qdma_mbox_vf_response_status(uint32_t *rcv_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	return msg->hdr.status;
+}
+
+uint8_t qdma_mbox_vf_func_id_get(uint32_t *rcv_data, uint8_t is_vf)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+	uint16_t func_id;
+
+	if (is_vf)
+		func_id = msg->hdr.dst_func_id;
+	else
+		func_id = msg->hdr.src_func_id;
+
+	return func_id;
+}
+
+int qdma_mbox_vf_active_queues_get(uint32_t *rcv_data,
+		enum qdma_dev_q_type q_type)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+	int queues = 0;
+
+	if (q_type == QDMA_DEV_Q_TYPE_H2C)
+		queues = msg->qcnt.h2c_queues;
+
+	if (q_type == QDMA_DEV_Q_TYPE_C2H)
+		queues = msg->qcnt.c2h_queues;
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT)
+		queues = msg->qcnt.cmpt_queues;
+
+	return queues;
+}
+
+uint8_t qdma_mbox_vf_parent_func_id_get(uint32_t *rcv_data)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	return msg->hdr.src_func_id;
+}
+
+int qdma_mbox_vf_dev_info_get(uint32_t *rcv_data,
+	struct qdma_dev_attributes *dev_cap, uint32_t *dma_device_index)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	*dev_cap = msg->hello.dev_cap;
+	*dma_device_index = msg->hello.dma_device_index;
+
+	return msg->hdr.status;
+}
+
+int qdma_mbox_vf_qinfo_get(uint32_t *rcv_data, int *qbase, uint16_t *qmax)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	*qbase = msg->fmap.qbase;
+	*qmax = msg->fmap.qmax;
+
+	return msg->hdr.status;
+}
+
+int qdma_mbox_vf_csr_get(uint32_t *rcv_data, struct qdma_csr_info *csr)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	qdma_mbox_memcpy(csr, &msg->csr.csr_info,
+			 sizeof(struct qdma_csr_info));
+
+	return msg->hdr.status;
+}
+
+int qdma_mbox_vf_reg_list_get(uint32_t *rcv_data,
+		uint16_t *num_regs, struct qdma_reg_data *reg_list)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	*num_regs = msg->reg_read_list.num_regs;
+	qdma_mbox_memcpy(reg_list, &msg->reg_read_list.reg_list,
+			(*num_regs * sizeof(struct qdma_reg_data)));
+
+	return msg->hdr.status;
+}
+
+int qdma_mbox_vf_context_get(uint32_t *rcv_data,
+			     struct qdma_descq_context *ctxt)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	qdma_mbox_memcpy(ctxt, &msg->qctxt.descq_ctxt,
+			 sizeof(struct qdma_descq_context));
+
+	return msg->hdr.status;
+}
+
+int qdma_mbox_vf_intr_context_get(uint32_t *rcv_data,
+				  struct mbox_msg_intr_ctxt *ictxt)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)rcv_data;
+
+	qdma_mbox_memcpy(ictxt, &msg->intr_ctxt.ctxt,
+			 sizeof(struct mbox_msg_intr_ctxt));
+
+	return msg->hdr.status;
+}
+
+void qdma_mbox_pf_hw_clear_ack(void *dev_hndl)
+{
+	uint32_t v;
+	uint32_t reg;
+	int i;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, 0);
+
+	reg = mbox_base + MBOX_PF_ACK_BASE;
+
+	v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+	if ((v & F_MBOX_FN_STATUS_ACK) == 0)
+		return;
+
+	for (i = 0; i < MBOX_PF_ACK_COUNT; i++, reg += MBOX_PF_ACK_STEP) {
+		v = qdma_reg_read(dev_hndl, reg);
+
+		if (!v)
+			continue;
+
+		/* clear the ack status */
+		qdma_reg_write(dev_hndl, reg, v);
+	}
+}
+
+int qdma_mbox_send(void *dev_hndl, uint8_t is_vf, uint32_t *raw_data)
+{
+	int i;
+	uint32_t reg = MBOX_OUT_MSG_BASE;
+	uint32_t v;
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+	uint16_t dst_func_id = msg->hdr.dst_func_id;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
+	v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+	if (v & F_MBOX_FN_STATUS_OUT_MSG)
+		return -QDMA_ERR_MBOX_SEND_BUSY;
+
+	if (!is_vf)
+		qdma_reg_write(dev_hndl, mbox_base + MBOX_FN_TARGET,
+				V_MBOX_FN_TARGET_ID(dst_func_id));
+
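+	/* copy the message into the outgoing message registers,
+	 * one 32-bit word at a time
+	 */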
+	for (i = 0; i < MBOX_MSG_REG_MAX; i++, reg += MBOX_MSG_STEP)
+		qdma_reg_write(dev_hndl, mbox_base + reg, raw_data[i]);
+
+	/* clear the outgoing ack */
+	if (!is_vf)
+		mbox_pf_hw_clear_func_ack(dev_hndl, dst_func_id);
+
+	qdma_log_debug("%s %s tx from_id=%d, to_id=%d, opcode=0x%x\n", __func__,
+			is_vf ? "VF" : "PF", msg->hdr.src_func_id,
+			msg->hdr.dst_func_id, msg->hdr.op);
+	qdma_reg_write(dev_hndl, mbox_base + MBOX_FN_CMD, F_MBOX_FN_CMD_SND);
+
+	return QDMA_SUCCESS;
+}
+
+int qdma_mbox_rcv(void *dev_hndl, uint8_t is_vf, uint32_t *raw_data)
+{
+	uint32_t reg = MBOX_IN_MSG_BASE;
+	uint32_t v = 0;
+	int all_zero_msg = 1;
+	int i;
+	uint32_t from_id = 0;
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
+	v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+
+	if (!(v & M_MBOX_FN_STATUS_IN_MSG))
+		return -QDMA_ERR_MBOX_NO_MSG_IN;
+
+	if (!is_vf) {
+		from_id = G_MBOX_FN_STATUS_SRC(v);
+		qdma_reg_write(dev_hndl, mbox_base + MBOX_FN_TARGET, from_id);
+	}
+
+	for (i = 0; i < MBOX_MSG_REG_MAX; i++, reg += MBOX_MSG_STEP) {
+		raw_data[i] = qdma_reg_read(dev_hndl, mbox_base + reg);
+		/* if rcv'ed message is all zero, stop and disable the mbox,
+		 * the h/w mbox is not working properly
+		 */
+		if (raw_data[i])
+			all_zero_msg = 0;
+	}
+
+	/* ack'ed the sender */
+	qdma_reg_write(dev_hndl, mbox_base + MBOX_FN_CMD, F_MBOX_FN_CMD_RCV);
+	if (all_zero_msg) {
+		qdma_log_error("%s: Message recv'd is all zeros. failure:%d\n",
+					__func__,
+					-QDMA_ERR_MBOX_ALL_ZERO_MSG);
+		return -QDMA_ERR_MBOX_ALL_ZERO_MSG;
+	}
+
+	qdma_log_debug("%s %s fid=%d, opcode=0x%x\n", __func__,
+				   is_vf ? "VF" : "PF", msg->hdr.dst_func_id,
+				   msg->hdr.op);
+	if (!is_vf && from_id != msg->hdr.src_func_id)
+		msg->hdr.src_func_id = from_id;
+
+	return QDMA_SUCCESS;
+}
+
+void qdma_mbox_hw_init(void *dev_hndl, uint8_t is_vf)
+{
+	uint32_t v;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
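+	/* VF: drain any stale inbound message; PF: clear pending acks */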
+	if (is_vf) {
+		v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+		if (v & M_MBOX_FN_STATUS_IN_MSG)
+			qdma_reg_write(dev_hndl, mbox_base + MBOX_FN_CMD,
+				    F_MBOX_FN_CMD_RCV);
+	} else {
+		qdma_mbox_pf_hw_clear_ack(dev_hndl);
+	}
+}
+
+void qdma_mbox_enable_interrupts(void *dev_hndl, uint8_t is_vf)
+{
+	int vector = 0x0;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
+	qdma_reg_write(dev_hndl, mbox_base + MBOX_ISR_VEC, vector);
+	qdma_reg_write(dev_hndl, mbox_base + MBOX_ISR_EN, 0x1);
+}
+
+void qdma_mbox_disable_interrupts(void *dev_hndl, uint8_t is_vf)
+{
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
+	qdma_reg_write(dev_hndl, mbox_base + MBOX_ISR_EN, 0x0);
+}
+
+int qdma_mbox_compose_vf_reset_message(uint32_t *raw_data, uint8_t src_funcid,
+				uint8_t dest_funcid)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data)
+		return -QDMA_ERR_INV_PARAM;
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_RESET_PREPARE;
+	msg->hdr.src_func_id = src_funcid;
+	msg->hdr.dst_func_id = dest_funcid;
+	return 0;
+}
+
+int qdma_mbox_compose_pf_reset_done_message(uint32_t *raw_data,
+					uint8_t src_funcid, uint8_t dest_funcid)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data)
+		return -QDMA_ERR_INV_PARAM;
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_RESET_DONE;
+	msg->hdr.src_func_id = src_funcid;
+	msg->hdr.dst_func_id = dest_funcid;
+	return 0;
+}
+
+int qdma_mbox_compose_pf_offline(uint32_t *raw_data, uint8_t src_funcid,
+				uint8_t dest_funcid)
+{
+	union qdma_mbox_txrx *msg = (union qdma_mbox_txrx *)raw_data;
+
+	if (!raw_data)
+		return -QDMA_ERR_INV_PARAM;
+
+	qdma_mbox_memset(raw_data, 0, sizeof(union qdma_mbox_txrx));
+	msg->hdr.op = MBOX_OP_PF_BYE;
+	msg->hdr.src_func_id = src_funcid;
+	msg->hdr.dst_func_id = dest_funcid;
+	return 0;
+}
+
+int qdma_mbox_vf_rcv_msg_handler(uint32_t *rcv_msg, uint32_t *resp_msg)
+{
+	union qdma_mbox_txrx *rcv =  (union qdma_mbox_txrx *)rcv_msg;
+	union qdma_mbox_txrx *resp =  (union qdma_mbox_txrx *)resp_msg;
+	int rv = 0;
+
+	switch (rcv->hdr.op) {
+	case MBOX_OP_RESET_PREPARE:
+		resp->hdr.op = rcv->hdr.op + MBOX_MSG_OP_RSP_OFFSET;
+		resp->hdr.dst_func_id = rcv->hdr.src_func_id;
+		resp->hdr.src_func_id = rcv->hdr.dst_func_id;
+		rv = QDMA_MBOX_VF_RESET;
+		break;
+	case MBOX_OP_RESET_DONE:
+		resp->hdr.op = rcv->hdr.op + MBOX_MSG_OP_RSP_OFFSET;
+		resp->hdr.dst_func_id = rcv->hdr.src_func_id;
+		resp->hdr.src_func_id = rcv->hdr.dst_func_id;
+		rv = QDMA_MBOX_PF_RESET_DONE;
+		break;
+	case MBOX_OP_PF_BYE:
+		resp->hdr.op = rcv->hdr.op + MBOX_MSG_OP_RSP_OFFSET;
+		resp->hdr.dst_func_id = rcv->hdr.src_func_id;
+		resp->hdr.src_func_id = rcv->hdr.dst_func_id;
+		rv = QDMA_MBOX_PF_BYE;
+		break;
+	default:
+		break;
+	}
+	return rv;
+}
+
+uint8_t qdma_mbox_out_status(void *dev_hndl, uint8_t is_vf)
+{
+	uint32_t v;
+	uint32_t mbox_base = get_mbox_offset(dev_hndl, is_vf);
+
+	v = qdma_reg_read(dev_hndl, mbox_base + MBOX_FN_STATUS);
+	if (v & F_MBOX_FN_STATUS_OUT_MSG)
+		return 1;
+	else
+		return 0;
+}
diff --git a/drivers/net/qdma/qdma_access/qdma_mbox_protocol.h b/drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
new file mode 100644
index 0000000000..335e728561
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
@@ -0,0 +1,681 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_MBOX_PROTOCOL_H_
+#define __QDMA_MBOX_PROTOCOL_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA message box handling interface definitions
+ *
+ * Header file *qdma_mbox_protocol.h* defines data structures and function
+ * signatures exported for QDMA Mbox message handling.
+ */
+
+#include "qdma_platform.h"
+#include "qdma_resource_mgmt.h"
+
+
+#define QDMA_MBOX_VF_ONLINE			(1)
+#define QDMA_MBOX_VF_OFFLINE		(-1)
+#define QDMA_MBOX_VF_RESET			(2)
+#define QDMA_MBOX_PF_RESET_DONE		(3)
+#define QDMA_MBOX_PF_BYE			(4)
+#define QDMA_MBOX_VF_RESET_BYE		(5)
+
+/** mailbox message length in units of 32-bit registers */
+#define MBOX_MSG_REG_MAX		32
+
+#define mbox_invalidate_msg(m)	{ (m)->hdr.op = MBOX_OP_NOOP; }
+
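+/*
+ * Illustrative VF-side flow (a sketch only; dev_hndl and vf_func_id are
+ * whatever the caller already holds): compose a request into a raw
+ * 32-word buffer, send it, poll for the response and check its status.
+ *
+ *	uint32_t raw[MBOX_MSG_REG_MAX];
+ *
+ *	qdma_mbox_compose_csr_read(vf_func_id, raw);
+ *	qdma_mbox_send(dev_hndl, 1, raw);
+ *	while (qdma_mbox_rcv(dev_hndl, 1, raw) == -QDMA_ERR_MBOX_NO_MSG_IN)
+ *		;
+ *	rv = qdma_mbox_vf_response_status(raw);
+ */
+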
+/**
+ * struct mbox_descq_conf - collective bit-fields of all contexts
+ */
+struct mbox_descq_conf {
+	/** @ring_bs_addr: ring base address */
+	uint64_t ring_bs_addr;
+	/** @cmpt_ring_bs_addr: completion ring base address */
+	uint64_t cmpt_ring_bs_addr;
+	/** @forced_en: enable fetch credit */
+	uint32_t forced_en:1;
+	/** @en_bypass: bypass enable */
+	uint32_t en_bypass:1;
+	/** @irq_arm: arm irq */
+	uint32_t irq_arm:1;
+	/** @wbi_intvl_en: writeback interval enable */
+	uint32_t wbi_intvl_en:1;
+	/** @wbi_chk: writeback pending check */
+	uint32_t wbi_chk:1;
+	/** @at: address translation */
+	uint32_t at:1;
+	/** @wbk_en: writeback enable */
+	uint32_t wbk_en:1;
+	/** @irq_en: irq enable */
+	uint32_t irq_en:1;
+	/** @pfch_en: prefetch enable */
+	uint32_t pfch_en:1;
+	/** @en_bypass_prefetch: prefetch bypass enable */
+	uint32_t en_bypass_prefetch:1;
+	/** @dis_overflow_check: disable overflow check */
+	uint32_t dis_overflow_check:1;
+	/** @cmpt_int_en: completion interrupt enable */
+	uint32_t cmpt_int_en:1;
+	/** @cmpt_at: completion address translation */
+	uint32_t cmpt_at:1;
+	/** @cmpt_color: completion ring initial color bit */
+	uint32_t cmpt_color:1;
+	/** @cmpt_full_upd: completion full update */
+	uint32_t cmpt_full_upd:1;
+	/** @cmpl_stat_en: completion status enable */
+	uint32_t cmpl_stat_en:1;
+	/** @desc_sz: descriptor size */
+	uint32_t desc_sz:2;
+	/** @cmpt_desc_sz: completion ring descriptor size */
+	uint32_t cmpt_desc_sz:2;
+	/** @triggermode: trigger mode */
+	uint32_t triggermode:3;
+	/** @rsvd: reserved */
+	uint32_t rsvd:9;
+	/** @func_id: function ID */
+	uint32_t func_id:16;
+	/** @cnt_thres: counter threshold */
+	uint32_t cnt_thres:8;
+	/** @timer_thres: timer threshold */
+	uint32_t timer_thres:8;
+	/** @intr_id: interrupt id */
+	uint16_t intr_id:11;
+	/** @intr_aggr: interrupt aggregation */
+	uint16_t intr_aggr:1;
+	/** @filler: filler bits */
+	uint16_t filler:4;
+	/** @ringsz: ring size */
+	uint16_t ringsz;
+	/** @bufsz: c2h buffer size */
+	uint16_t bufsz;
+	/** @cmpt_ringsz: completion ring size */
+	uint16_t cmpt_ringsz;
+};
+
+/**
+ * @enum - mbox_cmpt_ctxt_type
+ * @brief  specifies whether cmpt is enabled with MM/ST
+ */
+enum mbox_cmpt_ctxt_type {
+	/** @QDMA_MBOX_CMPT_CTXT_ONLY: only cmpt context programming required */
+	QDMA_MBOX_CMPT_CTXT_ONLY,
+	/** @QDMA_MBOX_CMPT_WITH_MM: completion context with MM */
+	QDMA_MBOX_CMPT_WITH_MM,
+	/** @QDMA_MBOX_CMPT_WITH_ST: completion context with ST */
+	QDMA_MBOX_CMPT_WITH_ST,
+	/** @QDMA_MBOX_CMPT_CTXT_NONE: No completion context */
+	QDMA_MBOX_CMPT_CTXT_NONE
+};
+
+/**
+ * @struct - mbox_msg_intr_ctxt
+ * @brief	interrupt context mailbox message
+ */
+struct mbox_msg_intr_ctxt {
+	/** @num_rings: number of interrupt context rings assigned
+	 * to the virtual function
+	 */
+	uint8_t num_rings;	/* 1 ~ 8 */
+	/** @ring_index_list: ring index associated for each vector */
+	uint32_t ring_index_list[QDMA_NUM_DATA_VEC_FOR_INTR_CXT];
+	/** @ictxt: interrupt context data for all rings */
+	struct qdma_indirect_intr_ctxt ictxt[QDMA_NUM_DATA_VEC_FOR_INTR_CXT];
+};
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_hw_init(): Initialize the mbox HW
+ *
+ * @dev_hndl:  device handle
+ * @is_vf:  is VF mbox
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_mbox_hw_init(void *dev_hndl, uint8_t is_vf);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_pf_rcv_msg_handler(): handles the raw message received in pf
+ *
+ * @dev_hndl:  device handle
+ * @dma_device_index:  pci bus number
+ * @func_id:   own function id
+ * @rcv_msg:   received raw message
+ * @resp_msg:  raw response message
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_pf_rcv_msg_handler(void *dev_hndl, uint8_t dma_device_index,
+				 uint16_t func_id, uint32_t *rcv_msg,
+				 uint32_t *resp_msg);
+
+/*****************************************************************************/
+/**
+ * qmda_mbox_compose_vf_online(): compose VF online message
+ *
+ * @func_id:   own (source) function id
+ * @qmax: number of queues being requested
+ * @qbase: q base at which queues are allocated
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qmda_mbox_compose_vf_online(uint16_t func_id,
+				uint16_t qmax, int *qbase, uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_offline(): compose VF offline message
+ *
+ * @func_id:   own (source) function id
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_offline(uint16_t func_id,
+				 uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_reset_message(): compose VF reset message
+ *
+ * @raw_data:   output raw message to be sent
+ * @src_funcid: own function id
+ * @dest_funcid: destination function id
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_reset_message(uint32_t *raw_data, uint8_t src_funcid,
+				uint8_t dest_funcid);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_reset_offline(): compose VF BYE for PF initiated RESET
+ *
+ * @func_id: own function id
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_reset_offline(uint16_t func_id,
+				uint32_t *raw_data);
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_pf_reset_done_message(): compose PF reset done message
+ *
+ * @raw_data:   output raw message to be sent
+ * @src_funcid: own function id
+ * @dest_funcid: destination function id
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_pf_reset_done_message(uint32_t *raw_data,
+				uint8_t src_funcid, uint8_t dest_funcid);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_pf_offline(): compose PF offline message
+ *
+ * @raw_data:   output raw message to be sent
+ * @src_funcid: own function id
+ * @dest_funcid: destination function id
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_pf_offline(uint32_t *raw_data, uint8_t src_funcid,
+				uint8_t dest_funcid);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_qreq(): compose message to request queues
+ *
+ * @func_id:   destination function id
+ * @qmax: number of queues being requested
+ * @qbase: q base at which queues are allocated
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_qreq(uint16_t func_id,
+			      uint16_t qmax, int qbase, uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_notify_qadd(): compose message to notify queue add
+ *
+ * @func_id:	destination function id
+ * @qid_hw:	HW queue id being added
+ * @q_type:	direction of the queue
+ * @raw_data:	output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_notify_qadd(uint16_t func_id,
+				     uint16_t qid_hw,
+				     enum qdma_dev_q_type q_type,
+				     uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_notify_qdel(): compose message to notify queue delete
+ *
+ * @func_id:	destination function id
+ * @qid_hw:	HW queue id being deleted
+ * @q_type:	direction of the queue
+ * @raw_data:	output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_notify_qdel(uint16_t func_id,
+				     uint16_t qid_hw,
+				     enum qdma_dev_q_type q_type,
+				     uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_get_device_active_qcnt(): compose message to get the
+ * active queue count
+ *
+ * @func_id:	destination function id
+ * @raw_data:	output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_get_device_active_qcnt(uint16_t func_id,
+		uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_fmap_prog(): compose FMAP programming message
+ *
+ * @func_id:   destination function id
+ * @qmax: number of queues being requested
+ * @qbase: q base at which queues are allocated
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_fmap_prog(uint16_t func_id,
+				   uint16_t qmax, int qbase,
+				   uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_qctxt_write(): compose queue context programming
+ * message
+ *
+ * @func_id:   destination function id
+ * @qid_hw:   HW queue for which the context has to be written
+ * @st:   is st mode
+ * @c2h:   is c2h direction
+ * @cmpt_ctxt_type:   completion context type
+ * @descq_conf:   pointer to queue config data structure
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_qctxt_write(uint16_t func_id,
+			uint16_t qid_hw, uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			struct mbox_descq_conf *descq_conf,
+			uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_qctxt_read(): compose message to read context data of a
+ * queue
+ *
+ * @func_id:   destination function id
+ * @qid_hw:   HW queue for which the context has to be read
+ * @st:   is st mode
+ * @c2h:   is c2h direction
+ * @cmpt_ctxt_type:   completion context type
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_qctxt_read(uint16_t func_id,
+			uint16_t qid_hw, uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_qctxt_invalidate(): compose queue context invalidate
+ * message
+ *
+ * @func_id:   destination function id
+ * @qid_hw:   HW queue for which the context has to be invalidated
+ * @st:   is st mode
+ * @c2h:   is c2h direction
+ * @cmpt_ctxt_type:   completion context type
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_qctxt_invalidate(uint16_t func_id,
+			uint16_t qid_hw, uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_qctxt_clear(): compose queue context clear message
+ *
+ * @func_id:   destination function id
+ * @qid_hw:   HW queue for which the context has to be cleared
+ * @st:   is st mode
+ * @c2h:   is c2h direction
+ * @cmpt_ctxt_type:   completion context type
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_qctxt_clear(uint16_t func_id,
+			uint16_t qid_hw, uint8_t st, uint8_t c2h,
+			enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
+			uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_csr_read(): compose message to read csr info
+ *
+ * @func_id:   destination function id
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_csr_read(uint16_t func_id,
+			       uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_reg_read(): compose message to read the register values
+ *
+ * @func_id:   destination function id
+ * @group_num:  group number for the registers to read
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_reg_read(uint16_t func_id, uint16_t group_num,
+			       uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_intr_ctxt_write(): compose interrupt ring context
+ * programming message
+ *
+ * @func_id:   destination function id
+ * @intr_ctxt:   pointer to interrupt context data structure
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_intr_ctxt_write(uint16_t func_id,
+					 struct mbox_msg_intr_ctxt *intr_ctxt,
+					 uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_intr_ctxt_read(): compose interrupt ring context
+ * read message
+ *
+ * @func_id:   destination function id
+ * @intr_ctxt:   pointer to interrupt context data structure
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_intr_ctxt_read(uint16_t func_id,
+					struct mbox_msg_intr_ctxt *intr_ctxt,
+					uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_intr_ctxt_clear(): compose interrupt ring context
+ * clear message
+ *
+ * @func_id:   destination function id
+ * @intr_ctxt:   pointer to interrupt context data structure
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_intr_ctxt_clear(uint16_t func_id,
+					 struct mbox_msg_intr_ctxt *intr_ctxt,
+					 uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_compose_vf_intr_ctxt_invalidate(): compose interrupt ring context
+ * invalidate message
+ *
+ * @func_id:   destination function id
+ * @intr_ctxt:   pointer to interrupt context data structure
+ * @raw_data: output raw message to be sent
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_compose_vf_intr_ctxt_invalidate(uint16_t func_id,
+				      struct mbox_msg_intr_ctxt *intr_ctxt,
+				      uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_is_msg_response(): check if the received msg opcode matches the
+ *                              response opcode for the sent message
+ *
+ * @send_data: mbox message sent
+ * @rcv_data: mbox message received
+ *
+ * Return:	1  : match and  0: does not match
+ *****************************************************************************/
+uint8_t qdma_mbox_is_msg_response(uint32_t *send_data, uint32_t *rcv_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_response_status(): return the response received for the sent msg
+ *
+ * @rcv_data: mbox message received
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_response_status(uint32_t *rcv_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_func_id_get(): return the vf function id
+ *
+ * @rcv_data: mbox message received
+ * @is_vf:  is VF mbox
+ *
+ * Return:	vf function id
+ *****************************************************************************/
+uint8_t qdma_mbox_vf_func_id_get(uint32_t *rcv_data, uint8_t is_vf);
+
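+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_active_queues_get(): get the active queue count from the
+ * received message
+ *
+ * @rcv_data: mbox message received
+ * @q_type: direction of the queue
+ *
+ * Return:	active queue count for the given queue type
+ *****************************************************************************/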
+int qdma_mbox_vf_active_queues_get(uint32_t *rcv_data,
+		enum qdma_dev_q_type q_type);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_parent_func_id_get(): return the vf parent function id
+ *
+ * @rcv_data: mbox message received
+ *
+ * Return:	vf function id
+ *****************************************************************************/
+uint8_t qdma_mbox_vf_parent_func_id_get(uint32_t *rcv_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_dev_info_get(): get dev info from received message
+ *
+ * @rcv_data: mbox message received
+ * @dev_cap: device capability information
+ * @dma_device_index: output DMA device identifier read from the message
+ *
+ * Return:	response status with dev info received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_dev_info_get(uint32_t *rcv_data,
+		struct qdma_dev_attributes *dev_cap,
+		uint32_t *dma_device_index);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_qinfo_get(): get qinfo from received message
+ *
+ * @rcv_data: mbox message received
+ * @qmax: number of queues
+ * @qbase: q base at which queues are allocated
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_qinfo_get(uint32_t *rcv_data, int *qbase, uint16_t *qmax);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_csr_get(): get csr info from received message
+ *
+ * @rcv_data: mbox message received
+ * @csr: pointer to the csr info
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_csr_get(uint32_t *rcv_data, struct qdma_csr_info *csr);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_reg_list_get(): get reg info from received message
+ *
+ * @rcv_data: mbox message received
+ * @num_regs: number of register read
+ * @reg_list: pointer to the register info
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_reg_list_get(uint32_t *rcv_data,
+		uint16_t *num_regs, struct qdma_reg_data *reg_list);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_context_get(): get queue context info from received message
+ *
+ * @rcv_data: mbox message received
+ * @ctxt: pointer to the queue context info
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_context_get(uint32_t *rcv_data,
+			     struct qdma_descq_context *ctxt);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_intr_context_get(): get intr context info from received
+ * message
+ *
+ * @rcv_data: mbox message received
+ * @ictxt: pointer to the intr context info
+ *
+ * Return:	response status received to the sent message
+ *****************************************************************************/
+int qdma_mbox_vf_intr_context_get(uint32_t *rcv_data,
+				  struct mbox_msg_intr_ctxt *ictxt);
+
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_pf_hw_clear_ack() - clear the HW ack
+ *
+ * @dev_hndl:   device handle
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_mbox_pf_hw_clear_ack(void *dev_hndl);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_send() - function to send raw data via qdma mailbox
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:	     Whether PF or VF
+ * @raw_data:   pointer to message being sent
+ *
+ * The function sends the raw_data to the outgoing mailbox memory and, if PF,
+ * then asserts the acknowledge status register bit.
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_mbox_send(void *dev_hndl, uint8_t is_vf, uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_rcv() - function to receive raw data via qdma mailbox
+ *
+ * @dev_hndl: device handle
+ * @is_vf: Whether PF or VF
+ * @raw_data:  pointer to the message being received
+ *
+ * The function receives the raw_data from the incoming mailbox memory and
+ * then acknowledges the sender by setting the msg_rcv field in the command
+ * register.
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_mbox_rcv(void *dev_hndl, uint8_t is_vf, uint32_t *raw_data);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_enable_interrupts() - Enable the QDMA mailbox interrupt
+ *
+ * @dev_hndl: pointer to xlnx_dma_dev
+ * @is_vf: Whether PF or VF
+ *
+ * @return	none
+ *****************************************************************************/
+void qdma_mbox_enable_interrupts(void *dev_hndl, uint8_t is_vf);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_disable_interrupts() - Disable the QDMA mailbox interrupt
+ *
+ * @dev_hndl: pointer to xlnx_dma_dev
+ * @is_vf: Whether PF or VF
+ *
+ * @return	none
+ *****************************************************************************/
+void qdma_mbox_disable_interrupts(void *dev_hndl, uint8_t is_vf);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_vf_rcv_msg_handler(): handles the raw message received in VF
+ *
+ * @rcv_msg:   received raw message
+ * @resp_msg:  raw response message
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_mbox_vf_rcv_msg_handler(uint32_t *rcv_msg, uint32_t *resp_msg);
+
+/*****************************************************************************/
+/**
+ * qdma_mbox_out_status(): check if the outgoing mailbox still holds a message
+ *
+ * @dev_hndl: pointer to xlnx_dma_dev
+ * @is_vf: Whether PF or VF
+ *
+ * Return:	0 if MBOX outbox is empty, 1 if MBOX is not empty
+ *****************************************************************************/
+uint8_t qdma_mbox_out_status(void *dev_hndl, uint8_t is_vf);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_MBOX_PROTOCOL_H_ */
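
For reviewers, a minimal sketch of the VF bring-up handshake these
declarations imply; the buffer sizing, the unbounded retry loop and the
helper name vf_mbox_online are illustrative assumptions, not part of this
patch (note also that the "qmda" spelling below matches the declaration
above and probably wants fixing tree-wide):

	static int vf_mbox_online(void *dev_hndl, uint16_t pf_func_id,
				  uint16_t qmax)
	{
		uint32_t snd[32] = {0}, rcv[32] = {0}; /* size illustrative */
		int qbase = -1;
		int rv;

		/* compose the online request and push it out via the mbox */
		rv = qmda_mbox_compose_vf_online(pf_func_id, qmax, &qbase, snd);
		if (rv < 0)
			return rv;
		rv = qdma_mbox_send(dev_hndl, 1 /* is_vf */, snd);
		if (rv < 0)
			return rv;

		/* poll for the PF response; real code would bound this loop */
		do {
			rv = qdma_mbox_rcv(dev_hndl, 1 /* is_vf */, rcv);
		} while (rv < 0);

		/* pair the opcode with what was sent, then read the status */
		if (!qdma_mbox_is_msg_response(snd, rcv))
			return -1;
		return qdma_mbox_vf_response_status(rcv);
	}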
diff --git a/drivers/net/qdma/qdma_access/qdma_platform.c b/drivers/net/qdma/qdma_access/qdma_platform.c
new file mode 100644
index 0000000000..8f9ca2aa78
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_platform.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_access_common.h"
+#include "qdma_platform.h"
+#include "qdma.h"
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+static rte_spinlock_t resource_lock = RTE_SPINLOCK_INITIALIZER;
+static rte_spinlock_t reg_access_lock = RTE_SPINLOCK_INITIALIZER;
+
+struct err_code_map error_code_map_list[] = {
+	{QDMA_SUCCESS,				0},
+	{QDMA_ERR_INV_PARAM,			EINVAL},
+	{QDMA_ERR_NO_MEM,			ENOMEM},
+	{QDMA_ERR_HWACC_BUSY_TIMEOUT,		EBUSY},
+	{QDMA_ERR_HWACC_INV_CONFIG_BAR,		EINVAL},
+	{QDMA_ERR_HWACC_NO_PEND_LEGCY_INTR,	EINVAL},
+	{QDMA_ERR_HWACC_BAR_NOT_FOUND,		EINVAL},
+	{QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED,	EINVAL},
+	{QDMA_ERR_RM_RES_EXISTS,		EPERM},
+	{QDMA_ERR_RM_RES_NOT_EXISTS,		EINVAL},
+	{QDMA_ERR_RM_DEV_EXISTS,		EPERM},
+	{QDMA_ERR_RM_DEV_NOT_EXISTS,		EINVAL},
+	{QDMA_ERR_RM_NO_QUEUES_LEFT,		EPERM},
+	{QDMA_ERR_RM_QMAX_CONF_REJECTED,	EPERM},
+	{QDMA_ERR_MBOX_FMAP_WR_FAILED,		EIO},
+	{QDMA_ERR_MBOX_NUM_QUEUES,		EINVAL},
+	{QDMA_ERR_MBOX_INV_QID,			EINVAL},
+	{QDMA_ERR_MBOX_INV_RINGSZ,		EINVAL},
+	{QDMA_ERR_MBOX_INV_BUFSZ,		EINVAL},
+	{QDMA_ERR_MBOX_INV_CNTR_TH,		EINVAL},
+	{QDMA_ERR_MBOX_INV_TMR_TH,		EINVAL},
+	{QDMA_ERR_MBOX_INV_MSG,			EINVAL},
+	{QDMA_ERR_MBOX_SEND_BUSY,		EBUSY},
+	{QDMA_ERR_MBOX_NO_MSG_IN,		EINVAL},
+	{QDMA_ERR_MBOX_ALL_ZERO_MSG,		EINVAL},
+};
+
+/*****************************************************************************/
+/**
+ * qdma_calloc(): allocate memory and initialize with 0
+ *
+ * @num_blocks:  number of blocks of contiguous memory of @size
+ * @size:    size of each chunk of memory
+ *
+ * Return: pointer to the memory block created on success and NULL on failure
+ *****************************************************************************/
+void *qdma_calloc(uint32_t num_blocks, uint32_t size)
+{
+	return rte_calloc(NULL, num_blocks, size, 0);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_memfree(): free the memory
+ *
+ * @memptr:  pointer to the memory block
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_memfree(void *memptr)
+{
+	rte_free(memptr);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_resource_lock_take() - take lock to access resource management APIs
+ *
+ * @return	None
+ *****************************************************************************/
+void qdma_resource_lock_take(void)
+{
+	rte_spinlock_lock(&resource_lock);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_resource_lock_give() - release lock after accessing
+ *                             resource management APIs
+ *
+ * @return	None
+ *****************************************************************************/
+void qdma_resource_lock_give(void)
+{
+	rte_spinlock_unlock(&resource_lock);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_reg_write() - Register write API.
+ *
+ * @dev_hndl:   device handle
+ * @reg_offst:  QDMA Config bar register offset to write
+ * @val:	value to be written
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_reg_write(void *dev_hndl, uint32_t reg_offst, uint32_t val)
+{
+	struct qdma_pci_dev *qdma_dev;
+	uint64_t bar_addr;
+
+	qdma_dev = ((struct rte_eth_dev *)dev_hndl)->data->dev_private;
+	bar_addr = (uint64_t)qdma_dev->bar_addr[qdma_dev->config_bar_idx];
+	*((volatile uint32_t *)(bar_addr + reg_offst)) = val;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_reg_read() - Register read API.
+ *
+ * @dev_hndl:   device handle
+ * @reg_offst:  QDMA Config bar register offset to be read
+ *
+ * Return: Value read
+ *****************************************************************************/
+uint32_t qdma_reg_read(void *dev_hndl, uint32_t reg_offst)
+{
+	struct qdma_pci_dev *qdma_dev;
+	uint64_t bar_addr;
+	uint32_t val;
+
+	qdma_dev = ((struct rte_eth_dev *)dev_hndl)->data->dev_private;
+	bar_addr = (uint64_t)qdma_dev->bar_addr[qdma_dev->config_bar_idx];
+	val = *((volatile uint32_t *)(bar_addr + reg_offst));
+
+	return val;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_reg_access_lock() - Lock function for Register access
+ *
+ * @dev_hndl:   device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_reg_access_lock(void *dev_hndl)
+{
+	(void)dev_hndl;
+	rte_spinlock_lock(&reg_access_lock);
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_reg_access_release() - Release function for Register access
+ *
+ * @dev_hndl:   device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_reg_access_release(void *dev_hndl)
+{
+	(void)dev_hndl;
+	rte_spinlock_unlock(&reg_access_lock);
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_udelay() - delay function to be used in the common library
+ *
+ * @delay_usec:   delay in microseconds
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_udelay(uint32_t delay_usec)
+{
+	rte_delay_us(delay_usec);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_hw_access() - function to get the qdma_hw_access
+ *
+ * @dev_hndl:   device handle
+ * @hw: output pointer to hold the qdma_hw_access structure
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_get_hw_access(void *dev_hndl, struct qdma_hw_access **hw)
+{
+	struct qdma_pci_dev *qdma_dev;
+
+	qdma_dev = ((struct rte_eth_dev *)dev_hndl)->data->dev_private;
+	(void)qdma_dev;
+	*hw = NULL;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_strncpy(): copy n size string from source to destination buffer
+ *
+ * @dest:  pointer to the destination buffer
+ * @src:   source string
+ * @n:     max number of bytes to copy
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_strncpy(char *dest, const char *src, size_t n)
+{
+	strncpy(dest, src, n);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_err_code() - function to get the qdma access mapped error code
+ *
+ * @acc_err_code: qdma access error code which is a negative input value
+ *
+ * Return:   returns the platform specific error code
+ *****************************************************************************/
+int qdma_get_err_code(int acc_err_code)
+{
+	/* Multiply acc_err_code by -1 to convert it to a positive number
+	 * and use it as an array index for error codes.
+	 */
+	acc_err_code *= -1;
+
+	return -(error_code_map_list[acc_err_code].err_code);
+}
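
A note for reviewers: qdma_get_err_code() assumes the enum values in
qdma_access_errors.h match the ordering of error_code_map_list, and it
does not bounds-check the negated index. A sketch of the intended mapping
(the concrete enum value is an assumption based on the table above):

	/* callers pass the negative access error code, e.g.: */
	int rv = qdma_get_err_code(-QDMA_ERR_NO_MEM);
	/* rv == -ENOMEM, ready to bubble up to DPDK callers */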
diff --git a/drivers/net/qdma/qdma_access/qdma_platform.h b/drivers/net/qdma/qdma_access/qdma_platform.h
new file mode 100644
index 0000000000..b9ee5e9c3b
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_platform.h
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_PLATFORM_H_
+#define __QDMA_PLATFORM_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA platform specific interface definitions
+ *
+ * Header file *qdma_platform.h* defines function signatures that are
+ * required to be implemented by platform specific drivers.
+ */
+
+#include "qdma_access_common.h"
+
+/*****************************************************************************/
+/**
+ * qdma_calloc(): allocate memory and initialize with 0
+ *
+ * @num_blocks:  number of blocks of contiguous memory of @size
+ * @size:    size of each chunk of memory
+ *
+ * Return: pointer to the memory block created on success and NULL on failure
+ *****************************************************************************/
+void *qdma_calloc(uint32_t num_blocks, uint32_t size);
+
+/*****************************************************************************/
+/**
+ * qdma_memfree(): free the memory
+ *
+ * @memptr:  pointer to the memory block
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_memfree(void *memptr);
+
+/*****************************************************************************/
+/**
+ * qdma_resource_lock_init() - Init lock to access resource management APIs
+ *
+ * @return	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_resource_lock_init(void);
+
+/*****************************************************************************/
+/**
+ * qdma_resource_lock_take() - take lock to access resource management APIs
+ *
+ * @return	None
+ *****************************************************************************/
+void qdma_resource_lock_take(void);
+
+/*****************************************************************************/
+/**
+ * qdma_resource_lock_give() - release lock after accessing resource management
+ * APIs
+ *
+ * @return	None
+ *****************************************************************************/
+void qdma_resource_lock_give(void);
+
+/*****************************************************************************/
+/**
+ * qdma_reg_write() - Register write API.
+ *
+ * @dev_hndl:   device handle
+ * @reg_offst:  QDMA Config bar register offset to write
+ * @val:	value to be written
+ *
+ * Return:	Nothing
+ *****************************************************************************/
+void qdma_reg_write(void *dev_hndl, uint32_t reg_offst, uint32_t val);
+
+/*****************************************************************************/
+/**
+ * qdma_reg_read() - Register read API.
+ *
+ * @dev_hndl:   device handle
+ * @reg_offst:  QDMA Config bar register offset to be read
+ *
+ * Return: Value read
+ *****************************************************************************/
+uint32_t qdma_reg_read(void *dev_hndl, uint32_t reg_offst);
+
+/*****************************************************************************/
+/**
+ * qdma_reg_access_lock() - Lock function for Register access
+ *
+ * @dev_hndl:   device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_reg_access_lock(void *dev_hndl);
+
+/*****************************************************************************/
+/**
+ * qdma_reg_access_release() - Release function for Register access
+ *
+ * @dev_hndl:   device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_reg_access_release(void *dev_hndl);
+
+/*****************************************************************************/
+/**
+ * qdma_udelay() - delay function to be used in the common library
+ *
+ * @delay_usec:   delay in microseconds
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_udelay(uint32_t delay_usec);
+
+/*****************************************************************************/
+/**
+ * qdma_get_hw_access() - function to get the qdma_hw_access
+ *
+ * @dev_hndl:   device handle
+ * @hw: output pointer to hold the qdma_hw_access structure
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_get_hw_access(void *dev_hndl, struct qdma_hw_access **hw);
+
+/*****************************************************************************/
+/**
+ * qdma_strncpy(): copy n size string from source to destination buffer
+ *
+ * @dest:  pointer to the destination buffer
+ * @src:   source string
+ * @n:     max number of bytes to copy
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_strncpy(char *dest, const char *src, size_t n);
+
+
+/*****************************************************************************/
+/**
+ * qdma_get_err_code() - function to get the qdma access mapped error code
+ *
+ * @acc_err_code: qdma access error code which is a negative input value
+ *
+ * Return:   returns the platform specific error code
+ *****************************************************************************/
+int qdma_get_err_code(int acc_err_code);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_PLATFORM_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_platform_env.h b/drivers/net/qdma/qdma_access/qdma_platform_env.h
new file mode 100644
index 0000000000..6d69d0f1cd
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_platform_env.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef QDMA_PLATFORM_ENV_H_
+#define QDMA_PLATFORM_ENV_H_
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <rte_log.h>
+
+#define QDMA_SNPRINTF_S(arg1, arg2, arg3, ...) \
+		snprintf(arg1, arg3, ##__VA_ARGS__)
+
+#ifdef RTE_LIBRTE_QDMA_DEBUG_DRIVER
+#define qdma_log_info(x_, ...) rte_log(RTE_LOG_INFO,\
+		RTE_LOGTYPE_USER1, x_, ##__VA_ARGS__)
+#define qdma_log_warning(x_, ...) rte_log(RTE_LOG_WARNING,\
+		RTE_LOGTYPE_USER1, x_, ##__VA_ARGS__)
+#define qdma_log_debug(x_, ...) rte_log(RTE_LOG_DEBUG,\
+		RTE_LOGTYPE_USER1, x_, ##__VA_ARGS__)
+#define qdma_log_error(x_, ...) rte_log(RTE_LOG_ERR,\
+		RTE_LOGTYPE_USER1, x_, ##__VA_ARGS__)
+#else
+#define qdma_log_info(x_, ...) do { } while (0)
+#define qdma_log_warning(x_, ...) do { } while (0)
+#define qdma_log_debug(x_, ...) do { } while (0)
+#define qdma_log_error(x_, ...) do { } while (0)
+#endif
+
+#endif /* QDMA_PLATFORM_ENV_H_ */
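
One usage note: all four qdma_log_* macros compile to empty statements
unless RTE_LIBRTE_QDMA_DEBUG_DRIVER is defined at build time, so their
arguments are not evaluated in release builds and must not hide side
effects. A typical (illustrative) call site:

	qdma_log_error("%s: mbox send failed, err:%d\n", __func__, rv);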
diff --git a/drivers/net/qdma/qdma_access/qdma_reg_dump.h b/drivers/net/qdma/qdma_access/qdma_reg_dump.h
new file mode 100644
index 0000000000..fd9e44ca6d
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_reg_dump.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_REG_DUMP_H__
+#define __QDMA_REG_DUMP_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform_env.h"
+#include "qdma_access_common.h"
+
+#define DEBUGFS_DEV_INFO_SZ		(300)
+
+#define QDMA_REG_NAME_LENGTH	64
+#define DEBUGFS_INTR_CNTX_SZ	(2048 * 2)
+#define DBGFS_ERR_BUFLEN		(64)
+#define DEBGFS_LINE_SZ			(81)
+#define DEBGFS_GEN_NAME_SZ		(40)
+#define REG_DUMP_SIZE_PER_LINE	(256)
+
+#define MAX_QDMA_CFG_REGS			(200)
+
+#define QDMA_MM_EN_SHIFT          0
+#define QDMA_CMPT_EN_SHIFT        1
+#define QDMA_ST_EN_SHIFT          2
+#define QDMA_MAILBOX_EN_SHIFT     3
+
+#define QDMA_MM_MODE              (1 << QDMA_MM_EN_SHIFT)
+#define QDMA_COMPLETION_MODE      (1 << QDMA_CMPT_EN_SHIFT)
+#define QDMA_ST_MODE              (1 << QDMA_ST_EN_SHIFT)
+#define QDMA_MAILBOX              (1 << QDMA_MAILBOX_EN_SHIFT)
+
+
+#define QDMA_MM_ST_MODE \
+	(QDMA_MM_MODE | QDMA_COMPLETION_MODE | QDMA_ST_MODE)
+
+static inline int get_capability_mask(uint8_t mm_en,
+				      uint8_t st_en,
+				      uint8_t mm_cmpt_en,
+				      uint8_t mailbox_en)
+{
+	int mask = 0;
+
+	mask = ((mm_en << QDMA_MM_EN_SHIFT) |
+		((mm_cmpt_en | st_en) << QDMA_CMPT_EN_SHIFT) |
+		(st_en << QDMA_ST_EN_SHIFT) |
+		(mailbox_en << QDMA_MAILBOX_EN_SHIFT));
+	return mask;
+}
+
+struct regfield_info {
+	const char *field_name;
+	uint32_t field_mask;
+};
+
+struct xreg_info {
+	const char *name;
+	uint32_t addr;
+	uint32_t repeat;
+	uint32_t step;
+	uint8_t shift;
+	uint8_t len;
+	uint8_t is_debug_reg;
+	uint8_t mode;
+	uint8_t read_type;
+	uint8_t num_bitfields;
+	struct regfield_info *bitfields;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
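
A small sketch of how get_capability_mask() output is meant to be
consumed when filtering registers by device mode; the dev_cap field
names and dump_register() helper are assumptions, while xreg_info::mode
comes from the structure above:

	int mask = get_capability_mask(dev_cap->mm_en, dev_cap->st_en,
				       dev_cap->mm_cmpt_en,
				       dev_cap->mailbox_en);

	/* a register tagged QDMA_MM_ST_MODE is dumped only when MM, ST
	 * and completion support are all present on this device
	 */
	if ((mask & reg->mode) == reg->mode)
		dump_register(reg);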
diff --git a/drivers/net/qdma/qdma_access/qdma_resource_mgmt.c b/drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
new file mode 100644
index 0000000000..971202207f
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
@@ -0,0 +1,787 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_resource_mgmt.h"
+#include "qdma_platform.h"
+#include "qdma_list.h"
+#include "qdma_access_errors.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "qdma_resource_mgmt.tmh"
+#endif
+
+struct qdma_resource_entry {
+	int qbase;
+	uint32_t total_q;
+	struct qdma_list_head node;
+};
+
+/** per function entry */
+struct qdma_dev_entry {
+	uint16_t func_id;
+	uint32_t active_h2c_qcnt;
+	uint32_t active_c2h_qcnt;
+	uint32_t active_cmpt_qcnt;
+	struct qdma_resource_entry entry;
+};
+
+/** for holding the qconf_entry structure */
+struct qdma_resource_master {
+	/** DMA device index this resource belongs to */
+	uint32_t dma_device_index;
+	/** starting pci bus number this resource belongs to */
+	uint32_t pci_bus_start;
+	/** ending pci bus number this resource belongs to */
+	uint32_t pci_bus_end;
+	/** total queues this resource manager handles */
+	uint32_t total_q;
+	/** queue base from which this resource manager allocates */
+	int qbase;
+	/** for attaching to master resource list */
+	struct qdma_list_head node;
+	/** for holding device entries */
+	struct qdma_list_head dev_list;
+	/** for holding free resource list */
+	struct qdma_list_head free_list;
+	/** active queue count per resource*/
+	uint32_t active_qcnt;
+};
+
+static struct qdma_list_head master_resource_list = {
+	.prev = &master_resource_list,
+	.next = &master_resource_list,
+	.priv = NULL,
+};
+
+static struct qdma_resource_master *qdma_find_master_resource_entry
+		(uint32_t bus_start, uint32_t bus_end)
+{
+	struct qdma_list_head *entry, *tmp;
+
+	qdma_resource_lock_take();
+	qdma_list_for_each_safe(entry, tmp, &master_resource_list) {
+		struct qdma_resource_master *q_resource =
+			(struct qdma_resource_master *)
+			QDMA_LIST_GET_DATA(entry);
+
+		if (q_resource->pci_bus_start == bus_start &&
+			q_resource->pci_bus_end == bus_end) {
+			qdma_resource_lock_give();
+			return q_resource;
+		}
+	}
+	qdma_resource_lock_give();
+
+	return NULL;
+}
+
+static struct qdma_resource_master *qdma_get_master_resource_entry
+		(uint32_t dma_device_index)
+{
+	struct qdma_list_head *entry, *tmp;
+
+	qdma_resource_lock_take();
+	qdma_list_for_each_safe(entry, tmp, &master_resource_list) {
+		struct qdma_resource_master *q_resource =
+			(struct qdma_resource_master *)
+				QDMA_LIST_GET_DATA(entry);
+
+		if (q_resource->dma_device_index == dma_device_index) {
+			qdma_resource_lock_give();
+			return q_resource;
+		}
+	}
+	qdma_resource_lock_give();
+
+	return NULL;
+}
+
+static struct qdma_dev_entry *qdma_get_dev_entry(uint32_t dma_device_index,
+						uint16_t func_id)
+{
+	struct qdma_list_head *entry, *tmp;
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+
+	if (!q_resource)
+		return NULL;
+
+	qdma_resource_lock_take();
+	qdma_list_for_each_safe(entry, tmp, &q_resource->dev_list) {
+		struct qdma_dev_entry *dev_entry = (struct qdma_dev_entry *)
+			QDMA_LIST_GET_DATA(entry);
+
+		if (dev_entry->func_id == func_id) {
+			qdma_resource_lock_give();
+			return dev_entry;
+		}
+	}
+	qdma_resource_lock_give();
+
+	return NULL;
+}
+
+static struct qdma_resource_entry *qdma_free_entry_create(int q_base,
+							  uint32_t total_q)
+{
+	struct qdma_resource_entry *entry = (struct qdma_resource_entry *)
+		qdma_calloc(1, sizeof(struct qdma_resource_entry));
+	if (entry == NULL)
+		return NULL;
+
+	entry->total_q = total_q;
+	entry->qbase = q_base;
+
+	return entry;
+}
+
+static void qdma_submit_to_free_list(struct qdma_dev_entry *dev_entry,
+				     struct qdma_list_head *head)
+{
+	struct qdma_resource_entry *stretch_node = NULL;
+	struct qdma_list_head *entry, *tmp;
+	/* create a new node to be added to empty free list */
+	struct qdma_resource_entry *new_node = NULL;
+
+	if (!dev_entry->entry.total_q)
+		return;
+
+	if (qdma_list_is_empty(head)) {
+		new_node = qdma_free_entry_create(dev_entry->entry.qbase,
+				dev_entry->entry.total_q);
+		if (new_node == NULL)
+			return;
+		QDMA_LIST_SET_DATA(&new_node->node, new_node);
+		qdma_list_add_tail(&new_node->node, head);
+		/* reset device entry q resource params */
+		dev_entry->entry.qbase = -1;
+		dev_entry->entry.total_q = 0;
+	} else {
+		qdma_list_for_each_safe(entry, tmp, head) {
+			struct qdma_resource_entry *node =
+				(struct qdma_resource_entry *)
+					QDMA_LIST_GET_DATA(entry);
+
+			/* insert the free slot at appropriate place */
+			if (node->qbase > dev_entry->entry.qbase ||
+				qdma_list_is_last_entry(entry, head)) {
+				new_node = qdma_free_entry_create
+						(dev_entry->entry.qbase,
+						dev_entry->entry.total_q);
+				if (new_node == NULL)
+					return;
+				QDMA_LIST_SET_DATA(&new_node->node, new_node);
+				if (node->qbase > dev_entry->entry.qbase)
+					qdma_list_insert_before(&new_node->node,
+								&node->node);
+				else
+					qdma_list_add_tail(&new_node->node,
+							   head);
+				/* reset device entry q resource params */
+				dev_entry->entry.qbase = -1;
+				dev_entry->entry.total_q = 0;
+				break;
+			}
+		}
+	}
+
+	/* de-fragment (merge contiguous resource chunks) if possible */
+	qdma_list_for_each_safe(entry, tmp, head) {
+		struct qdma_resource_entry *node =
+		(struct qdma_resource_entry *)QDMA_LIST_GET_DATA(entry);
+
+		if (!stretch_node) {
+			stretch_node = node;
+		} else {
+			if ((stretch_node->qbase + stretch_node->total_q) ==
+					(uint32_t)node->qbase) {
+				stretch_node->total_q += node->total_q;
+				qdma_list_del(&node->node);
+				qdma_memfree(node);
+			} else {
+				stretch_node = node;
+			}
+			}
+		}
+	}
+}
+
+/**
+ * qdma_resource_entry() - return the best free list entry node that can
+ *                         accommodate the new request
+ */
+static struct qdma_resource_entry *qdma_get_resource_node(uint32_t qmax,
+							  int qbase,
+				   struct qdma_list_head *free_list_head)
+{
+	struct qdma_list_head *entry, *tmp;
+	struct qdma_resource_entry *best_fit_node = NULL;
+
+	/* try to honor requested qbase */
+	if (qbase >= 0) {
+		qdma_list_for_each_safe(entry, tmp, free_list_head) {
+			struct qdma_resource_entry *node =
+			(struct qdma_resource_entry *)QDMA_LIST_GET_DATA(entry);
+
+			if (qbase >= node->qbase &&
+					(node->qbase + node->total_q) >=
+					(qbase + qmax)) {
+				best_fit_node = node;
+				goto fragment_free_list;
+			}
+		}
+	}
+	best_fit_node = NULL;
+
+	/* find a best node to accommodate q resource request */
+	qdma_list_for_each_safe(entry, tmp, free_list_head) {
+		struct qdma_resource_entry *node =
+		(struct qdma_resource_entry *)QDMA_LIST_GET_DATA(entry);
+
+		if (node->total_q >= qmax) {
+			if (!best_fit_node || best_fit_node->total_q >=
+					node->total_q) {
+				best_fit_node = node;
+				qbase = best_fit_node->qbase;
+			}
+		}
+	}
+
+fragment_free_list:
+	if (!best_fit_node)
+		return NULL;
+
+	if (qbase == best_fit_node->qbase &&
+			qmax == best_fit_node->total_q)
+		return best_fit_node;
+
+	/* split free resource node accordingly */
+	if (qbase == best_fit_node->qbase &&
+			qmax != best_fit_node->total_q) {
+		/*
+		 * create an extra node to hold the extra queues from this node
+		 */
+		struct qdma_resource_entry *new_entry = NULL;
+		int lqbase = best_fit_node->qbase + qmax;
+		uint32_t lqmax = best_fit_node->total_q - qmax;
+
+		new_entry = qdma_free_entry_create(lqbase, lqmax);
+		if (new_entry == NULL)
+			return NULL;
+		QDMA_LIST_SET_DATA(&new_entry->node, new_entry);
+		qdma_list_insert_after(&new_entry->node,
+				       &best_fit_node->node);
+		best_fit_node->total_q -= lqmax;
+	} else if ((qbase > best_fit_node->qbase) &&
+			((qbase + qmax) == (best_fit_node->qbase +
+					best_fit_node->total_q))) {
+		/*
+		 * create an extra node to hold the extra queues from this node
+		 */
+		struct qdma_resource_entry *new_entry = NULL;
+		int lqbase = best_fit_node->qbase;
+		uint32_t lqmax = qbase - best_fit_node->qbase;
+
+		new_entry = qdma_free_entry_create(lqbase, lqmax);
+		if (new_entry == NULL)
+			return NULL;
+		QDMA_LIST_SET_DATA(&new_entry->node, new_entry);
+		qdma_list_insert_before(&new_entry->node,
+					&best_fit_node->node);
+		best_fit_node->total_q = qmax;
+		best_fit_node->qbase = qbase;
+	} else {
+		/*
+		 * create two extra node to hold the extra queues from this node
+		 */
+		struct qdma_resource_entry *new_entry = NULL;
+		int lqbase = best_fit_node->qbase;
+		uint32_t lqmax = qbase - best_fit_node->qbase;
+
+		new_entry = qdma_free_entry_create(lqbase, lqmax);
+		if (new_entry == NULL)
+			return NULL;
+		QDMA_LIST_SET_DATA(&new_entry->node, new_entry);
+		qdma_list_insert_before(&new_entry->node,
+					&best_fit_node->node);
+
+		best_fit_node->qbase = qbase;
+		best_fit_node->total_q -= lqmax;
+
+		lqbase = best_fit_node->qbase + qmax;
+		lqmax = best_fit_node->total_q - qmax;
+
+		new_entry = qdma_free_entry_create(lqbase, lqmax);
+		if (new_entry == NULL)
+			return NULL;
+		QDMA_LIST_SET_DATA(&new_entry->node, new_entry);
+		qdma_list_insert_after(&new_entry->node,
+				       &best_fit_node->node);
+		best_fit_node->total_q = qmax;
+	}
+
+	return best_fit_node;
+}
+
+static int qdma_request_q_resource(struct qdma_dev_entry *dev_entry,
+				    uint32_t new_qmax, int new_qbase,
+				    struct qdma_list_head *free_list_head)
+{
+	uint32_t qmax = dev_entry->entry.total_q;
+	int qbase = dev_entry->entry.qbase;
+	struct qdma_resource_entry *free_entry_node = NULL;
+	int rv = QDMA_SUCCESS;
+
+	/* submit already allocated queues back to free list before requesting
+	 * new resource
+	 */
+	qdma_submit_to_free_list(dev_entry, free_list_head);
+
+	if (!new_qmax)
+		return 0;
+	/* check if the request can be accommodated */
+	free_entry_node = qdma_get_resource_node(new_qmax, new_qbase,
+						 free_list_head);
+	if (free_entry_node == NULL) {
+		/* request cannot be accommodated. Restore the dev_entry */
+		free_entry_node = qdma_get_resource_node(qmax, qbase,
+							 free_list_head);
+		rv = -QDMA_ERR_RM_NO_QUEUES_LEFT;
+		qdma_log_error("%s: Not enough queues, err:%d\n", __func__,
+					   -QDMA_ERR_RM_NO_QUEUES_LEFT);
+		if (free_entry_node == NULL) {
+			dev_entry->entry.qbase = -1;
+			dev_entry->entry.total_q = 0;
+
+			return rv;
+		}
+	}
+
+	dev_entry->entry.qbase = free_entry_node->qbase;
+	dev_entry->entry.total_q = free_entry_node->total_q;
+
+	qdma_list_del(&free_entry_node->node);
+	qdma_memfree(free_entry_node);
+
+	return rv;
+}
+
+int qdma_master_resource_create(uint32_t bus_start, uint32_t bus_end,
+		int q_base, uint32_t total_q, uint32_t *dma_device_index)
+{
+	struct qdma_resource_master *q_resource;
+	struct qdma_resource_entry *free_entry;
+	static int index;
+
+	q_resource = qdma_find_master_resource_entry(bus_start, bus_end);
+	if (q_resource) {
+		*dma_device_index = q_resource->dma_device_index;
+		qdma_log_debug("%s: Resource already created", __func__);
+		qdma_log_debug("for this device(%d)\n",
+				q_resource->dma_device_index);
+		return -QDMA_ERR_RM_RES_EXISTS;
+	}
+
+	*dma_device_index = index;
+
+	q_resource = (struct qdma_resource_master *)qdma_calloc(1,
+		sizeof(struct qdma_resource_master));
+	if (!q_resource) {
+		qdma_log_error("%s: no memory for q_resource, err:%d\n",
+					__func__,
+					-QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	free_entry = (struct qdma_resource_entry *)
+		qdma_calloc(1, sizeof(struct qdma_resource_entry));
+	if (!free_entry) {
+		qdma_memfree(q_resource);
+		qdma_log_error("%s: no memory for free_entry, err:%d\n",
+					__func__,
+					-QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	qdma_resource_lock_take();
+	q_resource->dma_device_index = index;
+	q_resource->pci_bus_start = bus_start;
+	q_resource->pci_bus_end = bus_end;
+	q_resource->total_q = total_q;
+	q_resource->qbase = q_base;
+	qdma_list_init_head(&q_resource->dev_list);
+	qdma_list_init_head(&q_resource->free_list);
+	QDMA_LIST_SET_DATA(&q_resource->node, q_resource);
+	QDMA_LIST_SET_DATA(&q_resource->free_list, q_resource);
+	qdma_list_add_tail(&q_resource->node, &master_resource_list);
+
+
+	free_entry->total_q = total_q;
+	free_entry->qbase = q_base;
+	QDMA_LIST_SET_DATA(&free_entry->node, free_entry);
+	qdma_list_add_tail(&free_entry->node, &q_resource->free_list);
+	qdma_resource_lock_give();
+
+	qdma_log_debug("%s: New master resource created at %d",
+		__func__, index);
+	++index;
+
+	return QDMA_SUCCESS;
+}
+
+void qdma_master_resource_destroy(uint32_t dma_device_index)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_list_head *entry, *tmp;
+
+	if (!q_resource)
+		return;
+	qdma_resource_lock_take();
+	if (!qdma_list_is_empty(&q_resource->dev_list)) {
+		qdma_resource_lock_give();
+		return;
+	}
+	qdma_list_for_each_safe(entry, tmp, &q_resource->free_list) {
+		struct qdma_resource_entry *free_entry =
+			(struct qdma_resource_entry *)
+				QDMA_LIST_GET_DATA(entry);
+
+		qdma_list_del(&free_entry->node);
+		qdma_memfree(free_entry);
+	}
+	qdma_list_del(&q_resource->node);
+	qdma_memfree(q_resource);
+	qdma_resource_lock_give();
+}
+
+
+int qdma_dev_entry_create(uint32_t dma_device_index, uint16_t func_id)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+					__func__,
+					-QDMA_ERR_RM_RES_NOT_EXISTS);
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+	if (!dev_entry) {
+		qdma_resource_lock_take();
+		dev_entry = (struct qdma_dev_entry *)
+			qdma_calloc(1, sizeof(struct qdma_dev_entry));
+		if (dev_entry == NULL) {
+			qdma_resource_lock_give();
+			qdma_log_error("%s: Insufficient memory, err:%d\n",
+						__func__,
+						-QDMA_ERR_NO_MEM);
+			return -QDMA_ERR_NO_MEM;
+		}
+		dev_entry->func_id = func_id;
+		dev_entry->entry.qbase = -1;
+		dev_entry->entry.total_q = 0;
+		QDMA_LIST_SET_DATA(&dev_entry->entry.node, dev_entry);
+		qdma_list_add_tail(&dev_entry->entry.node,
+				   &q_resource->dev_list);
+		qdma_resource_lock_give();
+		qdma_log_info("%s: Created the dev entry successfully\n",
+						__func__);
+	} else {
+		qdma_log_error("%s: Dev entry already created, err = %d\n",
+						__func__,
+						-QDMA_ERR_RM_DEV_EXISTS);
+		return -QDMA_ERR_RM_DEV_EXISTS;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+void qdma_dev_entry_destroy(uint32_t dma_device_index, uint16_t func_id)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found.\n", __func__);
+		return;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+	if (!dev_entry) {
+		qdma_log_error("%s: Dev entry not found\n", __func__);
+		return;
+	}
+	qdma_resource_lock_take();
+	qdma_submit_to_free_list(dev_entry, &q_resource->free_list);
+
+	qdma_list_del(&dev_entry->entry.node);
+	qdma_memfree(dev_entry);
+	qdma_resource_lock_give();
+}
+
+int qdma_dev_update(uint32_t dma_device_index, uint16_t func_id,
+		    uint32_t qmax, int *qbase)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+	int rv;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_RES_NOT_EXISTS);
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry) {
+		qdma_log_error("%s: Dev Entry not found, err: %d\n",
+					__func__,
+					-QDMA_ERR_RM_DEV_NOT_EXISTS);
+		return -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+
+	qdma_resource_lock_take();
+
+	/* if any active queue on device, no more new qmax
+	 * configuration allowed
+	 */
+	if (dev_entry->active_h2c_qcnt ||
+			dev_entry->active_c2h_qcnt ||
+			dev_entry->active_cmpt_qcnt) {
+		qdma_resource_lock_give();
+		qdma_log_error("%s: Qs active. Config blocked, err: %d\n",
+				__func__, -QDMA_ERR_RM_QMAX_CONF_REJECTED);
+		return -QDMA_ERR_RM_QMAX_CONF_REJECTED;
+	}
+
+	rv = qdma_request_q_resource(dev_entry, qmax, *qbase,
+				&q_resource->free_list);
+
+	*qbase = dev_entry->entry.qbase;
+	qdma_resource_lock_give();
+
+
+	return rv;
+}
+
+int qdma_dev_qinfo_get(uint32_t dma_device_index, uint16_t func_id,
+		       int *qbase, uint32_t *qmax)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_RES_NOT_EXISTS);
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry) {
+		qdma_log_debug("%s: Dev Entry not created yet\n", __func__);
+		return -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+
+	qdma_resource_lock_take();
+	*qbase = dev_entry->entry.qbase;
+	*qmax = dev_entry->entry.total_q;
+	qdma_resource_lock_give();
+
+	return QDMA_SUCCESS;
+}
+
+enum qdma_dev_q_range qdma_dev_is_queue_in_range(uint32_t dma_device_index,
+						 uint16_t func_id,
+						 uint32_t qid_hw)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+	uint32_t qmax;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_RES_NOT_EXISTS);
+		return QDMA_DEV_Q_OUT_OF_RANGE;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry) {
+		qdma_log_error("%s: Dev entry not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_DEV_NOT_EXISTS);
+		return QDMA_DEV_Q_OUT_OF_RANGE;
+	}
+
+	qdma_resource_lock_take();
+	qmax = dev_entry->entry.qbase + dev_entry->entry.total_q;
+	if (dev_entry->entry.total_q && qid_hw < qmax &&
+			((int)qid_hw >= dev_entry->entry.qbase)) {
+		qdma_resource_lock_give();
+		return QDMA_DEV_Q_IN_RANGE;
+	}
+	qdma_resource_lock_give();
+
+	return QDMA_DEV_Q_OUT_OF_RANGE;
+}
+
+int qdma_dev_increment_active_queue(uint32_t dma_device_index, uint16_t func_id,
+				    enum qdma_dev_q_type q_type)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+	int rv = QDMA_SUCCESS;
+	uint32_t *active_qcnt = NULL;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_RES_NOT_EXISTS);
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry) {
+		qdma_log_error("%s: Dev Entry not found, err: %d\n",
+					__func__,
+					-QDMA_ERR_RM_DEV_NOT_EXISTS);
+		return -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+
+	qdma_resource_lock_take();
+	switch (q_type) {
+	case QDMA_DEV_Q_TYPE_H2C:
+		active_qcnt = &dev_entry->active_h2c_qcnt;
+		break;
+	case QDMA_DEV_Q_TYPE_C2H:
+		active_qcnt = &dev_entry->active_c2h_qcnt;
+		break;
+	case QDMA_DEV_Q_TYPE_CMPT:
+		active_qcnt = &dev_entry->active_cmpt_qcnt;
+		break;
+	default:
+		rv = -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+
+	if (active_qcnt && (dev_entry->entry.total_q < ((*active_qcnt) + 1))) {
+		qdma_resource_lock_give();
+		return -QDMA_ERR_RM_NO_QUEUES_LEFT;
+	}
+
+	if (active_qcnt) {
+		*active_qcnt = (*active_qcnt) + 1;
+		q_resource->active_qcnt++;
+	}
+	qdma_resource_lock_give();
+
+	return rv;
+}
+
+
+int qdma_dev_decrement_active_queue(uint32_t dma_device_index, uint16_t func_id,
+				    enum qdma_dev_q_type q_type)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+	int rv = QDMA_SUCCESS;
+
+	if (!q_resource) {
+		qdma_log_error("%s: Queue resource not found, err: %d\n",
+				__func__,
+			   -QDMA_ERR_RM_RES_NOT_EXISTS);
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+	}
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry) {
+		qdma_log_error("%s: Dev entry not found, err: %d\n",
+				__func__, -QDMA_ERR_RM_DEV_NOT_EXISTS);
+		return -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+
+	qdma_resource_lock_take();
+	switch (q_type) {
+	case QDMA_DEV_Q_TYPE_H2C:
+		if (dev_entry->active_h2c_qcnt)
+			dev_entry->active_h2c_qcnt--;
+		break;
+	case QDMA_DEV_Q_TYPE_C2H:
+		if (dev_entry->active_c2h_qcnt)
+			dev_entry->active_c2h_qcnt--;
+		break;
+	case QDMA_DEV_Q_TYPE_CMPT:
+		if (dev_entry->active_cmpt_qcnt)
+			dev_entry->active_cmpt_qcnt--;
+		break;
+	default:
+		rv = -QDMA_ERR_RM_DEV_NOT_EXISTS;
+	}
+	if (rv == QDMA_SUCCESS && q_resource->active_qcnt)
+		q_resource->active_qcnt--;
+	qdma_resource_lock_give();
+
+	return rv;
+}
+
+uint32_t qdma_get_active_queue_count(uint32_t dma_device_index)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	uint32_t q_cnt;
+
+	if (!q_resource)
+		return QDMA_SUCCESS;
+
+	qdma_resource_lock_take();
+	q_cnt = q_resource->active_qcnt;
+	qdma_resource_lock_give();
+
+	return q_cnt;
+}
+
+int qdma_get_device_active_queue_count(uint32_t dma_device_index,
+					uint16_t func_id,
+					enum qdma_dev_q_type q_type)
+{
+	struct qdma_resource_master *q_resource =
+			qdma_get_master_resource_entry(dma_device_index);
+	struct qdma_dev_entry *dev_entry;
+	uint32_t dev_active_qcnt = 0;
+
+	if (!q_resource)
+		return -QDMA_ERR_RM_RES_NOT_EXISTS;
+
+	dev_entry = qdma_get_dev_entry(dma_device_index, func_id);
+
+	if (!dev_entry)
+		return -QDMA_ERR_RM_DEV_NOT_EXISTS;
+
+	qdma_resource_lock_take();
+	switch (q_type) {
+	case QDMA_DEV_Q_TYPE_H2C:
+		dev_active_qcnt = dev_entry->active_h2c_qcnt;
+		break;
+	case QDMA_DEV_Q_TYPE_C2H:
+		dev_active_qcnt = dev_entry->active_c2h_qcnt;
+		break;
+	case QDMA_DEV_Q_TYPE_CMPT:
+		dev_active_qcnt = dev_entry->active_cmpt_qcnt;
+		break;
+	default:
+		dev_active_qcnt = 0;
+	}
+	qdma_resource_lock_give();
+
+	return dev_active_qcnt;
+}
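
For reviewers, the PF-side lifecycle these APIs imply, sketched under the
assumption of a 2048-queue board and a 32-queue function (all constants
illustrative; bus_start, bus_end and func_id come from the PCI probe):

	uint32_t dma_dev_idx;
	int qbase = 0;
	int rv;

	/* one master resource per board, created on first function probe */
	rv = qdma_master_resource_create(bus_start, bus_end, 0, 2048,
					 &dma_dev_idx);
	if (rv < 0 && rv != -QDMA_ERR_RM_RES_EXISTS)
		return rv;

	/* per-function entry, then carve a queue range from the free list */
	rv = qdma_dev_entry_create(dma_dev_idx, func_id);
	if (rv < 0)
		return rv;
	rv = qdma_dev_update(dma_dev_idx, func_id, 32, &qbase);
	if (rv < 0)
		return rv;

	/* every queue start bumps the active count; qmax is then frozen
	 * until all queues are stopped again
	 */
	rv = qdma_dev_increment_active_queue(dma_dev_idx, func_id,
					     QDMA_DEV_Q_TYPE_H2C);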
diff --git a/drivers/net/qdma/qdma_access/qdma_resource_mgmt.h b/drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
new file mode 100644
index 0000000000..00b52a63c8
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_RESOURCE_MGMT_H_
+#define __QDMA_RESOURCE_MGMT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA resource management interface definitions
+ *
+ * Header file *qdma_resource_mgmt.h* defines data structures and function
+ * signatures exported for QDMA queue management.
+ */
+
+#include "qdma_platform_env.h"
+#include "qdma_access_export.h"
+
+/**
+ * enum qdma_dev_q_range: Q range check
+ */
+enum qdma_dev_q_range {
+	/** @QDMA_DEV_Q_IN_RANGE: Q belongs to dev */
+	QDMA_DEV_Q_IN_RANGE,
+	/** @QDMA_DEV_Q_OUT_OF_RANGE: Q does not belong to dev */
+	QDMA_DEV_Q_OUT_OF_RANGE,
+	/** @QDMA_DEV_Q_RANGE_MAX: total Q validity states */
+	QDMA_DEV_Q_RANGE_MAX
+};
+
+/*****************************************************************************/
+/**
+ * qdma_master_resource_create(): create the master q resource
+ *
+ * @bus_start:  Bus number of the device i.e. pdev->bus->number
+ * @bus_end:    Ending bus number i.e. the subordinate bus number of the
+ *              parent bridge
+ * @q_base:     base from which this master resource needs to be created
+ * @total_q:     total queues in this master resource
+ * @dma_device_index: DMA device identifier assigned by resource manager to
+ *                    track the number of devices
+ *
+ * A master resource per driver per board is created to manage the queues
+ * allocated to this driver.
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_master_resource_create(uint32_t bus_start, uint32_t bus_end,
+		int q_base, uint32_t total_q, uint32_t *dma_device_index);
+
+/*****************************************************************************/
+/**
+ * qdma_master_resource_destroy(): destroy the master q resource
+ *
+ * @dma_device_index:  DMA device identifier this master resource belongs to
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_master_resource_destroy(uint32_t dma_device_index);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_entry_create(): create a device entry for @func_id
+ *
+ * @dma_device_index:  DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ *
+ * A device entry is to be created on every function probe.
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_dev_entry_create(uint32_t dma_device_index, uint16_t func_id);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_entry_destroy(): destroy device entry for @func_id
+ *
+ * @dma_device_index:  DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ *
+ * Return:	None
+ *****************************************************************************/
+void qdma_dev_entry_destroy(uint32_t dma_device_index, uint16_t func_id);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_update(): update qmax for the device
+ *
+ * @dma_device_index: DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @qmax:        qmax for this device
+ * @qbase:       output qbase for this device
+ *
+ * This API is to be called to request an update of qmax for any function.
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_dev_update(uint32_t dma_device_index, uint16_t func_id,
+		    uint32_t qmax, int *qbase);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_qinfo_get(): get device info
+ *
+ * @dma_device_index: DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @qbase:       output qbase for this device
+ * @qmax:        output qmax for this device
+ *
+ * This API can be used to get the qbase and qmax for any function.
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_dev_qinfo_get(uint32_t dma_device_index, uint16_t func_id,
+		       int *qbase, uint32_t *qmax);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_is_queue_in_range(): check if queue belongs to this device
+ *
+ * @dma_device_index:  DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @qid_hw:      hardware queue id
+ *
+ * This API checks if the queue ID is in the valid range of the specified
+ * function.
+ *
+ * Return:	@QDMA_DEV_Q_IN_RANGE: valid and
+ *		@QDMA_DEV_Q_OUT_OF_RANGE: invalid
+ *****************************************************************************/
+enum qdma_dev_q_range qdma_dev_is_queue_in_range(uint32_t dma_device_index,
+						 uint16_t func_id,
+						 uint32_t qid_hw);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_increment_active_queue(): increment active queue count
+ *
+ * @dma_device_index: DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @q_type:      Queue type i.e. C2H or H2C or CMPT
+ *
+ * This API is used to increment the active queue count of this function
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_dev_increment_active_queue(uint32_t dma_device_index, uint16_t func_id,
+				    enum qdma_dev_q_type q_type);
+
+/*****************************************************************************/
+/**
+ * qdma_dev_decrement_active_queue(): decrement active queue count
+ *
+ * @dma_device_index: DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @q_type:      Queue type i.e. C2H or H2C or CMPT
+ *
+ * This API is used to decrement the active queue count of this function
+ *
+ * Return:	0  : success and < 0: failure
+ *****************************************************************************/
+int qdma_dev_decrement_active_queue(uint32_t dma_device_index, uint16_t func_id,
+				    enum qdma_dev_q_type q_type);
+
+/*****************************************************************************/
+/**
+ * qdma_get_active_queue_count(): get the total active queue count
+ *
+ * @dma_device_index:  DMA device identifier that this resource belongs to
+ *
+ * This API is used to check if any active queue is present.
+ *
+ * Return:	active queue count
+ *****************************************************************************/
+uint32_t qdma_get_active_queue_count(uint32_t dma_device_index);
+
+/*****************************************************************************/
+/**
+ * qdma_get_device_active_queue_count(): get device active queue count
+ *
+ * @dma_device_index: DMA device identifier that this device belongs to
+ * @func_id:     device identification id
+ * @q_type:      Queue type i.e. C2H or H2C or CMPT
+ *
+ * This API is used to get the active queue count of this function
+ *
+ * Return:	>= 0: active queue count and < 0: failure
+ *****************************************************************************/
+int qdma_get_device_active_queue_count(uint32_t dma_device_index,
+					uint16_t func_id,
+					enum qdma_dev_q_type q_type);
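+
+/*
+ * Illustrative usage (a hedged sketch, not part of the exported API): the
+ * expected per-device lifecycle of the resource manager, using only the
+ * functions declared above. Bus numbers, queue counts and variable names
+ * below are placeholders.
+ *
+ *	uint32_t dma_dev_idx;
+ *	int qbase = 0;
+ *
+ *	qdma_master_resource_create(0x01, 0x04, 0, 2048, &dma_dev_idx);
+ *	qdma_dev_entry_create(dma_dev_idx, func_id);
+ *	qdma_dev_update(dma_dev_idx, func_id, 64, &qbase);
+ *	qdma_dev_increment_active_queue(dma_dev_idx, func_id,
+ *					QDMA_DEV_Q_TYPE_H2C);
+ *	... queue is in service ...
+ *	qdma_dev_decrement_active_queue(dma_dev_idx, func_id,
+ *					QDMA_DEV_Q_TYPE_H2C);
+ *	qdma_dev_entry_destroy(dma_dev_idx, func_id);
+ *	qdma_master_resource_destroy(dma_dev_idx);
+ */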
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_RESOURCE_MGMT_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
new file mode 100644
index 0000000000..8a69cd29ec
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
@@ -0,0 +1,5851 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_s80_hard_access.h"
+#include "qdma_s80_hard_reg.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "qdma_s80_hard_access.tmh"
+#endif
+
+/** QDMA S80 Hard Context array size */
+#define QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS              4
+#define QDMA_S80_HARD_CMPT_CONTEXT_NUM_WORDS            4
+#define QDMA_S80_HARD_QID2VEC_CONTEXT_NUM_WORDS         1
+#define QDMA_S80_HARD_HW_CONTEXT_NUM_WORDS              2
+#define QDMA_S80_HARD_CR_CONTEXT_NUM_WORDS              1
+#define QDMA_S80_HARD_IND_INTR_CONTEXT_NUM_WORDS        3
+#define QDMA_S80_HARD_PFETCH_CONTEXT_NUM_WORDS          2
+
+#define QDMA_S80_HARD_VF_USER_BAR_ID   2
+
+#define QDMA_S80_REG_GROUP_1_START_ADDR	0x000
+#define QDMA_S80_REG_GROUP_2_START_ADDR	0x400
+#define QDMA_S80_REG_GROUP_3_START_ADDR	0xB00
+#define QDMA_S80_REG_GROUP_4_START_ADDR	0x1014
+
+#define QDMA_S80_HARD_REG_TRQ_SEL_FMAP_STEP	4
+
+#define QDMA_S80_HARD_IND_CTXT_DATA_NUM_REGS	4
+
+#define QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS	7
+#define QDMA_S80_HARD_GLBL_TRQ_ERR_ALL_MASK			0XB3
+#define QDMA_S80_HARD_GLBL_DSC_ERR_ALL_MASK			0X1F9037E
+#define QDMA_S80_HARD_C2H_ERR_ALL_MASK				0X3F6DF
+#define QDMA_S80_HARD_C2H_FATAL_ERR_ALL_MASK			0X1FDF1B
+#define QDMA_S80_HARD_H2C_ERR_ALL_MASK				0X3F
+#define QDMA_S80_HARD_SBE_ERR_ALL_MASK				0XFFFFFFFF
+#define QDMA_S80_HARD_DBE_ERR_ALL_MASK				0XFFFFFFFF
+
+#define QDMA_S80_HARD_OFFSET_DMAP_SEL_INT_CIDX                  0x6400
+#define QDMA_S80_HARD_OFFSET_DMAP_SEL_H2C_DSC_PIDX          0x6404
+#define QDMA_S80_HARD_OFFSET_DMAP_SEL_C2H_DSC_PIDX          0x6408
+#define QDMA_S80_HARD_OFFSET_DMAP_SEL_CMPT_CIDX               0x640C
+
+#define QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_INT_CIDX             0x3000
+#define QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_H2C_DSC_PIDX     0x3004
+#define QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_C2H_DSC_PIDX     0x3008
+#define QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_CMPT_CIDX          0x300C
+
+#define QDMA_S80_HARD_DMA_SEL_INT_SW_CIDX_MASK               GENMASK(15, 0)
+#define QDMA_S80_HARD_DMA_SEL_INT_RING_IDX_MASK              GENMASK(23, 16)
+#define QDMA_S80_HARD_DMA_SEL_DESC_PIDX_MASK                   GENMASK(15, 0)
+#define QDMA_S80_HARD_DMA_SEL_IRQ_EN_MASK                        BIT(16)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_IRQ_EN_MASK             BIT(28)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_STS_DESC_EN_MASK    BIT(27)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_TRG_MODE_MASK        GENMASK(26, 24)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_TMR_CNT_MASK          GENMASK(23, 20)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_CNT_THRESH_MASK     GENMASK(19, 16)
+#define QDMA_S80_HARD_DMAP_SEL_CMPT_WRB_CIDX_MASK        GENMASK(15, 0)
+#define QDMA_S80_HARD_INTR_CTXT_BADDR_GET_H_MASK     GENMASK_ULL(63, 35)
+#define QDMA_S80_HARD_INTR_CTXT_BADDR_GET_L_MASK     GENMASK_ULL(34, 12)
+#define QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_H_MASK    GENMASK_ULL(63, 42)
+#define QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_M_MASK    GENMASK_ULL(41, 10)
+#define QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_L_MASK    GENMASK_ULL(9, 6)
+#define QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_H_MASK     GENMASK(15, 8)
+#define QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_L_MASK     GENMASK(7, 0)
+#define QDMA_S80_HARD_QID2VEC_H2C_VECTOR             GENMASK(16, 9)
+#define QDMA_S80_HARD_QID2VEC_H2C_COAL_EN            BIT(17)
+
+static void qdma_s80_hard_hw_st_h2c_err_process(void *dev_hndl);
+static void qdma_s80_hard_hw_st_c2h_err_process(void *dev_hndl);
+static void qdma_s80_hard_hw_desc_err_process(void *dev_hndl);
+static void qdma_s80_hard_hw_trq_err_process(void *dev_hndl);
+static void qdma_s80_hard_hw_ram_sbe_err_process(void *dev_hndl);
+static void qdma_s80_hard_hw_ram_dbe_err_process(void *dev_hndl);
+
+static struct qdma_s80_hard_hw_err_info
+		qdma_s80_hard_err_info[QDMA_S80_HARD_ERRS_ALL] = {
+	/* Descriptor errors */
+	{
+		QDMA_S80_HARD_DSC_ERR_POISON,
+		"Poison error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_POISON_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_UR_CA,
+		"Unsupported request or completer aborted error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_UR_CA_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_PARAM,
+		"Parameter mismatch error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_PARAM_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_ADDR,
+		"Address mismatch error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_ADDR_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_TAG,
+		"Unexpected tag error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_TAG_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_FLR,
+		"FLR error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_FLR_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_TIMEOUT,
+		"Timed out error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_TIMEOUT_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_DAT_POISON,
+		"Poison data error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DAT_POISON_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_FLR_CANCEL,
+		"Descriptor fetch cancelled due to FLR error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_FLR_CANCEL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_DMA,
+		"DMA engine error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DMA_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_DSC,
+		"Invalid PIDX update error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DSC_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_RQ_CANCEL,
+		"Descriptor fetch cancelled due to disable register status error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_RQ_CANCEL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_DBE,
+		"UNC_ERR_RAM_DBE error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_DBE_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_SBE,
+		"UNC_ERR_RAM_SBE error",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		GLBL_DSC_ERR_STS_SBE_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+	{
+		QDMA_S80_HARD_DSC_ERR_ALL,
+		"All Descriptor errors",
+		QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_DSC_MASK,
+		&qdma_s80_hard_hw_desc_err_process
+	},
+
+	/* TRQ errors */
+	{
+		QDMA_S80_HARD_TRQ_ERR_UNMAPPED,
+		"Access targeted unmapped register space via CSR pathway error",
+		QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_UNMAPPED_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&qdma_s80_hard_hw_trq_err_process
+	},
+	{
+		QDMA_S80_HARD_TRQ_ERR_QID_RANGE,
+		"Qid range error",
+		QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_QID_RANGE_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&qdma_s80_hard_hw_trq_err_process
+	},
+	{
+		QDMA_S80_HARD_TRQ_ERR_VF_ACCESS_ERR,
+		"VF attempted to access Global register space or Function map",
+		QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&qdma_s80_hard_hw_trq_err_process
+	},
+	{
+		QDMA_S80_HARD_TRQ_ERR_TCP_TIMEOUT,
+		"Timeout on request to dma internal csr register",
+		QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		GLBL_TRQ_ERR_STS_TCP_TIMEOUT_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&qdma_s80_hard_hw_trq_err_process
+	},
+	{
+		QDMA_S80_HARD_TRQ_ERR_ALL,
+		"All TRQ errors",
+		QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_TRQ_MASK,
+		&qdma_s80_hard_hw_trq_err_process
+	},
+
+	/* C2H errors */
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_MTY_MISMATCH,
+		"MTY mismatch error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_MTY_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_LEN_MISMATCH,
+		"Packet length mismatch error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_LEN_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_QID_MISMATCH,
+		"Qid mismatch error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_QID_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_DESC_RSP_ERR,
+		"Descriptor error bit set",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_DESC_RSP_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+		"Data parity error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_MSI_INT_FAIL,
+		"MSI got a fail response error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_MSI_INT_FAIL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_ERR_DESC_CNT,
+		"Descriptor count error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_ERR_DESC_CNT_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+		"Port id in packet and pfetch ctxt mismatch error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_PORTID_BYP_IN_MISMATCH,
+		"Port id in packet and bypass in mismatch error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_WRB_INV_Q_ERR,
+		"Writeback on invalid queue error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_INV_Q_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_WRB_QFULL_ERR,
+		"Completion queue gets full error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_QFULL_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_WRB_CIDX_ERR,
+		"Bad CIDX update by the software error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_CIDX_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_WRB_PRTY_ERR,
+		"C2H completion Parity error",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		C2H_ERR_STAT_WRB_PRTY_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_C2H_ERR_ALL,
+		"All C2h errors",
+		QDMA_S80_HARD_C2H_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		QDMA_S80_HARD_C2H_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+
+	/* C2H fatal errors */
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_MTY_MISMATCH,
+		"Fatal MTY mismatch error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_LEN_MISMATCH,
+		"Fatal Len mismatch error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_QID_MISMATCH,
+		"Fatal Qid mismatch error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_WRB_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_INT_QID2VEC_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_INT_QID2VEC_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_WRB_COAL_DATA_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_TUSER_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_TUSER_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_WPL_DATA_PAR_ERR,
+		"RAM double bit fatal error",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_FATAL_ERR_ALL,
+		"All fatal errors",
+		QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK,
+		&qdma_s80_hard_hw_st_c2h_err_process
+	},
+
+	/* H2C ST errors */
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_ZERO_LEN_DESC_ERR,
+		"Zero length descriptor error",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_ZERO_LEN_DS_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_SDI_MRKR_REQ_MOP_ERR,
+		"A non-EOP descriptor received",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_NO_DMA_DSC,
+		"No DMA descriptor received error",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_NO_DMA_DS_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_DBE,
+		"Double bit error detected on H2C-ST data error",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_DBE_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_SBE,
+		"Single bit error detected on H2C-ST data error",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		H2C_ERR_STAT_SBE_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+	{
+		QDMA_S80_HARD_ST_H2C_ERR_ALL,
+		"All H2C errors",
+		QDMA_S80_HARD_H2C_ERR_MASK_ADDR,
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		QDMA_S80_HARD_H2C_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK,
+		&qdma_s80_hard_hw_st_h2c_err_process
+	},
+
+	/* SBE errors */
+	{
+		QDMA_S80_HARD_SBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_H2C0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_MI_C2H0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_FUNC_MAP,
+		"Function map RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_FUNC_MAP_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_HW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CRD_RCV_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_SW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DSC_CPLI,
+		"Descriptor engine fetch completion information RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CPLI_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DSC_CPLD_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_PASID_CTXT_RAM,
+		"Pasid ctxt FIFO RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PASID_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_PAYLOAD_FIFO_RAM,
+		"C2H ST payload FIFO RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PLD_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_QID_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_TUSER_FIFO_RAM,
+		"C2H ST TUSER FIFO RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_TUSER_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_WRB_COAL_DATA_RAM,
+		"Writeback Coalescing RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_INT_QID2VEC_RAM,
+		"Interrupt QID2VEC RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_INT_QID2VEC_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_INT_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_WRB_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		RAM_SBE_STS_A_PFCH_LL_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_S80_HARD_SBE_ERR_ALL,
+		"All SBE errors",
+		QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+		QDMA_S80_HARD_SBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK,
+		&qdma_s80_hard_hw_ram_sbe_err_process
+	},
+
+	/* DBE errors */
+	{
+		QDMA_S80_HARD_DBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_H2C0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_MI_C2H0_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_FUNC_MAP,
+		"Function map RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_FUNC_MAP_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_HW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CRD_RCV_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_SW_CTXT_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DSC_CPLI,
+		"Descriptor engine fetch completion information RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CPLI_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DSC_CPLD_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_PASID_CTXT_RAM,
+		"PASID CTXT RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PASID_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_PAYLOAD_FIFO_RAM,
+		"Payload fifo RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PLD_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_QID_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_WRB_COAL_DATA_RAM,
+		"Writeback Coalescing RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_INT_QID2VEC_RAM,
+		"QID2VEC RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_INT_QID2VEC_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_INT_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_WRB_CTXT_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM single bit ECC error",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		RAM_DBE_STS_A_PFCH_LL_RAM_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_S80_HARD_DBE_ERR_ALL,
+		"All DBE errors",
+		QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR,
+		QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+		QDMA_S80_HARD_DBE_ERR_ALL_MASK,
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK,
+		&qdma_s80_hard_hw_ram_dbe_err_process
+	}
+};
+
+static int32_t all_qdma_s80_hard_hw_errs
+		[QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS] = {
+	QDMA_S80_HARD_DSC_ERR_ALL,
+	QDMA_S80_HARD_TRQ_ERR_ALL,
+	QDMA_S80_HARD_ST_C2H_ERR_ALL,
+	QDMA_S80_HARD_ST_FATAL_ERR_ALL,
+	QDMA_S80_HARD_ST_H2C_ERR_ALL,
+	QDMA_S80_HARD_SBE_ERR_ALL,
+	QDMA_S80_HARD_DBE_ERR_ALL
+};
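+
+/*
+ * Illustrative flow (hedged sketch): a top-level error handler reads the
+ * global error status, walks the leaf aggregators listed above and, for
+ * each asserted leaf, invokes its per-leaf process routine from
+ * qdma_s80_hard_err_info. The field names used here are assumed from the
+ * initializer layout of the table above:
+ *
+ *	for (i = 0; i < QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+ *		idx = all_qdma_s80_hard_hw_errs[i];
+ *		if (glbl_err_stat &
+ *		    qdma_s80_hard_err_info[idx].global_err_mask)
+ *			qdma_s80_hard_err_info[idx].qdma_s80_hard_hw_err_process(dev_hndl);
+ *	}
+ */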
+
+union qdma_s80_hard_ind_ctxt_cmd {
+	uint32_t word;
+	struct {
+		uint32_t busy:1;
+		uint32_t sel:4;
+		uint32_t op:2;
+		uint32_t qid:11;
+		uint32_t rsvd:14;
+	} bits;
+};
+
+struct qdma_s80_hard_indirect_ctxt_regs {
+	uint32_t qdma_ind_ctxt_data[QDMA_S80_HARD_IND_CTXT_DATA_NUM_REGS];
+	uint32_t qdma_ind_ctxt_mask[QDMA_S80_HARD_IND_CTXT_DATA_NUM_REGS];
+	union qdma_s80_hard_ind_ctxt_cmd cmd;
+};
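+
+/*
+ * Illustrative note (hedged sketch): the command union above encodes the
+ * usual QDMA indirect context protocol: program the data/mask registers,
+ * write one command word carrying @sel, @op and @qid, then poll @busy
+ * until the engine completes. Roughly, with ctxt_cmd_reg as a placeholder
+ * for the indirect command register offset and qdma_reg_read()/
+ * qdma_reg_write() as the access library's register helpers:
+ *
+ *	union qdma_s80_hard_ind_ctxt_cmd cmd = { .word = 0 };
+ *
+ *	cmd.bits.qid = hw_qid;
+ *	cmd.bits.op = op;	// clear/write/read/invalidate opcode
+ *	cmd.bits.sel = sel;	// enum ind_ctxt_cmd_sel context selector
+ *	qdma_reg_write(dev_hndl, ctxt_cmd_reg, cmd.word);
+ *	do {
+ *		cmd.word = qdma_reg_read(dev_hndl, ctxt_cmd_reg);
+ *	} while (cmd.bits.busy);	// real code bounds this poll
+ */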
+
+static struct qctx_entry qdma_s80_hard_sw_ctxt_entries[] = {
+	{"PIDX", 0},
+	{"IRQ Arm", 0},
+	{"Queue Enable", 0},
+	{"Fetch Credit Enable", 0},
+	{"Write back/Intr Check", 0},
+	{"Write back/Intr Interval", 0},
+	{"Function Id", 0},
+	{"Ring Size", 0},
+	{"Descriptor Size", 0},
+	{"Bypass Enable", 0},
+	{"MM Channel", 0},
+	{"Writeback Enable", 0},
+	{"Interrupt Enable", 0},
+	{"Port Id", 0},
+	{"Interrupt No Last", 0},
+	{"Error", 0},
+	{"Writeback Error Sent", 0},
+	{"IRQ Request", 0},
+	{"Marker Disable", 0},
+	{"Is Memory Mapped", 0},
+	{"Descriptor Ring Base Addr (Low)", 0},
+	{"Descriptor Ring Base Addr (High)", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_hw_ctxt_entries[] = {
+	{"CIDX", 0},
+	{"Credits Consumed", 0},
+	{"Descriptors Pending", 0},
+	{"Queue Invalid No Desc Pending", 0},
+	{"Eviction Pending", 0},
+	{"Fetch Pending", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_credit_ctxt_entries[] = {
+	{"Credit", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_cmpt_ctxt_entries[] = {
+	{"Enable Status Desc Update", 0},
+	{"Enable Interrupt", 0},
+	{"Trigger Mode", 0},
+	{"Function Id", 0},
+	{"Counter Index", 0},
+	{"Timer Index", 0},
+	{"Interrupt State", 0},
+	{"Color", 0},
+	{"Ring Size", 0},
+	{"Base Address (Low)", 0},
+	{"Base Address (High)", 0},
+	{"Descriptor Size", 0},
+	{"PIDX", 0},
+	{"CIDX", 0},
+	{"Valid", 0},
+	{"Error", 0},
+	{"Trigger Pending", 0},
+	{"Timer Running", 0},
+	{"Full Update", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_c2h_pftch_ctxt_entries[] = {
+	{"Bypass", 0},
+	{"Buffer Size Index", 0},
+	{"Port Id", 0},
+	{"Error", 0},
+	{"Prefetch Enable", 0},
+	{"In Prefetch", 0},
+	{"Software Credit", 0},
+	{"Valid", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_qid2vec_ctxt_entries[] = {
+	{"c2h_vector", 0},
+	{"c2h_en_coal", 0},
+	{"h2c_vector", 0},
+	{"h2c_en_coal", 0},
+};
+
+static struct qctx_entry qdma_s80_hard_ind_intr_ctxt_entries[] = {
+	{"valid", 0},
+	{"vec", 0},
+	{"int_st", 0},
+	{"color", 0},
+	{"baddr_4k (Low)", 0},
+	{"baddr_4k (High)", 0},
+	{"page_size", 0},
+	{"pidx", 0},
+};
+
+static int qdma_s80_hard_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int qdma_s80_hard_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int qdma_s80_hard_indirect_reg_read(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data);
+static int qdma_s80_hard_indirect_reg_write(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt);
+
+uint32_t qdma_s80_hard_get_config_num_regs(void)
+{
+	return qdma_s80_hard_config_num_regs_get();
+}
+
+struct xreg_info *qdma_s80_hard_get_config_regs(void)
+{
+	return qdma_s80_hard_config_regs_get();
+}
+
+uint32_t qdma_s80_hard_reg_dump_buf_len(void)
+{
+	uint32_t length = (qdma_s80_hard_config_num_regs_get() + 1)
+			* REG_DUMP_SIZE_PER_LINE;
+	return length;
+}
+
+int qdma_s80_hard_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *req_buflen)
+{
+	uint32_t len = 0;
+	int rv = 0;
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT) {
+		len += (((sizeof(qdma_s80_hard_cmpt_ctxt_entries) /
+			sizeof(qdma_s80_hard_cmpt_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	} else {
+		len += (((sizeof(qdma_s80_hard_sw_ctxt_entries) /
+				sizeof(qdma_s80_hard_sw_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(qdma_s80_hard_hw_ctxt_entries) /
+			sizeof(qdma_s80_hard_hw_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(qdma_s80_hard_credit_ctxt_entries) /
+			sizeof(qdma_s80_hard_credit_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			len += (((sizeof(qdma_s80_hard_cmpt_ctxt_entries) /
+			sizeof(qdma_s80_hard_cmpt_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+			len += (((sizeof(qdma_s80_hard_c2h_pftch_ctxt_entries) /
+			sizeof(qdma_s80_hard_c2h_pftch_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+		}
+	}
+
+	*req_buflen = len;
+	return rv;
+}
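+
+/*
+ * Illustrative usage (hedged sketch): callers size the context dump buffer
+ * with qdma_s80_hard_context_buf_len() before dumping. qdma_calloc() and
+ * qdma_memfree() are the allocator hooks assumed from the access library's
+ * platform layer:
+ *
+ *	uint32_t buflen = 0;
+ *	char *buf;
+ *
+ *	qdma_s80_hard_context_buf_len(st, q_type, &buflen);
+ *	buf = (char *)qdma_calloc(1, buflen);
+ *	... read the queue contexts, then dump them into buf[0..buflen) ...
+ *	qdma_memfree(buf);
+ */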
+
+static uint32_t qdma_s80_hard_intr_context_buf_len(void)
+{
+	uint32_t len = 0;
+
+	len += (((sizeof(qdma_s80_hard_ind_intr_ctxt_entries) /
+			sizeof(qdma_s80_hard_ind_intr_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	return len;
+}
+
+/*
+ * qdma_s80_hard_fill_sw_ctxt() - Helper function to fill SW context
+ *                                into structure
+ */
+static void qdma_s80_hard_fill_sw_ctxt(struct qdma_descq_sw_ctxt *sw_ctxt)
+{
+	qdma_s80_hard_sw_ctxt_entries[0].value = sw_ctxt->pidx;
+	qdma_s80_hard_sw_ctxt_entries[1].value = sw_ctxt->irq_arm;
+	qdma_s80_hard_sw_ctxt_entries[2].value = sw_ctxt->qen;
+	qdma_s80_hard_sw_ctxt_entries[3].value = sw_ctxt->frcd_en;
+	qdma_s80_hard_sw_ctxt_entries[4].value = sw_ctxt->wbi_chk;
+	qdma_s80_hard_sw_ctxt_entries[5].value = sw_ctxt->wbi_intvl_en;
+	qdma_s80_hard_sw_ctxt_entries[6].value = sw_ctxt->fnc_id;
+	qdma_s80_hard_sw_ctxt_entries[7].value = sw_ctxt->rngsz_idx;
+	qdma_s80_hard_sw_ctxt_entries[8].value = sw_ctxt->desc_sz;
+	qdma_s80_hard_sw_ctxt_entries[9].value = sw_ctxt->bypass;
+	qdma_s80_hard_sw_ctxt_entries[10].value = sw_ctxt->mm_chn;
+	qdma_s80_hard_sw_ctxt_entries[11].value = sw_ctxt->wbk_en;
+	qdma_s80_hard_sw_ctxt_entries[12].value = sw_ctxt->irq_en;
+	qdma_s80_hard_sw_ctxt_entries[13].value = sw_ctxt->port_id;
+	qdma_s80_hard_sw_ctxt_entries[14].value = sw_ctxt->irq_no_last;
+	qdma_s80_hard_sw_ctxt_entries[15].value = sw_ctxt->err;
+	qdma_s80_hard_sw_ctxt_entries[16].value = sw_ctxt->err_wb_sent;
+	qdma_s80_hard_sw_ctxt_entries[17].value = sw_ctxt->irq_req;
+	qdma_s80_hard_sw_ctxt_entries[18].value = sw_ctxt->mrkr_dis;
+	qdma_s80_hard_sw_ctxt_entries[19].value = sw_ctxt->is_mm;
+	qdma_s80_hard_sw_ctxt_entries[20].value =
+			sw_ctxt->ring_bs_addr & 0xFFFFFFFF;
+	qdma_s80_hard_sw_ctxt_entries[21].value =
+		(sw_ctxt->ring_bs_addr >> 32) & 0xFFFFFFFF;
+}
+
+/*
+ * qdma_s80_hard_fill_cmpt_ctxt() - Helper function to fill completion
+ *                                  context into structure
+ */
+static void qdma_s80_hard_fill_cmpt_ctxt(struct qdma_descq_cmpt_ctxt *cmpt_ctxt)
+{
+	qdma_s80_hard_cmpt_ctxt_entries[0].value = cmpt_ctxt->en_stat_desc;
+	qdma_s80_hard_cmpt_ctxt_entries[1].value = cmpt_ctxt->en_int;
+	qdma_s80_hard_cmpt_ctxt_entries[2].value = cmpt_ctxt->trig_mode;
+	qdma_s80_hard_cmpt_ctxt_entries[3].value = cmpt_ctxt->fnc_id;
+	qdma_s80_hard_cmpt_ctxt_entries[4].value = cmpt_ctxt->counter_idx;
+	qdma_s80_hard_cmpt_ctxt_entries[5].value = cmpt_ctxt->timer_idx;
+	qdma_s80_hard_cmpt_ctxt_entries[6].value = cmpt_ctxt->in_st;
+	qdma_s80_hard_cmpt_ctxt_entries[7].value = cmpt_ctxt->color;
+	qdma_s80_hard_cmpt_ctxt_entries[8].value = cmpt_ctxt->ringsz_idx;
+	qdma_s80_hard_cmpt_ctxt_entries[9].value =
+			cmpt_ctxt->bs_addr & 0xFFFFFFFF;
+	qdma_s80_hard_cmpt_ctxt_entries[10].value =
+		(cmpt_ctxt->bs_addr >> 32) & 0xFFFFFFFF;
+	qdma_s80_hard_cmpt_ctxt_entries[11].value = cmpt_ctxt->desc_sz;
+	qdma_s80_hard_cmpt_ctxt_entries[12].value = cmpt_ctxt->pidx;
+	qdma_s80_hard_cmpt_ctxt_entries[13].value = cmpt_ctxt->cidx;
+	qdma_s80_hard_cmpt_ctxt_entries[14].value = cmpt_ctxt->valid;
+	qdma_s80_hard_cmpt_ctxt_entries[15].value = cmpt_ctxt->err;
+	qdma_s80_hard_cmpt_ctxt_entries[16].value = cmpt_ctxt->user_trig_pend;
+	qdma_s80_hard_cmpt_ctxt_entries[17].value = cmpt_ctxt->timer_running;
+	qdma_s80_hard_cmpt_ctxt_entries[18].value = cmpt_ctxt->full_upd;
+}
+
+/*
+ * qdma_s80_hard_fill_hw_ctxt() - Helper function to fill HW context
+ *                                into structure
+ */
+static void qdma_s80_hard_fill_hw_ctxt(struct qdma_descq_hw_ctxt *hw_ctxt)
+{
+	qdma_s80_hard_hw_ctxt_entries[0].value = hw_ctxt->cidx;
+	qdma_s80_hard_hw_ctxt_entries[1].value = hw_ctxt->crd_use;
+	qdma_s80_hard_hw_ctxt_entries[2].value = hw_ctxt->dsc_pend;
+	qdma_s80_hard_hw_ctxt_entries[3].value = hw_ctxt->idl_stp_b;
+	qdma_s80_hard_hw_ctxt_entries[4].value = hw_ctxt->evt_pnd;
+	qdma_s80_hard_hw_ctxt_entries[5].value = hw_ctxt->fetch_pnd;
+}
+
+/*
+ * qdma_s80_hard_fill_credit_ctxt() - Helper function to fill credit
+ *                                    context into structure
+ */
+static void qdma_s80_hard_fill_credit_ctxt
+		(struct qdma_descq_credit_ctxt *cr_ctxt)
+{
+	qdma_s80_hard_credit_ctxt_entries[0].value = cr_ctxt->credit;
+}
+
+/*
+ * qdma_s80_hard_fill_pfetch_ctxt() - Helper function to fill prefetch
+ *                                    context into structure
+ */
+static void qdma_s80_hard_fill_pfetch_ctxt
+		(struct qdma_descq_prefetch_ctxt *pfetch_ctxt)
+{
+	qdma_s80_hard_c2h_pftch_ctxt_entries[0].value = pfetch_ctxt->bypass;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[1].value = pfetch_ctxt->bufsz_idx;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[2].value = pfetch_ctxt->port_id;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[3].value = pfetch_ctxt->err;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[4].value = pfetch_ctxt->pfch_en;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[5].value = pfetch_ctxt->pfch;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[6].value = pfetch_ctxt->sw_crdt;
+	qdma_s80_hard_c2h_pftch_ctxt_entries[7].value = pfetch_ctxt->valid;
+}
+
+static void qdma_s80_hard_fill_qid2vec_ctxt(struct qdma_qid2vec *qid2vec_ctxt)
+{
+	qdma_s80_hard_qid2vec_ctxt_entries[0].value = qid2vec_ctxt->c2h_vector;
+	qdma_s80_hard_qid2vec_ctxt_entries[1].value = qid2vec_ctxt->c2h_en_coal;
+	qdma_s80_hard_qid2vec_ctxt_entries[2].value = qid2vec_ctxt->h2c_vector;
+	qdma_s80_hard_qid2vec_ctxt_entries[3].value = qid2vec_ctxt->h2c_en_coal;
+}
+
+static void qdma_s80_hard_fill_intr_ctxt
+		(struct qdma_indirect_intr_ctxt *intr_ctxt)
+{
+	qdma_s80_hard_ind_intr_ctxt_entries[0].value = intr_ctxt->valid;
+	qdma_s80_hard_ind_intr_ctxt_entries[1].value = intr_ctxt->vec;
+	qdma_s80_hard_ind_intr_ctxt_entries[2].value = intr_ctxt->int_st;
+	qdma_s80_hard_ind_intr_ctxt_entries[3].value = intr_ctxt->color;
+	qdma_s80_hard_ind_intr_ctxt_entries[4].value =
+			intr_ctxt->baddr_4k & 0xFFFFFFFF;
+	qdma_s80_hard_ind_intr_ctxt_entries[5].value =
+			(intr_ctxt->baddr_4k >> 32) & 0xFFFFFFFF;
+	qdma_s80_hard_ind_intr_ctxt_entries[6].value = intr_ctxt->page_size;
+	qdma_s80_hard_ind_intr_ctxt_entries[7].value = intr_ctxt->pidx;
+}
+
+/*
+ * dump_s80_hard_context() - Helper function to dump queue context into string
+ *
+ * return len - length of the string copied into buffer
+ */
+static int dump_s80_hard_context(struct qdma_descq_context *queue_context,
+		uint8_t st,	enum qdma_dev_q_type q_type,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	if (queue_context == NULL) {
+		qdma_log_error("%s: queue_context is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type >= QDMA_DEV_Q_TYPE_CMPT) {
+		qdma_log_error("%s: Invalid queue type(%d), err:%d\n",
+						__func__,
+						q_type,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_fill_sw_ctxt(&queue_context->sw_ctxt);
+	qdma_s80_hard_fill_hw_ctxt(&queue_context->hw_ctxt);
+	qdma_s80_hard_fill_credit_ctxt(&queue_context->cr_ctxt);
+	qdma_s80_hard_fill_qid2vec_ctxt(&queue_context->qid2vec);
+	if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+		qdma_s80_hard_fill_pfetch_ctxt(&queue_context->pfetch_ctxt);
+		qdma_s80_hard_fill_cmpt_ctxt(&queue_context->cmpt_ctxt);
+	}
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+			rv = QDMA_SNPRINTF_S(banner + i,
+				(DEBGFS_LINE_SZ - i),
+				sizeof("-"), "-");
+			if (rv < 0 || rv > (int)sizeof("-")) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+		}
+
+		/* SW context dump */
+		n = sizeof(qdma_s80_hard_sw_ctxt_entries) /
+				sizeof((qdma_s80_hard_sw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "SW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				qdma_s80_hard_sw_ctxt_entries[i].name,
+				qdma_s80_hard_sw_ctxt_entries[i].value,
+				qdma_s80_hard_sw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* HW context dump */
+		n = sizeof(qdma_s80_hard_hw_ctxt_entries) /
+				sizeof((qdma_s80_hard_hw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "HW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				qdma_s80_hard_hw_ctxt_entries[i].name,
+				qdma_s80_hard_hw_ctxt_entries[i].value,
+				qdma_s80_hard_hw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* Credit context dump */
+		n = sizeof(qdma_s80_hard_credit_ctxt_entries) /
+			sizeof((qdma_s80_hard_credit_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Credit Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				qdma_s80_hard_credit_ctxt_entries[i].name,
+				qdma_s80_hard_credit_ctxt_entries[i].value,
+				qdma_s80_hard_credit_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	/* QID2VEC context dump */
+	n = sizeof(qdma_s80_hard_qid2vec_ctxt_entries) /
+			sizeof((qdma_s80_hard_qid2vec_ctxt_entries)[0]);
+	for (i = 0; i < n; i++) {
+		if (len >= buf_sz ||
+			((len + DEBGFS_LINE_SZ) >= buf_sz))
+			goto INSUF_BUF_EXIT;
+
+		if (i == 0) {
+			if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+				goto INSUF_BUF_EXIT;
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%40s",
+				"QID2VEC Context");
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s\n", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len), DEBGFS_LINE_SZ,
+			"%-47s %#-10x %u\n",
+			qdma_s80_hard_qid2vec_ctxt_entries[i].name,
+			qdma_s80_hard_qid2vec_ctxt_entries[i].value,
+			qdma_s80_hard_qid2vec_ctxt_entries[i].value);
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+		len += rv;
+	}
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT ||
+			(st && q_type == QDMA_DEV_Q_TYPE_C2H)) {
+		/* Completion context dump */
+		n = sizeof(qdma_s80_hard_cmpt_ctxt_entries) /
+				sizeof((qdma_s80_hard_cmpt_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Completion Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				qdma_s80_hard_cmpt_ctxt_entries[i].name,
+				qdma_s80_hard_cmpt_ctxt_entries[i].value,
+				qdma_s80_hard_cmpt_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+		/* Prefetch context dump */
+		n = sizeof(qdma_s80_hard_c2h_pftch_ctxt_entries) /
+			sizeof(qdma_s80_hard_c2h_pftch_ctxt_entries[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Prefetch Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+						("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				qdma_s80_hard_c2h_pftch_ctxt_entries[i].name,
+				qdma_s80_hard_c2h_pftch_ctxt_entries[i].value,
+				qdma_s80_hard_c2h_pftch_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
+
+static int dump_s80_hard_intr_context(struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	qdma_s80_hard_fill_intr_ctxt(intr_ctx);
+
+	for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+		rv = QDMA_SNPRINTF_S(banner + i,
+			(DEBGFS_LINE_SZ - i),
+			sizeof("-"), "-");
+		if (rv < 0 || rv > (int)sizeof("-")) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+	}
+
+	/* Interrupt context dump */
+	n = sizeof(qdma_s80_hard_ind_intr_ctxt_entries) /
+			sizeof((qdma_s80_hard_ind_intr_ctxt_entries)[0]);
+	for (i = 0; i < n; i++) {
+		if (len >= buf_sz || ((len + DEBGFS_LINE_SZ) >= buf_sz))
+			goto INSUF_BUF_EXIT;
+
+		if (i == 0) {
+			if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+				goto INSUF_BUF_EXIT;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%50s %d",
+				"Interrupt Context for ring#", ring_index);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s\n", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len), DEBGFS_LINE_SZ,
+			"%-47s %#-10x %u\n",
+			qdma_s80_hard_ind_intr_ctxt_entries[i].name,
+			qdma_s80_hard_ind_intr_ctxt_entries[i].value,
+			qdma_s80_hard_ind_intr_ctxt_entries[i].value);
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+		len += rv;
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial intr context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
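+
+/*
+ * Illustrative caller sketch (hypothetical, not part of this patch): the
+ * dump helpers above return the number of bytes written on success and
+ * -QDMA_ERR_NO_MEM when the supplied buffer is too small, so a caller is
+ * expected to size the buffer generously and check the sign of the result.
+ *
+ *	char dump_buf[8192];
+ *	int rv;
+ *
+ *	rv = dump_s80_hard_intr_context(&intr_ctx, ring_index,
+ *			dump_buf, sizeof(dump_buf));
+ *	if (rv < 0)
+ *		return rv;
+ *
+ * On success, dump_buf holds rv bytes of formatted context text.
+ */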
+
+/*
+ * qdma_s80_hard_indirect_reg_invalidate() - helper function to invalidate
+ * indirect context registers.
+ *
+ * Return: -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_s80_hard_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_s80_hard_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_INV;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_s80_hard_indirect_reg_clear() - helper function to clear indirect
+ *				context registers.
+ *
+ * Return: -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_s80_hard_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_s80_hard_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_CLR;
+	cmd.bits.sel = sel;
+
+	qdma_reg_write(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_s80_hard_indirect_reg_read() - helper function to read indirect
+ *				context registers.
+ *
+ * Return: -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_s80_hard_indirect_reg_read(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data)
+{
+	uint32_t index = 0, reg_addr = QDMA_S80_HARD_IND_CTXT_DATA_3_ADDR;
+	union qdma_s80_hard_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_RD;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	for (index = 0; index < cnt; index++, reg_addr += sizeof(uint32_t))
+		data[index] = qdma_reg_read(dev_hndl, reg_addr);
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_s80_hard_indirect_reg_write() - helper function to write indirect
+ *				context registers.
+ *
+ * Return: -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_s80_hard_indirect_reg_write(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt)
+{
+	uint32_t index, reg_addr;
+	struct qdma_s80_hard_indirect_ctxt_regs regs;
+	uint32_t *wr_data = (uint32_t *)&regs;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* write the context data */
+	for (index = 0; index < QDMA_S80_HARD_IND_CTXT_DATA_NUM_REGS;
+			index++) {
+		if (index < cnt)
+			regs.qdma_ind_ctxt_data[index] = data[index];
+		else
+			regs.qdma_ind_ctxt_data[index] = 0;
+		regs.qdma_ind_ctxt_mask[index] = 0xFFFFFFFF;
+	}
+
+	regs.cmd.word = 0;
+	regs.cmd.bits.qid = hw_qid;
+	regs.cmd.bits.op = QDMA_CTXT_CMD_WR;
+	regs.cmd.bits.sel = sel;
+	reg_addr = QDMA_S80_HARD_IND_CTXT_DATA_3_ADDR;
+
+	for (index = 0;
+		index < ((2 * QDMA_S80_HARD_IND_CTXT_DATA_NUM_REGS) + 1);
+			index++, reg_addr += sizeof(uint32_t))
+		qdma_reg_write(dev_hndl, reg_addr, wr_data[index]);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_S80_HARD_IND_CTXT_CMD_ADDR,
+			IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed, err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
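+
+/*
+ * Note on the four indirect access helpers above: each takes the register
+ * access lock, programs the context command register (writes also load
+ * the data/mask registers first) and then polls the busy bit until the
+ * hardware has consumed the command. A minimal read sketch (hypothetical
+ * caller, for illustration only):
+ *
+ *	uint32_t ctxt_data[QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS] = {0};
+ *	int rv;
+ *
+ *	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, QDMA_CTXT_SEL_SW_C2H,
+ *			hw_qid, QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS,
+ *			ctxt_data);
+ */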
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_qid2vec_write() - create qid2vec context and program it
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_qid2vec_write(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid, struct qdma_qid2vec *ctxt)
+{
+	uint32_t qid2vec = 0;
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+	int rv = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p qid2vec=%p, err:%d\n",
+				__func__, dev_hndl, ctxt, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			1, &qid2vec);
+	if (rv < 0)
+		return rv;
+	if (c2h) {
+		qid2vec = qid2vec & (QDMA_S80_HARD_QID2VEC_H2C_VECTOR |
+					QDMA_S80_HARD_QID2VEC_H2C_COAL_EN);
+		qid2vec |= FIELD_SET(C2H_QID2VEC_MAP_C2H_VECTOR_MASK,
+				     ctxt->c2h_vector) |
+			FIELD_SET(C2H_QID2VEC_MAP_C2H_EN_COAL_MASK,
+				  ctxt->c2h_en_coal);
+	} else {
+		qid2vec = qid2vec & (C2H_QID2VEC_MAP_C2H_VECTOR_MASK |
+					C2H_QID2VEC_MAP_C2H_EN_COAL_MASK);
+		qid2vec |=
+			FIELD_SET(QDMA_S80_HARD_QID2VEC_H2C_VECTOR,
+				  ctxt->h2c_vector) |
+			FIELD_SET(QDMA_S80_HARD_QID2VEC_H2C_COAL_EN,
+				  ctxt->h2c_en_coal);
+	}
+
+	return qdma_s80_hard_indirect_reg_write(dev_hndl, sel, hw_qid,
+			&qid2vec, QDMA_S80_HARD_QID2VEC_CONTEXT_NUM_WORDS);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_qid2vec_read() - read qid2vec context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_qid2vec_read(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid, struct qdma_qid2vec *ctxt)
+{
+	int rv = 0;
+	uint32_t qid2vec[QDMA_S80_HARD_QID2VEC_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p qid2vec=%p, err:%d\n",
+				__func__, dev_hndl, ctxt, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_S80_HARD_QID2VEC_CONTEXT_NUM_WORDS, qid2vec);
+	if (rv < 0)
+		return rv;
+
+	if (c2h) {
+		ctxt->c2h_vector = FIELD_GET(C2H_QID2VEC_MAP_C2H_VECTOR_MASK,
+						qid2vec[0]);
+		ctxt->c2h_en_coal =
+			(uint8_t)(FIELD_GET(C2H_QID2VEC_MAP_C2H_EN_COAL_MASK,
+						qid2vec[0]));
+	} else {
+		ctxt->h2c_vector =
+			(uint8_t)(FIELD_GET(QDMA_S80_HARD_QID2VEC_H2C_VECTOR,
+								qid2vec[0]));
+		ctxt->h2c_en_coal =
+			(uint8_t)(FIELD_GET(QDMA_S80_HARD_QID2VEC_H2C_COAL_EN,
+								qid2vec[0]));
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_qid2vec_clear() - clear qid2vec context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_qid2vec_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_qid2vec_invalidate() - invalidate qid2vec context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_qid2vec_invalidate(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_qid2vec_conf() - configure qid2vector context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_qid2vec_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			 struct qdma_qid2vec *ctxt,
+			 enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_qid2vec_read(dev_hndl, c2h,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_qid2vec_write(dev_hndl, c2h,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_qid2vec_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_qid2vec_invalidate(dev_hndl, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
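+
+/*
+ * Example (hypothetical caller, for illustration only; dev_hndl and
+ * hw_qid are assumed to come from the caller): route completions of a
+ * C2H queue to vector 3 with interrupt coalescing enabled.
+ *
+ *	struct qdma_qid2vec ctxt = {0};
+ *	int rv;
+ *
+ *	ctxt.c2h_vector = 3;
+ *	ctxt.c2h_en_coal = 1;
+ *	rv = qdma_s80_hard_qid2vec_conf(dev_hndl, 1, hw_qid, &ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */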
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_fmap_write() - create fmap context and program it
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ * @config:	pointer to the fmap data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_fmap_write(void *dev_hndl, uint16_t func_id,
+		   const struct qdma_fmap_cfg *config)
+{
+	uint32_t fmap = 0;
+
+	if (!dev_hndl || !config) {
+		qdma_log_error("%s: dev_handle or config is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	fmap = FIELD_SET(TRQ_SEL_FMAP_0_QID_BASE_MASK, config->qbase) |
+		FIELD_SET(TRQ_SEL_FMAP_0_QID_MAX_MASK,
+				config->qmax);
+
+	qdma_reg_write(dev_hndl, QDMA_S80_HARD_TRQ_SEL_FMAP_0_ADDR +
+			func_id * QDMA_S80_HARD_REG_TRQ_SEL_FMAP_STEP,
+			fmap);
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_fmap_read() - read fmap context
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ * @config:	pointer to the output fmap data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_fmap_read(void *dev_hndl, uint16_t func_id,
+			 struct qdma_fmap_cfg *config)
+{
+	uint32_t fmap = 0;
+
+	if (!dev_hndl || !config) {
+		qdma_log_error("%s: dev_handle=%p fmap=%p NULL, err:%d\n",
+						__func__, dev_hndl, config,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	fmap = qdma_reg_read(dev_hndl, QDMA_S80_HARD_TRQ_SEL_FMAP_0_ADDR +
+			     func_id * QDMA_S80_HARD_REG_TRQ_SEL_FMAP_STEP);
+
+	config->qbase = FIELD_GET(TRQ_SEL_FMAP_0_QID_BASE_MASK, fmap);
+	config->qmax =
+		(uint16_t)(FIELD_GET(TRQ_SEL_FMAP_0_QID_MAX_MASK,
+				fmap));
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_fmap_clear() - clear fmap context
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_fmap_clear(void *dev_hndl, uint16_t func_id)
+{
+	uint32_t fmap = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_reg_write(dev_hndl, QDMA_S80_HARD_TRQ_SEL_FMAP_0_ADDR +
+			func_id * QDMA_S80_HARD_REG_TRQ_SEL_FMAP_STEP,
+			fmap);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_fmap_conf() - configure fmap context
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ * @config:	pointer to the fmap data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_INVALIDATE unsupported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_fmap_conf(void *dev_hndl, uint16_t func_id,
+				struct qdma_fmap_cfg *config,
+				enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_fmap_read(dev_hndl, func_id, config);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_fmap_write(dev_hndl, func_id, config);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_fmap_clear(dev_hndl, func_id);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
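+
+/*
+ * Example (hypothetical caller, for illustration only): map 32 queues
+ * starting at queue id 0 to a function and read the setting back.
+ *
+ *	struct qdma_fmap_cfg cfg = {0};
+ *	int rv;
+ *
+ *	cfg.qbase = 0;
+ *	cfg.qmax = 32;
+ *	rv = qdma_s80_hard_fmap_conf(dev_hndl, func_id, &cfg,
+ *			QDMA_HW_ACCESS_WRITE);
+ *	if (rv == QDMA_SUCCESS)
+ *		rv = qdma_s80_hard_fmap_conf(dev_hndl, func_id, &cfg,
+ *				QDMA_HW_ACCESS_READ);
+ */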
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_sw_context_write() - create sw context and program it
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the SW context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_sw_context_write(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 const struct qdma_descq_sw_ctxt *ctxt)
+{
+	uint32_t sw_ctxt[QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl or ctxt is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (ctxt->desc_sz > QDMA_DESC_SIZE_64B ||
+		ctxt->rngsz_idx >= QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: Invalid desc_sz(%d)/rngidx(%d), err:%d\n",
+					__func__,
+					ctxt->desc_sz,
+					ctxt->rngsz_idx,
+					-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W0_PIDX_MASK, ctxt->pidx) |
+		FIELD_SET(SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK, ctxt->irq_arm);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(SW_IND_CTXT_DATA_W1_QEN_MASK, ctxt->qen) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_FCRD_EN_MASK, ctxt->frcd_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBI_CHK_MASK, ctxt->wbi_chk) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK,
+			ctxt->wbi_intvl_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_FNC_ID_MASK, ctxt->fnc_id) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_RNG_SZ_MASK, ctxt->rngsz_idx) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_DSC_SZ_MASK, ctxt->desc_sz) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_BYPASS_MASK, ctxt->bypass) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_MM_CHN_MASK, ctxt->mm_chn) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_WBK_EN_MASK, ctxt->wbk_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_EN_MASK, ctxt->irq_en) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK,
+			ctxt->irq_no_last) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_ERR_MASK, ctxt->err) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK,
+			ctxt->err_wb_sent) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK, ctxt->irq_req) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK, ctxt->mrkr_dis) |
+		FIELD_SET(SW_IND_CTXT_DATA_W1_IS_MM_MASK, ctxt->is_mm);
+
+	sw_ctxt[num_words_count++] = ctxt->ring_bs_addr & 0xffffffff;
+	sw_ctxt[num_words_count++] = (ctxt->ring_bs_addr >> 32) & 0xffffffff;
+
+	return qdma_s80_hard_indirect_reg_write(dev_hndl, sel, hw_qid,
+			sw_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_sw_context_read() - read sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_sw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_sw_ctxt *ctxt)
+{
+	int rv = 0;
+	uint32_t sw_ctxt[QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+	struct qdma_qid2vec qid2vec_ctxt = {0};
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p sw_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_S80_HARD_SW_CONTEXT_NUM_WORDS, sw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->pidx = FIELD_GET(SW_IND_CTXT_DATA_W0_PIDX_MASK, sw_ctxt[0]);
+	ctxt->irq_arm =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK,
+			sw_ctxt[0]));
+
+	ctxt->qen = FIELD_GET(SW_IND_CTXT_DATA_W1_QEN_MASK, sw_ctxt[1]);
+	ctxt->frcd_en = FIELD_GET(SW_IND_CTXT_DATA_W1_FCRD_EN_MASK,
+			sw_ctxt[1]);
+	ctxt->wbi_chk = FIELD_GET(SW_IND_CTXT_DATA_W1_WBI_CHK_MASK,
+			sw_ctxt[1]);
+	ctxt->wbi_intvl_en =
+		FIELD_GET(SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK,
+			sw_ctxt[1]);
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_FNC_ID_MASK,
+			sw_ctxt[1]));
+	ctxt->rngsz_idx =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_RNG_SZ_MASK,
+		sw_ctxt[1]));
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_DSC_SZ_MASK,
+			sw_ctxt[1]));
+	ctxt->bypass =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_BYPASS_MASK,
+			sw_ctxt[1]));
+	ctxt->mm_chn =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_MM_CHN_MASK,
+			sw_ctxt[1]));
+	ctxt->wbk_en =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_WBK_EN_MASK,
+			sw_ctxt[1]));
+	ctxt->irq_en =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_EN_MASK,
+			sw_ctxt[1]));
+	ctxt->port_id =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_PORT_ID_MASK,
+			sw_ctxt[1]));
+	ctxt->irq_no_last =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK,
+			sw_ctxt[1]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_ERR_MASK, sw_ctxt[1]));
+	ctxt->err_wb_sent =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK,
+			sw_ctxt[1]));
+	ctxt->irq_req =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK,
+			sw_ctxt[1]));
+	ctxt->mrkr_dis =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK,
+			sw_ctxt[1]));
+	ctxt->is_mm =
+		(uint8_t)(FIELD_GET(SW_IND_CTXT_DATA_W1_IS_MM_MASK,
+			sw_ctxt[1]));
+
+	ctxt->ring_bs_addr = ((uint64_t)sw_ctxt[3] << 32) | (sw_ctxt[2]);
+
+	/* Read the QID2VEC Context Data */
+	rv = qdma_s80_hard_qid2vec_read(dev_hndl, c2h, hw_qid, &qid2vec_ctxt);
+	if (rv < 0)
+		return rv;
+
+	if (c2h) {
+		ctxt->vec = qid2vec_ctxt.c2h_vector;
+		ctxt->intr_aggr = qid2vec_ctxt.c2h_en_coal;
+	} else {
+		ctxt->vec = qid2vec_ctxt.h2c_vector;
+		ctxt->intr_aggr = qid2vec_ctxt.h2c_en_coal;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_sw_context_clear() - clear sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_sw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_sw_context_invalidate() - invalidate sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_sw_context_invalidate(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_sw_ctx_conf() - configure SW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_sw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_sw_context_read(dev_hndl, c2h, hw_qid,
+				ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_sw_context_write(dev_hndl, c2h, hw_qid,
+				ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_sw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_sw_context_invalidate(dev_hndl,
+				c2h, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+	return ret_val;
+}
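+
+/*
+ * Example (hypothetical caller, for illustration only; the field values
+ * are placeholders that a real caller derives from its queue setup):
+ * program a minimal H2C SW context.
+ *
+ *	struct qdma_descq_sw_ctxt sw_ctxt = {0};
+ *	int rv;
+ *
+ *	sw_ctxt.ring_bs_addr = ring_dma_addr;
+ *	sw_ctxt.rngsz_idx = ring_size_idx;
+ *	sw_ctxt.qen = 1;
+ *	sw_ctxt.wbk_en = 1;
+ *	sw_ctxt.fnc_id = func_id;
+ *	rv = qdma_s80_hard_sw_ctx_conf(dev_hndl, 0, hw_qid, &sw_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */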
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_pfetch_context_write() - create prefetch context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the prefetch context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_pfetch_context_write(void *dev_hndl, uint16_t hw_qid,
+		const struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	uint32_t pfetch_ctxt[QDMA_S80_HARD_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+	uint16_t num_words_count = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p pfetch_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	sw_crdt_l =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK, ctxt->sw_crdt);
+	sw_crdt_h =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, ctxt->sw_crdt);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_BYPASS_MASK, ctxt->bypass) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_BUF_SIZE_IDX_MASK,
+				ctxt->bufsz_idx) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_ERR_MASK, ctxt->err) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK, ctxt->pfch_en) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_PFCH_MASK, ctxt->pfch) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK, sw_crdt_l);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK, sw_crdt_h) |
+		FIELD_SET(PREFETCH_CTXT_DATA_W1_VALID_MASK, ctxt->valid);
+
+	return qdma_s80_hard_indirect_reg_write(dev_hndl, sel, hw_qid,
+			pfetch_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_pfetch_context_read() - read prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_pfetch_context_read(void *dev_hndl, uint16_t hw_qid,
+		struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	int rv = 0;
+	uint32_t pfetch_ctxt[QDMA_S80_HARD_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p pfetch_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_S80_HARD_PFETCH_CONTEXT_NUM_WORDS, pfetch_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->bypass =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_BYPASS_MASK,
+			pfetch_ctxt[0]));
+	ctxt->bufsz_idx =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_BUF_SIZE_IDX_MASK,
+				pfetch_ctxt[0]));
+	ctxt->port_id =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_PORT_ID_MASK,
+			pfetch_ctxt[0]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_ERR_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch_en =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W0_PFCH_MASK,
+			pfetch_ctxt[0]));
+	sw_crdt_l =
+		(uint32_t)FIELD_GET(PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK,
+			pfetch_ctxt[0]);
+
+	sw_crdt_h =
+		(uint32_t)FIELD_GET(PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK,
+			pfetch_ctxt[1]);
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(PREFETCH_CTXT_DATA_W1_VALID_MASK,
+			pfetch_ctxt[1]));
+
+	ctxt->sw_crdt =
+		(uint16_t)(FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK,
+			sw_crdt_l) |
+		FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, sw_crdt_h));
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_pfetch_context_clear() - clear prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_pfetch_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_pfetch_context_invalidate() - invalidate prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_pfetch_context_invalidate(void *dev_hndl,
+		uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_pfetch_ctx_conf() - configure prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+				struct qdma_descq_prefetch_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_pfetch_context_read(dev_hndl,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_pfetch_context_write(dev_hndl,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_pfetch_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_pfetch_context_invalidate(dev_hndl,
+				hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
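+
+/*
+ * Example (hypothetical caller, for illustration only): enable prefetch
+ * on a streaming C2H queue using buffer size index 0.
+ *
+ *	struct qdma_descq_prefetch_ctxt pfetch_ctxt = {0};
+ *	int rv;
+ *
+ *	pfetch_ctxt.bufsz_idx = 0;
+ *	pfetch_ctxt.pfch_en = 1;
+ *	pfetch_ctxt.valid = 1;
+ *	rv = qdma_s80_hard_pfetch_ctx_conf(dev_hndl, hw_qid, &pfetch_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */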
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_cmpt_context_write() - create completion context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the cmpt context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_cmpt_context_write(void *dev_hndl, uint16_t hw_qid,
+			   const struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	uint32_t cmpt_ctxt[QDMA_S80_HARD_CMPT_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	uint32_t baddr_l, baddr_h, baddr_m, pidx_l, pidx_h;
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p cmpt_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (ctxt->desc_sz > QDMA_DESC_SIZE_32B ||
+		ctxt->ringsz_idx >= QDMA_NUM_RING_SIZES ||
+		ctxt->counter_idx >= QDMA_NUM_C2H_COUNTERS ||
+		ctxt->timer_idx >= QDMA_NUM_C2H_TIMERS ||
+		ctxt->trig_mode > QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR) {
+		qdma_log_error
+		("%s Inv dsz(%d)/ridx(%d)/cntr(%d)/tmr(%d)/tm(%d), err:%d\n",
+				__func__,
+				ctxt->desc_sz,
+				ctxt->ringsz_idx,
+				ctxt->counter_idx,
+				ctxt->timer_idx,
+				ctxt->trig_mode,
+				-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr_l =
+		(uint32_t)FIELD_GET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_L_MASK,
+			ctxt->bs_addr);
+	baddr_m =
+		(uint32_t)FIELD_GET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_M_MASK,
+			ctxt->bs_addr);
+	baddr_h =
+		(uint32_t)FIELD_GET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_H_MASK,
+			ctxt->bs_addr);
+
+	pidx_l = FIELD_GET(QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_L_MASK,
+			ctxt->pidx);
+	pidx_h = FIELD_GET(QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_H_MASK,
+			ctxt->pidx);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK,
+				ctxt->en_stat_desc) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_EN_INT_MASK, ctxt->en_int) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_TRIG_MODE_MASK, ctxt->trig_mode) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_FNC_ID_MASK, ctxt->fnc_id) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_CNTER_IDX_MASK,
+				ctxt->counter_idx) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_TIMER_IDX_MASK,
+				ctxt->timer_idx) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_INT_ST_MASK,
+				ctxt->in_st) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_COLOR_MASK,
+				ctxt->color) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_QSIZE_IDX_MASK,
+				ctxt->ringsz_idx) |
+		FIELD_SET(CMPL_CTXT_DATA_W0_BADDR_64_L_MASK,
+				baddr_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W1_BADDR_64_M_MASK,
+				baddr_m);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W2_BADDR_64_H_MASK,
+				baddr_h) |
+		FIELD_SET(CMPL_CTXT_DATA_W2_DESC_SIZE_MASK,
+				ctxt->desc_sz) |
+		FIELD_SET(CMPL_CTXT_DATA_W2_PIDX_L_MASK,
+				pidx_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(CMPL_CTXT_DATA_W3_PIDX_H_MASK,
+				pidx_h) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_CIDX_MASK, ctxt->cidx) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_VALID_MASK, ctxt->valid) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_ERR_MASK, ctxt->err) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK,
+				ctxt->user_trig_pend) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_TIMER_RUNNING_MASK,
+				ctxt->timer_running) |
+		FIELD_SET(CMPL_CTXT_DATA_W3_FULL_UPD_MASK,
+				ctxt->full_upd);
+
+	return qdma_s80_hard_indirect_reg_write(dev_hndl, sel, hw_qid,
+			cmpt_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_cmpt_context_read() - read completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_cmpt_context_read(void *dev_hndl, uint16_t hw_qid,
+			   struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	int rv = 0;
+	uint32_t cmpt_ctxt[QDMA_S80_HARD_CMPT_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+	uint32_t baddr_l, baddr_h, baddr_m,
+			 pidx_l, pidx_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p cmpt_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_S80_HARD_CMPT_CONTEXT_NUM_WORDS, cmpt_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->en_stat_desc =
+		FIELD_GET(CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK, cmpt_ctxt[0]);
+	ctxt->en_int = FIELD_GET(CMPL_CTXT_DATA_W0_EN_INT_MASK,
+		cmpt_ctxt[0]);
+	ctxt->trig_mode =
+		FIELD_GET(CMPL_CTXT_DATA_W0_TRIG_MODE_MASK, cmpt_ctxt[0]);
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_FNC_ID_MASK,
+			cmpt_ctxt[0]));
+	ctxt->counter_idx =
+		(uint8_t)(FIELD_GET
+			(CMPL_CTXT_DATA_W0_CNTER_IDX_MASK,
+			cmpt_ctxt[0]));
+	ctxt->timer_idx =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_TIMER_IDX_MASK,
+				cmpt_ctxt[0]));
+	ctxt->in_st =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_INT_ST_MASK,
+			cmpt_ctxt[0]));
+	ctxt->color =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_COLOR_MASK,
+			cmpt_ctxt[0]));
+	ctxt->ringsz_idx =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W0_QSIZE_IDX_MASK,
+			cmpt_ctxt[0]));
+
+	baddr_l =
+		FIELD_GET(CMPL_CTXT_DATA_W0_BADDR_64_L_MASK,
+				cmpt_ctxt[0]);
+	baddr_m =
+		FIELD_GET(CMPL_CTXT_DATA_W1_BADDR_64_M_MASK,
+				cmpt_ctxt[1]);
+	baddr_h =
+		FIELD_GET(CMPL_CTXT_DATA_W2_BADDR_64_H_MASK,
+				cmpt_ctxt[2]);
+
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W2_DESC_SIZE_MASK,
+			cmpt_ctxt[2]));
+	pidx_l = FIELD_GET(CMPL_CTXT_DATA_W2_PIDX_L_MASK,
+			cmpt_ctxt[2]);
+
+	pidx_h = FIELD_GET(CMPL_CTXT_DATA_W3_PIDX_H_MASK,
+			cmpt_ctxt[3]);
+	ctxt->cidx =
+		(uint16_t)(FIELD_GET(CMPL_CTXT_DATA_W3_CIDX_MASK,
+			cmpt_ctxt[3]));
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_VALID_MASK,
+			cmpt_ctxt[3]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_ERR_MASK,
+			cmpt_ctxt[3]));
+	ctxt->user_trig_pend =
+		(uint8_t)(FIELD_GET
+		(CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK, cmpt_ctxt[3]));
+
+	ctxt->timer_running =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_TIMER_RUNNING_MASK,
+			cmpt_ctxt[3]));
+	ctxt->full_upd =
+		(uint8_t)(FIELD_GET(CMPL_CTXT_DATA_W3_FULL_UPD_MASK,
+			cmpt_ctxt[3]));
+
+	ctxt->bs_addr =
+		FIELD_SET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_L_MASK,
+			(uint64_t)baddr_l) |
+		FIELD_SET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_M_MASK,
+			(uint64_t)baddr_m) |
+		FIELD_SET(QDMA_S80_HARD_COMPL_CTXT_BADDR_GET_H_MASK,
+			(uint64_t)baddr_h);
+
+	ctxt->pidx =
+		(uint16_t)(FIELD_SET(QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_L_MASK,
+			pidx_l) |
+		FIELD_SET(QDMA_S80_HARD_COMPL_CTXT_PIDX_GET_H_MASK,
+			pidx_h));
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_cmpt_context_clear() - clear completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_cmpt_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_cmpt_context_invalidate() - invalidate completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_cmpt_context_invalidate(void *dev_hndl,
+		uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_cmpt_ctx_conf() - configure completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_cmpt_context_read(dev_hndl,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_cmpt_context_write(dev_hndl,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_cmpt_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_cmpt_context_invalidate(dev_hndl,
+				hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
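+
+/*
+ * Example (hypothetical caller, for illustration only; index values are
+ * placeholders): set up a completion ring with combined timer/counter
+ * triggered updates.
+ *
+ *	struct qdma_descq_cmpt_ctxt cmpt_ctxt = {0};
+ *	int rv;
+ *
+ *	cmpt_ctxt.bs_addr = cmpt_ring_dma_addr;
+ *	cmpt_ctxt.ringsz_idx = ring_size_idx;
+ *	cmpt_ctxt.trig_mode = QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR;
+ *	cmpt_ctxt.timer_idx = 0;
+ *	cmpt_ctxt.counter_idx = 0;
+ *	cmpt_ctxt.valid = 1;
+ *	rv = qdma_s80_hard_cmpt_ctx_conf(dev_hndl, hw_qid, &cmpt_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */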
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_context_read() - read hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_hw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid, struct qdma_descq_hw_ctxt *ctxt)
+{
+	int rv = 0;
+	uint32_t hw_ctxt[QDMA_S80_HARD_HW_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p hw_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+				   QDMA_S80_HARD_HW_CONTEXT_NUM_WORDS, hw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->cidx = FIELD_GET(HW_IND_CTXT_DATA_W0_CIDX_MASK, hw_ctxt[0]);
+	ctxt->crd_use =
+		(uint16_t)(FIELD_GET(HW_IND_CTXT_DATA_W0_CRD_USE_MASK,
+				hw_ctxt[0]));
+
+	ctxt->dsc_pend =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_DSC_PND_MASK,
+				hw_ctxt[1]));
+	ctxt->idl_stp_b =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_IDL_STP_B_MASK,
+			hw_ctxt[1]));
+	ctxt->fetch_pnd =
+		(uint8_t)(FIELD_GET(HW_IND_CTXT_DATA_W1_FETCH_PND_MASK,
+			hw_ctxt[1]));
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_context_clear() - clear hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_hw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_context_invalidate() - invalidate hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_hw_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_ctx_conf() - configure HW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE unsupported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_hw_context_read(dev_hndl, c2h, hw_qid,
+				ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_hw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_hw_context_invalidate(dev_hndl, c2h,
+				hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+						__func__, access_type,
+						-QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
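+
+/*
+ * Example (hypothetical caller, for illustration only): the HW context is
+ * maintained by the device, so only read/clear/invalidate are supported.
+ * A debug path could poll the consumer index like this:
+ *
+ *	struct qdma_descq_hw_ctxt hw_ctxt = {0};
+ *	int rv;
+ *
+ *	rv = qdma_s80_hard_hw_ctx_conf(dev_hndl, 1, hw_qid, &hw_ctxt,
+ *			QDMA_HW_ACCESS_READ);
+ *	if (rv == QDMA_SUCCESS)
+ *		cidx = hw_ctxt.cidx;
+ */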
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_indirect_intr_context_write() - create indirect
+ * interrupt context and program it
+ *
+ * @dev_hndl:   device handle
+ * @ring_index: indirect interrupt ring index
+ * @ctxt:	pointer to the interrupt context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_indirect_intr_context_write(void *dev_hndl,
+		uint16_t ring_index, const struct qdma_indirect_intr_ctxt *ctxt)
+{
+	uint32_t intr_ctxt[QDMA_S80_HARD_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint16_t num_words_count = 0;
+	uint32_t baddr_l, baddr_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+						-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (ctxt->page_size > QDMA_INDIRECT_INTR_RING_SIZE_32KB) {
+		qdma_log_error("%s: ctxt->page_size=%u is too big, err:%d\n",
+					   __func__, ctxt->page_size,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr_l =
+		(uint32_t)FIELD_GET(QDMA_S80_HARD_INTR_CTXT_BADDR_GET_L_MASK,
+			ctxt->baddr_4k);
+	baddr_h =
+		(uint32_t)FIELD_GET(QDMA_S80_HARD_INTR_CTXT_BADDR_GET_H_MASK,
+			ctxt->baddr_4k);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W0_VALID_MASK, ctxt->valid) |
+		FIELD_SET(INTR_CTXT_DATA_W0_VEC_MASK, ctxt->vec) |
+		FIELD_SET(INTR_CTXT_DATA_W0_INT_ST_MASK,
+				ctxt->int_st) |
+		FIELD_SET(INTR_CTXT_DATA_W0_COLOR_MASK, ctxt->color) |
+		FIELD_SET(INTR_CTXT_DATA_W0_BADDR_4K_L_MASK, baddr_l);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W1_BADDR_4K_H_MASK, baddr_h) |
+		FIELD_SET(INTR_CTXT_DATA_W1_PAGE_SIZE_MASK,
+				ctxt->page_size);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(INTR_CTXT_DATA_W2_PIDX_MASK, ctxt->pidx);
+
+	return qdma_s80_hard_indirect_reg_write(dev_hndl, sel, ring_index,
+			intr_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_indirect_intr_context_read() - read indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_indirect_intr_context_read(void *dev_hndl,
+		uint16_t ring_index, struct qdma_indirect_intr_ctxt *ctxt)
+{
+	int rv = 0;
+	uint32_t intr_ctxt[QDMA_S80_HARD_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint64_t baddr_l, baddr_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, ring_index,
+			QDMA_S80_HARD_IND_INTR_CONTEXT_NUM_WORDS, intr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->valid = FIELD_GET(INTR_CTXT_DATA_W0_VALID_MASK, intr_ctxt[0]);
+	ctxt->vec = FIELD_GET(INTR_CTXT_DATA_W0_VEC_MASK,
+			intr_ctxt[0]);
+	ctxt->int_st = FIELD_GET(INTR_CTXT_DATA_W0_INT_ST_MASK,
+			intr_ctxt[0]);
+	ctxt->color =
+		(uint8_t)(FIELD_GET(INTR_CTXT_DATA_W0_COLOR_MASK,
+			intr_ctxt[0]));
+	baddr_l = FIELD_GET(INTR_CTXT_DATA_W0_BADDR_4K_L_MASK,
+			intr_ctxt[0]);
+
+	baddr_h = FIELD_GET(INTR_CTXT_DATA_W1_BADDR_4K_H_MASK,
+			intr_ctxt[1]);
+	ctxt->page_size =
+		(uint8_t)(FIELD_GET(INTR_CTXT_DATA_W1_PAGE_SIZE_MASK,
+			intr_ctxt[1]));
+	ctxt->pidx = FIELD_GET(INTR_CTXT_DATA_W2_PIDX_MASK,
+			intr_ctxt[2]);
+
+	ctxt->baddr_4k =
+		FIELD_SET(QDMA_S80_HARD_INTR_CTXT_BADDR_GET_L_MASK, baddr_l) |
+		FIELD_SET(QDMA_S80_HARD_INTR_CTXT_BADDR_GET_H_MASK, baddr_h);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_indirect_intr_context_clear() - clear indirect
+ * interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_indirect_intr_context_clear(void *dev_hndl,
+		uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_indirect_intr_context_invalidate() - invalidate
+ * indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_indirect_intr_context_invalidate(void *dev_hndl,
+					  uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_indirect_intr_ctx_conf() - configure indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to context data
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+				struct qdma_indirect_intr_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int ret_val = 0;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		ret_val = qdma_s80_hard_indirect_intr_context_read(dev_hndl,
+							      ring_index,
+							      ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		ret_val = qdma_s80_hard_indirect_intr_context_write(dev_hndl,
+							       ring_index,
+							       ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		ret_val = qdma_s80_hard_indirect_intr_context_clear(dev_hndl,
+							   ring_index);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		ret_val = qdma_s80_hard_indirect_intr_context_invalidate
+				(dev_hndl, ring_index);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		ret_val = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return ret_val;
+}
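+
+/*
+ * Usage sketch (illustrative only): program a coalesced interrupt ring
+ * through the dispatcher above. "hndl", "ring_index" and "ring_dma_addr"
+ * are placeholders supplied by the caller.
+ *
+ *	struct qdma_indirect_intr_ctxt ictxt = { 0 };
+ *	int rv;
+ *
+ *	ictxt.valid = 1;
+ *	ictxt.vec = 2;			/* interrupt vector */
+ *	ictxt.color = 1;
+ *	ictxt.page_size = 0;		/* smallest ring size */
+ *	ictxt.baddr_4k = ring_dma_addr;	/* 4K-aligned ring base */
+ *	rv = qdma_s80_hard_indirect_intr_ctx_conf(hndl, ring_index, &ictxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */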
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_set_default_global_csr() - function to set the global
+ *  CSR registers to default values. The values can be modified later by
+ *  using the set/get csr functions
+ *
+ * @dev_hndl:	device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_set_default_global_csr(void *dev_hndl)
+{
+	/* Default values */
+	uint32_t reg_val = 0;
+	uint32_t rng_sz[QDMA_NUM_RING_SIZES] = {2049, 65, 129, 193, 257,
+				385, 513, 769, 1025, 1537, 3073, 4097, 6145,
+				8193, 12289, 16385};
+	uint32_t tmr_cnt[QDMA_NUM_C2H_TIMERS] = {1, 2, 4, 5, 8, 10, 15, 20, 25,
+				30, 50, 75, 100, 125, 150, 200};
+	uint32_t cnt_th[QDMA_NUM_C2H_COUNTERS] = {2, 4, 8, 16, 24,
+				32, 48, 64, 80, 96, 112, 128, 144,
+				160, 176, 192};
+	uint32_t buf_sz[QDMA_NUM_C2H_BUFFER_SIZES] = {4096, 256, 512, 1024,
+				2048, 3968, 4096, 4096, 4096, 4096, 4096, 4096,
+				4096, 8192, 9018, 16384};
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	/* Configuring CSR registers */
+	/* Global ring sizes */
+	qdma_write_csr_values(dev_hndl, QDMA_S80_HARD_GLBL_RNG_SZ_1_ADDR, 0,
+					QDMA_NUM_RING_SIZES, rng_sz);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		/* Counter thresholds */
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_CNT_TH_1_ADDR, 0,
+				QDMA_NUM_C2H_COUNTERS, cnt_th);
+
+		/* Timer Counters */
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_TIMER_CNT_1_ADDR, 0,
+				QDMA_NUM_C2H_TIMERS, tmr_cnt);
+
+		/* Max descriptor fetch and writeback interval */
+		reg_val =
+			FIELD_SET(GLBL_DSC_CFG_MAXFETCH_MASK,
+				  DEFAULT_MAX_DSC_FETCH) |
+				  FIELD_SET(GLBL_DSC_CFG_WB_ACC_INT_MASK,
+				  DEFAULT_WRB_INT);
+		qdma_reg_write(dev_hndl,
+				QDMA_S80_HARD_GLBL_DSC_CFG_ADDR, reg_val);
+	}
+
+	if (dev_cap.st_en) {
+		/* Buffer Sizes */
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_BUF_SZ_0_ADDR, 0,
+				QDMA_NUM_C2H_BUFFER_SIZES, buf_sz);
+
+		/* Prefetch Configuration */
+		reg_val =
+			FIELD_SET(C2H_PFCH_CFG_FL_TH_MASK,
+				DEFAULT_PFCH_STOP_THRESH) |
+				FIELD_SET(C2H_PFCH_CFG_NUM_MASK,
+				DEFAULT_PFCH_NUM_ENTRIES_PER_Q) |
+				FIELD_SET(C2H_PFCH_CFG_QCNT_MASK,
+				DEFAULT_PFCH_MAX_Q_CNT) |
+				FIELD_SET(C2H_PFCH_CFG_EVT_QCNT_TH_MASK,
+				DEFAULT_C2H_INTR_TIMER_TICK);
+		qdma_reg_write(dev_hndl,
+				QDMA_S80_HARD_C2H_PFCH_CFG_ADDR, reg_val);
+
+		/* C2H interrupt timer tick */
+		qdma_reg_write(dev_hndl, QDMA_S80_HARD_C2H_INT_TIMER_TICK_ADDR,
+						DEFAULT_C2H_INTR_TIMER_TICK);
+
+		/* C2H Completion Coalesce Configuration */
+		reg_val =
+			FIELD_SET(C2H_WRB_COAL_CFG_TICK_CNT_MASK,
+				DEFAULT_CMPT_COAL_TIMER_CNT) |
+				FIELD_SET(C2H_WRB_COAL_CFG_TICK_VAL_MASK,
+				DEFAULT_CMPT_COAL_TIMER_TICK) |
+				FIELD_SET(C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK,
+				DEFAULT_CMPT_COAL_MAX_BUF_SZ);
+		qdma_reg_write(dev_hndl,
+				QDMA_S80_HARD_C2H_WRB_COAL_CFG_ADDR, reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_queue_pidx_update() - function to update the desc PIDX
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @is_c2h:	Queue direction. Set 1 for C2H and 0 for H2C
+ * @reg_info:	data needed for the PIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_queue_pidx_update(void *dev_hndl, uint8_t is_vf, uint16_t qid,
+		uint8_t is_c2h, const struct qdma_q_pidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = 0;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!is_vf) {
+		reg_addr = (is_c2h) ?
+			QDMA_S80_HARD_OFFSET_DMAP_SEL_C2H_DSC_PIDX :
+			QDMA_S80_HARD_OFFSET_DMAP_SEL_H2C_DSC_PIDX;
+	} else {
+		reg_addr = (is_c2h) ?
+			QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_C2H_DSC_PIDX :
+			QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_H2C_DSC_PIDX;
+	}
+
+	reg_addr += (qid * QDMA_PIDX_STEP);
+
+	reg_val = FIELD_SET(QDMA_S80_HARD_DMA_SEL_DESC_PIDX_MASK,
+					reg_info->pidx) |
+			  FIELD_SET(QDMA_S80_HARD_DMA_SEL_IRQ_EN_MASK,
+					reg_info->irq_en);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
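+
+/*
+ * Usage sketch (illustrative only): after posting descriptors, advance the
+ * C2H producer index of queue "qid" on a PF. "hndl" and "new_pidx" are
+ * placeholders.
+ *
+ *	struct qdma_q_pidx_reg_info pidx_info;
+ *
+ *	pidx_info.pidx = new_pidx;
+ *	pidx_info.irq_en = 0;
+ *	qdma_s80_hard_queue_pidx_update(hndl, 0, qid, 1, &pidx_info);
+ */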
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_queue_cmpt_cidx_update() - function to update the CMPT
+ * CIDX update
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @reg_info:	data needed for the CIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_queue_cmpt_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_q_cmpt_cidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = (is_vf) ?
+		QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_CMPT_CIDX :
+		QDMA_S80_HARD_OFFSET_DMAP_SEL_CMPT_CIDX;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_addr += (qid * QDMA_CMPT_CIDX_STEP);
+
+	reg_val =
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_WRB_CIDX_MASK,
+				reg_info->wrb_cidx) |
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_CNT_THRESH_MASK,
+				reg_info->counter_idx) |
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_TMR_CNT_MASK,
+				reg_info->timer_idx) |
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_TRG_MODE_MASK,
+				reg_info->trig_mode) |
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_STS_DESC_EN_MASK,
+				reg_info->wrb_en) |
+		FIELD_SET(QDMA_S80_HARD_DMAP_SEL_CMPT_IRQ_EN_MASK,
+				reg_info->irq_en);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
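+
+/*
+ * Usage sketch (illustrative only): acknowledge processed completions on
+ * queue "qid" by writing back the CMPT consumer index. Field values other
+ * than wrb_cidx are typical choices, not mandated by this patch.
+ *
+ *	struct qdma_q_cmpt_cidx_reg_info cmpt_info = { 0 };
+ *
+ *	cmpt_info.wrb_cidx = processed_cidx;
+ *	cmpt_info.trig_mode = 1;
+ *	cmpt_info.wrb_en = 1;
+ *	qdma_s80_hard_queue_cmpt_cidx_update(hndl, 0, qid, &cmpt_info);
+ */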
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_queue_intr_cidx_update() - function to update the
+ * interrupt ring CIDX
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @reg_info:	data needed for the CIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_intr_cidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = (is_vf) ?
+		QDMA_S80_HARD_OFFSET_VF_DMAP_SEL_INT_CIDX :
+		QDMA_S80_HARD_OFFSET_DMAP_SEL_INT_CIDX;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_addr += qid * QDMA_INT_CIDX_STEP;
+
+	reg_val =
+		FIELD_SET(QDMA_S80_HARD_DMA_SEL_INT_SW_CIDX_MASK,
+			reg_info->sw_cidx) |
+		FIELD_SET(QDMA_S80_HARD_DMA_SEL_INT_RING_IDX_MASK,
+			reg_info->rng_idx);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_cmp_get_user_bar() - Function to get the
+ *			AXI Master Lite (user bar) number
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @func_id:	function id of the PF
+ * @user_bar:	pointer to hold the AXI Master Lite (user bar) number
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_cmp_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar)
+{
+	uint8_t bar_found = 0;
+	uint8_t bar_idx = 0;
+	uint32_t user_bar_id = 0;
+	uint32_t reg_addr = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!user_bar) {
+		qdma_log_error("%s: user_bar is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_addr = (is_vf) ? QDMA_S80_HARD_GLBL2_PF_VF_BARLITE_EXT_ADDR :
+			QDMA_S80_HARD_GLBL2_PF_BARLITE_EXT_ADDR;
+
+	if (!is_vf) {
+		user_bar_id = qdma_reg_read(dev_hndl, reg_addr);
+		user_bar_id = (user_bar_id >> (6 * func_id)) & 0x3F;
+	} else {
+		*user_bar = QDMA_S80_HARD_VF_USER_BAR_ID;
+		return QDMA_SUCCESS;
+	}
+
+	for (bar_idx = 0; bar_idx < QDMA_BAR_NUM; bar_idx++) {
+		if (user_bar_id & (1 << bar_idx)) {
+			*user_bar = bar_idx;
+			bar_found = 1;
+			break;
+		}
+	}
+	if (bar_found == 0) {
+		*user_bar = 0;
+		qdma_log_error("%s: Bar not found, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_BAR_NOT_FOUND);
+		return -QDMA_ERR_HWACC_BAR_NOT_FOUND;
+	}
+
+	return QDMA_SUCCESS;
+}
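+
+/*
+ * Usage sketch (illustrative only): discover which BAR exposes the AXI
+ * Master Lite (user) space for PF "func_id" before mapping it.
+ *
+ *	uint8_t user_bar = 0;
+ *
+ *	if (qdma_cmp_get_user_bar(hndl, 0, func_id, &user_bar) ==
+ *			QDMA_SUCCESS)
+ *		qdma_log_info("user bar = %d\n", user_bar);
+ */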
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_ram_sbe_err_process() - Function to dump SBE err debug info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_ram_sbe_err_process(void *dev_hndl)
+{
+	qdma_s80_hard_dump_reg_info(dev_hndl, QDMA_S80_HARD_RAM_SBE_STS_A_ADDR,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_ram_dbe_err_process() - Function to dump DBE err debug info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_ram_dbe_err_process(void *dev_hndl)
+{
+	qdma_s80_hard_dump_reg_info(dev_hndl, QDMA_S80_HARD_RAM_DBE_STS_A_ADDR,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_desc_err_process() - Function to dump Descriptor Error info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_desc_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t desc_err_reg_list[] = {
+		QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_LOG0_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_ERR_LOG1_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_DBG_DAT0_ADDR,
+		QDMA_S80_HARD_GLBL_DSC_DBG_DAT1_ADDR
+	};
+	int desc_err_num_regs = sizeof(desc_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < desc_err_num_regs; i++) {
+		qdma_s80_hard_dump_reg_info(dev_hndl,
+					desc_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_trq_err_process() - Function to dump Target Access Err info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_trq_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t trq_err_reg_list[] = {
+		QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR,
+		QDMA_S80_HARD_GLBL_TRQ_ERR_LOG_ADDR
+	};
+	int trq_err_reg_num_regs = sizeof(trq_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < trq_err_reg_num_regs; i++) {
+		qdma_s80_hard_dump_reg_info(dev_hndl, trq_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_st_h2c_err_process() - Function to dump ST H2C Error info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_st_h2c_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_h2c_err_reg_list[] = {
+		QDMA_S80_HARD_H2C_ERR_STAT_ADDR,
+		QDMA_S80_HARD_H2C_FIRST_ERR_QID_ADDR,
+		QDMA_S80_HARD_H2C_DBG_REG0_ADDR,
+		QDMA_S80_HARD_H2C_DBG_REG1_ADDR,
+		QDMA_S80_HARD_H2C_DBG_REG2_ADDR,
+		QDMA_S80_HARD_H2C_DBG_REG3_ADDR,
+		QDMA_S80_HARD_H2C_DBG_REG4_ADDR
+	};
+	int st_h2c_err_num_regs = sizeof(st_h2c_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_h2c_err_num_regs; i++) {
+		qdma_s80_hard_dump_reg_info(dev_hndl,
+					st_h2c_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_st_c2h_err_process() - Function to dump ST C2H Error info
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_s80_hard_hw_st_c2h_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_c2h_err_reg_list[] = {
+		QDMA_S80_HARD_C2H_ERR_STAT_ADDR,
+		QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR,
+		QDMA_S80_HARD_C2H_FIRST_ERR_QID_ADDR,
+		QDMA_S80_HARD_C2H_STAT_S_AXIS_C2H_ACCEPTED_ADDR,
+		QDMA_S80_HARD_C2H_STAT_S_AXIS_WRB_ACCEPTED_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DESC_RSP_PKT_ACCEPTED_ADDR,
+		QDMA_S80_HARD_C2H_STAT_AXIS_PKG_CMP_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_0_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_1_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_2_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_3_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DESC_RSP_DROP_ACCEPTED_ADDR,
+		QDMA_S80_HARD_C2H_STAT_DESC_RSP_ERR_ACCEPTED_ADDR
+	};
+	int st_c2h_err_num_regs = sizeof(st_c2h_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_c2h_err_num_regs; i++) {
+		qdma_s80_hard_dump_reg_info(dev_hndl,
+					st_c2h_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_get_error_name() - Function to get the error name in str format
+ *
+ * @err_idx: error index
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+const char *qdma_s80_hard_hw_get_error_name(uint32_t err_idx)
+{
+	if (err_idx >= QDMA_S80_HARD_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, returning NULL\n",
+			__func__,
+			(enum qdma_s80_hard_error_idx)err_idx);
+		return NULL;
+	}
+
+	return qdma_s80_hard_err_info
+			[(enum qdma_s80_hard_error_idx)err_idx].err_name;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_error_process() - Function to find the error that got
+ * triggered and call the handler qdma_hw_error_handler of that
+ * particular error.
+ *
+ * @dev_hndl: device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_hw_error_process(void *dev_hndl)
+{
+	uint32_t glbl_err_stat = 0, err_stat = 0;
+	uint32_t i = 0, j = 0;
+	int32_t idx = 0;
+	struct qdma_dev_attributes dev_cap;
+	uint32_t hw_err_position[QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS] = {
+		QDMA_S80_HARD_DSC_ERR_POISON,
+		QDMA_S80_HARD_TRQ_ERR_UNMAPPED,
+		QDMA_S80_HARD_ST_C2H_ERR_MTY_MISMATCH,
+		QDMA_S80_HARD_ST_FATAL_ERR_MTY_MISMATCH,
+		QDMA_S80_HARD_ST_H2C_ERR_ZERO_LEN_DESC_ERR,
+		QDMA_S80_HARD_SBE_ERR_MI_H2C0_DAT,
+		QDMA_S80_HARD_DBE_ERR_MI_H2C0_DAT
+	};
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	glbl_err_stat = qdma_reg_read(dev_hndl,
+			QDMA_S80_HARD_GLBL_ERR_STAT_ADDR);
+	if (!glbl_err_stat)
+		return QDMA_HW_ERR_NOT_DETECTED;
+
+	qdma_log_info("%s: Global Err Reg(0x%x) = 0x%x\n",
+				  __func__, QDMA_S80_HARD_GLBL_ERR_STAT_ADDR,
+				  glbl_err_stat);
+
+	for (i = 0; i < QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+		j = hw_err_position[i];
+
+		if (!dev_cap.st_en &&
+			(j == QDMA_S80_HARD_ST_C2H_ERR_MTY_MISMATCH ||
+			j == QDMA_S80_HARD_ST_FATAL_ERR_MTY_MISMATCH ||
+			j == QDMA_S80_HARD_ST_H2C_ERR_ZERO_LEN_DESC_ERR))
+			continue;
+
+		err_stat = qdma_reg_read(dev_hndl,
+				qdma_s80_hard_err_info[j].stat_reg_addr);
+		if (err_stat) {
+			qdma_log_info("addr = 0x%08x val = 0x%08x",
+				qdma_s80_hard_err_info[j].stat_reg_addr,
+				err_stat);
+
+			qdma_s80_hard_err_info[j].qdma_s80_hard_hw_err_process
+				(dev_hndl);
+			for (idx = j;
+				idx < all_qdma_s80_hard_hw_errs[i];
+				idx++) {
+				/* call the platform specific handler */
+				if (err_stat &
+				qdma_s80_hard_err_info[idx].leaf_err_mask)
+					qdma_log_error("%s detected %s\n",
+						__func__,
+					qdma_s80_hard_hw_get_error_name(idx));
+			}
+			qdma_reg_write(dev_hndl,
+				qdma_s80_hard_err_info[j].stat_reg_addr,
+				err_stat);
+		}
+	}
+
+	/* Write 1 to the global status register to clear the bits */
+	qdma_reg_write(dev_hndl,
+			QDMA_S80_HARD_GLBL_ERR_STAT_ADDR, glbl_err_stat);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_hw_error_enable() - Function to enable all or a specific error
+ *
+ * @dev_hndl: device handle
+ * @err_idx: error index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_hw_error_enable(void *dev_hndl, uint32_t err_idx)
+{
+	uint32_t idx = 0, i = 0;
+	uint32_t reg_val = 0;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (err_idx > QDMA_S80_HARD_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, err:%d\n",
+				__func__, (enum qdma_s80_hard_error_idx)err_idx,
+				-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (err_idx == QDMA_S80_HARD_ERRS_ALL) {
+		for (i = 0;
+				i < QDMA_S80_HARD_TOTAL_LEAF_ERROR_AGGREGATORS;
+				i++) {
+			idx = all_qdma_s80_hard_hw_errs[i];
+
+			/* Don't access streaming registers in
+			 * MM only bitstreams
+			 */
+			if (!dev_cap.st_en) {
+				if (idx == QDMA_S80_HARD_ST_C2H_ERR_ALL ||
+					idx == QDMA_S80_HARD_ST_FATAL_ERR_ALL ||
+					idx == QDMA_S80_HARD_ST_H2C_ERR_ALL)
+					continue;
+			}
+
+			reg_val = qdma_s80_hard_err_info[idx].leaf_err_mask;
+			qdma_reg_write(dev_hndl,
+				qdma_s80_hard_err_info[idx].mask_reg_addr,
+					reg_val);
+
+			reg_val = qdma_reg_read(dev_hndl,
+					QDMA_S80_HARD_GLBL_ERR_MASK_ADDR);
+			reg_val |= FIELD_SET
+				(qdma_s80_hard_err_info[idx].global_err_mask, 1);
+			qdma_reg_write(dev_hndl,
+					QDMA_S80_HARD_GLBL_ERR_MASK_ADDR,
+					reg_val);
+		}
+	} else {
+		/* Don't access streaming registers in MM only bitstreams
+		 *  QDMA_C2H_ERR_MTY_MISMATCH to QDMA_H2C_ERR_ALL are all
+		 *  ST errors
+		 */
+		if (!dev_cap.st_en) {
+			if (err_idx >= QDMA_S80_HARD_ST_C2H_ERR_MTY_MISMATCH &&
+					err_idx <= QDMA_S80_HARD_ST_H2C_ERR_ALL)
+				return QDMA_SUCCESS;
+		}
+
+		reg_val = qdma_reg_read(dev_hndl,
+				qdma_s80_hard_err_info[err_idx].mask_reg_addr);
+		reg_val |=
+			FIELD_SET(qdma_s80_hard_err_info[err_idx].leaf_err_mask,
+				1);
+		qdma_reg_write(dev_hndl,
+				qdma_s80_hard_err_info[err_idx].mask_reg_addr,
+						reg_val);
+
+		reg_val = qdma_reg_read(dev_hndl,
+			QDMA_S80_HARD_GLBL_ERR_MASK_ADDR);
+		reg_val |= FIELD_SET
+			(qdma_s80_hard_err_info[err_idx].global_err_mask, 1);
+		qdma_reg_write(dev_hndl,
+			QDMA_S80_HARD_GLBL_ERR_MASK_ADDR, reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
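+
+/*
+ * Usage sketch (illustrative only): enable every error aggregator once at
+ * init time, then poll/handle from the error interrupt path.
+ *
+ *	qdma_s80_hard_hw_error_enable(hndl, QDMA_S80_HARD_ERRS_ALL);
+ *	...
+ *	if (qdma_s80_hard_hw_error_process(hndl) == QDMA_HW_ERR_NOT_DETECTED)
+ *		return;	/* nothing pending */
+ */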
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_get_device_attributes() - Function to get the qdma
+ * device attributes
+ *
+ * @dev_hndl:	device handle
+ * @dev_info:	pointer to hold the device info
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info)
+{
+	uint8_t count = 0;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!dev_info) {
+		qdma_log_error("%s: dev_info is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	/* number of PFs */
+	reg_val = qdma_reg_read(dev_hndl,
+		QDMA_S80_HARD_GLBL2_PF_BARLITE_INT_ADDR);
+	if (FIELD_GET(GLBL2_PF_BARLITE_INT_PF0_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(GLBL2_PF_BARLITE_INT_PF1_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(GLBL2_PF_BARLITE_INT_PF2_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(GLBL2_PF_BARLITE_INT_PF3_BAR_MAP_MASK, reg_val))
+		count++;
+	dev_info->num_pfs = count;
+
+	/* Number of Qs */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_S80_HARD_GLBL2_CHANNEL_CAP_ADDR);
+	dev_info->num_qs = (FIELD_GET(GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK,
+			reg_val));
+
+	/* FLR present */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_S80_HARD_GLBL2_MISC_CAP_ADDR);
+	dev_info->mailbox_en  = FIELD_GET(QDMA_GLBL2_MAILBOX_EN_MASK, reg_val);
+	dev_info->flr_present = FIELD_GET(QDMA_GLBL2_FLR_PRESENT_MASK, reg_val);
+	dev_info->mm_cmpt_en  = 0;
+
+	/* ST/MM enabled? */
+	reg_val = qdma_reg_read(dev_hndl,
+		QDMA_S80_HARD_GLBL2_CHANNEL_MDMA_ADDR);
+	dev_info->mm_en = (FIELD_GET(GLBL2_CHANNEL_MDMA_C2H_ENG_MASK, reg_val) &&
+		FIELD_GET(GLBL2_CHANNEL_MDMA_H2C_ENG_MASK, reg_val)) ? 1 : 0;
+	dev_info->st_en = (FIELD_GET(GLBL2_CHANNEL_MDMA_C2H_ST_MASK, reg_val) &&
+		FIELD_GET(GLBL2_CHANNEL_MDMA_H2C_ST_MASK, reg_val)) ? 1 : 0;
+
+	/* num of mm channels for Versal Hard is 2 */
+	dev_info->mm_channel_max = 2;
+
+	dev_info->debug_mode = 0;
+	dev_info->desc_eng_mode = 0;
+	dev_info->qid2vec_ctx = 1;
+	dev_info->cmpt_ovf_chk_dis = 0;
+	dev_info->mailbox_intr = 0;
+	dev_info->sw_desc_64b = 0;
+	dev_info->cmpt_desc_64b = 0;
+	dev_info->dynamic_bar = 0;
+	dev_info->legacy_intr = 0;
+	dev_info->cmpt_trig_count_timer = 0;
+
+	return QDMA_SUCCESS;
+}
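+
+/*
+ * Usage sketch (illustrative only): gate streaming-specific setup on the
+ * capabilities reported by the device.
+ *
+ *	struct qdma_dev_attributes attr;
+ *
+ *	qdma_s80_hard_get_device_attributes(hndl, &attr);
+ *	if (!attr.st_en)
+ *		return;	/* MM-only bitstream: skip ST queue setup */
+ */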
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_credit_context_read() - read credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h     :	is c2h queue
+ * @hw_qid  :	hardware qid of the queue
+ * @ctxt    :	pointer to the context data
+ *
+ * Return   :	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_credit_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_credit_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t cr_ctxt[QDMA_S80_HARD_CR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p credit_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_S80_HARD_CR_CONTEXT_NUM_WORDS, cr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->credit = FIELD_GET(CRED_CTXT_DATA_W0_CREDT_MASK,
+			cr_ctxt[0]);
+
+	qdma_log_debug("%s: credit=%u\n", __func__, ctxt->credit);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_credit_context_clear() - clear credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h     :	is c2h queue
+ * @hw_qid  :	hardware qid of the queue
+ *
+ * Return   :	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_credit_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_credit_context_invalidate() - invalidate credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h     :	is c2h queue
+ * @hw_qid  :	hardware qid of the queue
+ *
+ * Return   :	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_credit_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_s80_hard_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_credit_ctx_conf() - configure credit context
+ *
+ * @dev_hndl    :	device handle
+ * @c2h         :	is c2h queue
+ * @hw_qid      :	hardware qid of the queue
+ * @ctxt        :	pointer to the context data
+ * @access_type :	HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE Not supported
+ *
+ * Return       :	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_credit_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			struct qdma_descq_credit_ctxt *ctxt,
+			enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_s80_hard_credit_context_read(dev_hndl, c2h,
+				hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_s80_hard_credit_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_s80_hard_credit_context_invalidate(dev_hndl, c2h,
+				hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: Invalid access type=%d, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
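+
+/*
+ * Usage sketch (illustrative only): read the current credits of an H2C
+ * queue through the dispatcher above.
+ *
+ *	struct qdma_descq_credit_ctxt cr;
+ *
+ *	if (qdma_s80_hard_credit_ctx_conf(hndl, 0, hw_qid, &cr,
+ *			QDMA_HW_ACCESS_READ) == QDMA_SUCCESS)
+ *		qdma_log_debug("credits = %u\n", cr.credit);
+ */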
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_dump_config_regs() - Function to get qdma config register
+ * dump in a buffer
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @buf :	pointer to buffer to be filled
+ * @buflen :	Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled, < 0 on failure
+ *****************************************************************************/
+int qdma_s80_hard_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen)
+{
+	uint32_t i = 0, j = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs = qdma_s80_hard_config_num_regs_get();
+	uint32_t len = 0, val = 0;
+	int rv = QDMA_SUCCESS;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (buflen < qdma_s80_hard_reg_dump_buf_len()) {
+		qdma_log_error("%s: Buffer too small, err:%d\n", __func__,
+					   -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	if (is_vf) {
+		qdma_log_error("%s: Wrong API used for VF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	reg_info = qdma_s80_hard_config_regs_get();
+
+	for (i = 0; i < num_regs; i++) {
+		int mask = get_capability_mask(dev_cap.mm_en, dev_cap.st_en,
+				dev_cap.mm_cmpt_en, dev_cap.mailbox_en);
+
+		if ((mask & reg_info[i].mode) == 0)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s_%d", reg_info[i].name, j);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+
+			val = qdma_reg_read(dev_hndl,
+					(reg_info[i].addr + (j * 4)));
+			rv = dump_reg(buf + len, buflen - len,
+					(reg_info[i].addr + (j * 4)),
+					name, val);
+			if (rv < 0) {
+				qdma_log_error
+				("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+}
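+
+/*
+ * Usage sketch (illustrative only): "buf" is assumed to be a caller-owned
+ * buffer of at least qdma_s80_hard_reg_dump_buf_len() bytes.
+ *
+ *	int n = qdma_s80_hard_dump_config_regs(hndl, 0, buf, buflen);
+ *
+ *	if (n > 0)
+ *		qdma_log_info("%s", buf);
+ */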
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_dump_queue_context() - Function to get qdma queue context dump
+ * in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @st:         Queue Mode (ST or MM)
+ * @q_type:     Queue type (H2C/C2H/CMPT)
+ * @ctxt_data:  pointer to the queue context data to be dumped
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled, < 0 on failure
+ *****************************************************************************/
+int qdma_s80_hard_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!ctxt_data) {
+		qdma_log_error("%s: ctxt_data is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (q_type >= QDMA_DEV_Q_TYPE_MAX) {
+		qdma_log_error("%s: invalid q_type, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_context_buf_len(st, q_type, &req_buflen);
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_s80_hard_context(ctxt_data, st, q_type,
+				buf, buflen);
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_dump_intr_context() - Function to get qdma interrupt
+ * context dump in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @intr_ctx:   pointer to the interrupt context data to be dumped
+ * @ring_index: indirect interrupt ring index
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled, < 0 on failure
+ *****************************************************************************/
+int qdma_s80_hard_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!intr_ctx) {
+		qdma_log_error("%s: intr_ctx is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	req_buflen = qdma_s80_hard_intr_context_buf_len();
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_s80_hard_intr_context(intr_ctx, ring_index, buf, buflen);
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_dump_queue_context() - Function to read and dump the queue
+ * context in a buffer
+ *
+ * @dev_hndl:   device handle
+ * @qid_hw:     queue id
+ * @st:         Queue Mode (ST or MM)
+ * @q_type:     Queue type (H2C/C2H)
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled, < 0 on failure
+ *****************************************************************************/
+int qdma_s80_hard_read_dump_queue_context(void *dev_hndl,
+		uint16_t qid_hw,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		char *buf, uint32_t buflen)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t req_buflen = 0;
+	struct qdma_descq_context context;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type >= QDMA_DEV_Q_TYPE_CMPT) {
+		qdma_log_error("%s: Not supported for q_type, err = %d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_s80_hard_context_buf_len(st, q_type, &req_buflen);
+
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	qdma_memset(&context, 0, sizeof(struct qdma_descq_context));
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		rv = qdma_s80_hard_sw_ctx_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw, &context.sw_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read sw context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = qdma_s80_hard_hw_ctx_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw, &context.hw_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read hw context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = qdma_s80_hard_qid2vec_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw, &context.qid2vec,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read qid2vec context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = qdma_s80_hard_credit_ctx_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw, &context.cr_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read credit context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			rv = qdma_s80_hard_pfetch_ctx_conf(dev_hndl,
+					qid_hw,
+					&context.pfetch_ctxt,
+					QDMA_HW_ACCESS_READ);
+			if (rv < 0) {
+				qdma_log_error
+			("%s: Failed to read pftech context, err = %d",
+						__func__, rv);
+				return rv;
+			}
+		}
+	}
+
+	if ((st && q_type == QDMA_DEV_Q_TYPE_C2H) ||
+			(!st && q_type == QDMA_DEV_Q_TYPE_CMPT)) {
+		rv = qdma_s80_hard_cmpt_ctx_conf(dev_hndl, qid_hw,
+						&context.cmpt_ctxt,
+						 QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error
+			("%s: Failed to read cmpt context, err = %d",
+					__func__, rv);
+			return rv;
+		}
+	}
+
+	rv = dump_s80_hard_context(&context, st, q_type,
+				buf, buflen);
+
+	return rv;
+}
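+
+/*
+ * Usage sketch (illustrative only): dump the full context of an ST C2H
+ * queue into "buf", a caller-owned buffer sized per
+ * qdma_s80_hard_context_buf_len().
+ *
+ *	int n = qdma_s80_hard_read_dump_queue_context(hndl, qid_hw, 1,
+ *			QDMA_DEV_Q_TYPE_C2H, buf, buflen);
+ */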
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_init_ctxt_memory() - Initialize the context for all queues
+ *
+ * @dev_hndl    :	device handle
+ *
+ * Return       :	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_init_ctxt_memory(void *dev_hndl)
+{
+#ifdef ENABLE_INIT_CTXT_MEMORY
+	uint32_t data[QDMA_REG_IND_CTXT_REG_COUNT];
+	uint16_t i = 0;
+	struct qdma_dev_attributes dev_info;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_memset(data, 0, sizeof(uint32_t) * QDMA_REG_IND_CTXT_REG_COUNT);
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_info);
+	qdma_log_info("%s: clearing the context for all qs",
+			__func__);
+	for (; i < dev_info.num_qs; i++) {
+		int sel = QDMA_CTXT_SEL_SW_C2H;
+		int rv;
+
+		for (; sel <= QDMA_CTXT_SEL_PFTCH; sel++) {
+			/** if st mode (h2c/c2h) is not enabled
+			 *  in the design, then skip the PFTCH
+			 *  and CMPT context setup
+			 */
+			if (dev_info.st_en == 0 &&
+			    (sel == QDMA_CTXT_SEL_PFTCH ||
+				sel == QDMA_CTXT_SEL_CMPT)) {
+				qdma_log_debug("%s: ST context is skipped:",
+					__func__);
+				qdma_log_debug(" sel = %d", sel);
+				continue;
+			}
+
+			rv = qdma_s80_hard_indirect_reg_clear(dev_hndl,
+					(enum ind_ctxt_cmd_sel)sel, i);
+			if (rv < 0)
+				return rv;
+		}
+	}
+
+	/* fmap */
+	for (i = 0; i < dev_info.num_pfs; i++)
+		qdma_s80_hard_fmap_clear(dev_hndl, i);
+#else
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+#endif
+	return 0;
+}
+
+static int get_reg_entry(uint32_t reg_addr, int *reg_entry)
+{
+	uint32_t i = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs = qdma_s80_hard_config_num_regs_get();
+
+	reg_info = qdma_s80_hard_config_regs_get();
+
+	for (i = 0; (i < num_regs - 1); i++) {
+		if (reg_info[i].addr == reg_addr) {
+			*reg_entry = i;
+			break;
+		}
+	}
+
+	if (i >= num_regs - 1) {
+		qdma_log_error("%s: 0x%08x is missing register list, err:%d\n",
+					__func__,
+					reg_addr,
+					-QDMA_ERR_INV_PARAM);
+		*reg_entry = -1;
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_dump_config_reg_list() - Dump the registers
+ *
+ * @dev_hndl:		device handle
+ * @total_regs :	Max registers to read
+ * @reg_list :		array of reg addr and reg values
+ * @buf :		pointer to buffer to be filled
+ * @buflen :		Length of the buffer
+ *
+ * Return: Length up to which the buffer is filled, < 0 on failure
+ *****************************************************************************/
+int qdma_s80_hard_dump_config_reg_list(void *dev_hndl, uint32_t total_regs,
+		struct qdma_reg_data *reg_list, char *buf, uint32_t buflen)
+{
+	uint32_t j = 0, len = 0;
+	uint32_t reg_count = 0;
+	int reg_data_entry;
+	int rv = 0;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct xreg_info *reg_info = qdma_s80_hard_config_regs_get();
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	for (reg_count = 0; reg_count < total_regs;) {
+		rv = get_reg_entry(reg_list[reg_count].reg_addr,
+					&reg_data_entry);
+		if (rv < 0) {
+			qdma_log_error("%s: register missing in list, err:%d\n",
+						   __func__,
+						   -QDMA_ERR_INV_PARAM);
+			return rv;
+		}
+
+		for (j = 0; j < reg_info[reg_data_entry].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s_%d",
+					reg_info[reg_data_entry].name, j);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error
+					("%d:%s snprintf failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+			rv = dump_reg(buf + len, buflen - len,
+				(reg_info[reg_data_entry].addr + (j * 4)),
+					name,
+					reg_list[reg_count + j].reg_val);
+			if (rv < 0) {
+				qdma_log_error
+				("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+		reg_count += j;
+	}
+
+	return len;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_reg_list() - read the register values
+ *
+ * @dev_hndl:		device handle
+ * @is_vf:		Whether PF or VF (supported only for VF)
+ * @reg_rd_slot :	register group to be read
+ * @total_regs :	pointer to hold the number of registers read
+ * @reg_list :		array of reg addr and reg values
+ *
+ * Return: returns the platform specific error code
+ *****************************************************************************/
+int qdma_s80_hard_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_slot,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list)
+{
+	uint16_t reg_count = 0, i = 0, j = 0;
+	uint32_t num_regs = qdma_s80_hard_config_num_regs_get();
+	struct xreg_info *reg_info = qdma_s80_hard_config_regs_get();
+	struct qdma_dev_attributes dev_cap;
+	uint32_t reg_start_addr = 0;
+	int reg_index = 0;
+	int rv = 0;
+
+	if (!is_vf) {
+		qdma_log_error("%s: not supported for PF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_list) {
+		qdma_log_error("%s: reg_list is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	switch (reg_rd_slot) {
+	case QDMA_REG_READ_GROUP_1:
+			reg_start_addr = QDMA_S80_REG_GROUP_1_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_2:
+			reg_start_addr = QDMA_S80_REG_GROUP_2_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_3:
+			reg_start_addr = QDMA_S80_REG_GROUP_3_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_4:
+			reg_start_addr = QDMA_S80_REG_GROUP_4_START_ADDR;
+			break;
+	default:
+		qdma_log_error("%s: Invalid slot received\n",
+			   __func__);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = get_reg_entry(reg_start_addr, &reg_index);
+	if (rv < 0) {
+		qdma_log_error("%s: register missing in list, err:%d\n",
+					   __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return rv;
+	}
+
+	for (i = 0, reg_count = 0;
+			((i < num_regs - 1 - reg_index) &&
+			(reg_count < QDMA_MAX_REGISTER_DUMP)); i++) {
+		int mask = get_capability_mask(dev_cap.mm_en, dev_cap.st_en,
+				dev_cap.mm_cmpt_en, dev_cap.mailbox_en);
+
+		if (((mask & reg_info[i].mode) == 0) ||
+			reg_info[i].read_type == QDMA_REG_READ_PF_ONLY)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat &&
+				(reg_count < QDMA_MAX_REGISTER_DUMP);
+				j++) {
+			reg_list[reg_count].reg_addr =
+					(reg_info[i].addr + (j * 4));
+			reg_list[reg_count].reg_val =
+				qdma_reg_read(dev_hndl,
+					reg_list[reg_count].reg_addr);
+			reg_count++;
+		}
+	}
+
+	*total_regs = reg_count;
+	return rv;
+}
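+
+/*
+ * Usage sketch (illustrative only): from a VF, snapshot the first register
+ * group. The list is bounded by QDMA_MAX_REGISTER_DUMP entries.
+ *
+ *	struct qdma_reg_data regs[QDMA_MAX_REGISTER_DUMP];
+ *	uint16_t n = 0;
+ *	int rv;
+ *
+ *	rv = qdma_s80_hard_read_reg_list(hndl, 1, QDMA_REG_READ_GROUP_1,
+ *			&n, regs);
+ */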
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_write_global_ring_sizes() - set the global ring size array
+ *
+ * @dev_hndl:   device handle
+ * @index: Index from where the values need to be written
+ * @count: number of entries to be written
+ * @glbl_rng_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_write_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_write_csr_values(dev_hndl,
+			QDMA_S80_HARD_GLBL_RNG_SZ_1_ADDR, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_global_ring_sizes() - function to get the
+ *	global rng_sz array
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_rng_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_read_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_read_csr_values(dev_hndl,
+			QDMA_S80_HARD_GLBL_RNG_SZ_1_ADDR, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_write_global_timer_count() - function to set the timer values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be written
+ * @count:	 number of entries to be written
+ * @glbl_tmr_cnt: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_write_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_TIMER_CNT_1_ADDR,
+				index, count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_global_timer_count() - function to get the timer values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_tmr_cnt: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_read_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_TIMER_CNT_1_ADDR, index,
+				count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_write_global_counter_threshold() - function to set the counter
+ *						threshold values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be written
+ * @count:	 number of entries to be written
+ * @glbl_cnt_th: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_write_global_counter_threshold(void *dev_hndl,
+		uint8_t index,
+		uint8_t count, const uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_CNT_TH_1_ADDR, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_global_counter_threshold() - get the counter
+ *	threshold values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_cnt_th: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_read_global_counter_threshold(void *dev_hndl,
+		uint8_t index,
+		uint8_t count, uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_CNT_TH_1_ADDR, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_write_global_buffer_sizes() - function to set the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be written
+ * @count:	 number of entries to be written
+ * @glbl_buf_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_write_global_buffer_sizes(void *dev_hndl,
+		uint8_t index,
+		uint8_t count, const uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_write_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_BUF_SZ_0_ADDR, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_read_global_buffer_sizes() - function to get the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from where the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_buf_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_read_global_buffer_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_read_csr_values(dev_hndl,
+				QDMA_S80_HARD_C2H_BUF_SZ_0_ADDR, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST is not supported, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_global_csr_conf() - function to configure global csr
+ *
+ * @dev_hndl:	device handle
+ * @index:	Index from where the values need to be read
+ * @count:	number of entries to be read
+ * @csr_val:	uint32_t pointer to csr value
+ * @csr_type:	Type of the CSR (qdma_global_csr_type enum) to configure
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_global_csr_conf(void *dev_hndl, uint8_t index, uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (csr_type) {
+	case QDMA_CSR_RING_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = qdma_s80_hard_read_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = qdma_s80_hard_write_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_TIMER_CNT:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = qdma_s80_hard_read_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = qdma_s80_hard_write_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_CNT_TH:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			qdma_s80_hard_read_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			qdma_s80_hard_write_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_BUF_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			qdma_s80_hard_read_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			qdma_s80_hard_write_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+						   -QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	default:
+		qdma_log_error("%s: csr_type(%d) invalid, err:%d\n",
+						__func__,
+						csr_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
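+
+/*
+ * Usage sketch (illustrative only): reading the global ring size CSRs
+ * through the dispatcher above. The local array name is hypothetical;
+ * QDMA_NUM_RING_SIZES is assumed to be the ring size CSR count (16)
+ * from qdma_access_common.h.
+ *
+ *	uint32_t ring_sz[QDMA_NUM_RING_SIZES];
+ *	int rv = qdma_s80_hard_global_csr_conf(dev_hndl, 0,
+ *			QDMA_NUM_RING_SIZES, ring_sz,
+ *			QDMA_CSR_RING_SZ, QDMA_HW_ACCESS_READ);
+ *	if (rv < 0)
+ *		return rv;
+ */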
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_global_writeback_interval_write() -  function to set the
+ * writeback interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_global_writeback_interval_write(void *dev_hndl,
+		enum qdma_wrb_interval wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (wb_int >= QDMA_NUM_WRB_INTERVALS) {
+		qdma_log_error("%s: wb_int=%d is invalid, err:%d\n",
+					   __func__, wb_int,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl,
+				QDMA_S80_HARD_GLBL_DSC_CFG_ADDR);
+		/* clear the field first so the OR below can also
+		 * program a smaller interval value
+		 */
+		reg_val &= ~GLBL_DSC_CFG_WB_ACC_INT_MASK;
+		reg_val |= FIELD_SET(GLBL_DSC_CFG_WB_ACC_INT_MASK, wb_int);
+
+		qdma_reg_write(dev_hndl,
+				QDMA_S80_HARD_GLBL_DSC_CFG_ADDR, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_global_writeback_interval_read() -  function to get the
+ * writeback interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_s80_hard_global_writeback_interval_read(void *dev_hndl,
+		enum qdma_wrb_interval *wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!wb_int) {
+		qdma_log_error("%s: wb_int is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl,
+				QDMA_S80_HARD_GLBL_DSC_CFG_ADDR);
+		*wb_int = (enum qdma_wrb_interval)FIELD_GET
+				(GLBL_DSC_CFG_WB_ACC_INT_MASK, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_global_writeback_interval_conf() - function to configure
+ *					the writeback interval
+ *
+ * @dev_hndl:   device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ * @access_type HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv =
+		qdma_s80_hard_global_writeback_interval_read(dev_hndl, wb_int);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv =
+		qdma_s80_hard_global_writeback_interval_write(dev_hndl,
+								*wb_int);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+	case QDMA_HW_ACCESS_INVALIDATE:
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
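+
+/*
+ * Usage sketch (illustrative only): reading back the currently
+ * programmed writeback interval via the wrapper above.
+ *
+ *	enum qdma_wrb_interval wb_int;
+ *	int rv = qdma_s80_hard_global_writeback_interval_conf(dev_hndl,
+ *			&wb_int, QDMA_HW_ACCESS_READ);
+ *	if (rv != QDMA_SUCCESS)
+ *		return rv;
+ */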
+
+
+/*****************************************************************************/
+/**
+ * qdma_s80_hard_mm_channel_conf() - Function to enable/disable the MM channel
+ *
+ * @dev_hndl:	device handle
+ * @channel:	MM channel number
+ * @is_c2h:	Queue direction. Set 1 for C2H and 0 for H2C
+ * @enable:	Enable or disable MM channel
+ *
+ * Presently, only one MM channel is supported
+ *
+ * Return:   0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_s80_hard_mm_channel_conf(void *dev_hndl, uint8_t channel,
+				uint8_t is_c2h,
+				uint8_t enable)
+{
+	uint32_t reg_addr = (is_c2h) ? QDMA_S80_HARD_C2H_CHANNEL_CTL_ADDR :
+			QDMA_S80_HARD_H2C_CHANNEL_CTL_ADDR;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_s80_hard_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.mm_en) {
+		qdma_reg_write(dev_hndl,
+				reg_addr + (channel * QDMA_MM_CONTROL_STEP),
+				enable);
+	}
+
+	return QDMA_SUCCESS;
+}
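+
+/*
+ * Usage sketch (illustrative only): enabling the single MM channel
+ * (channel = 0) in the C2H direction (is_c2h = 1, enable = 1).
+ *
+ *	int rv = qdma_s80_hard_mm_channel_conf(dev_hndl, 0, 1, 1);
+ */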
+
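+/*****************************************************************************/
+/**
+ * qdma_s80_hard_dump_reg_info() - dump the registers and their bitfields
+ * starting from a given register address
+ *
+ * @dev_hndl:	device handle
+ * @reg_addr:	address of the first register to dump
+ * @num_regs:	number of registers to dump
+ * @buf:	pointer to the buffer to hold the dumped data
+ * @buflen:	length of the buffer
+ *
+ * Return:	length of data in the buffer on success and < 0 - failure
+ *****************************************************************************/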
+int qdma_s80_hard_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+		uint32_t num_regs, char *buf, uint32_t buflen)
+{
+	uint32_t total_num_regs = qdma_s80_hard_config_num_regs_get();
+	struct xreg_info *config_regs  = qdma_s80_hard_config_regs_get();
+	const char *bitfield_name;
+	uint32_t i = 0, num_regs_idx = 0, k = 0, j = 0,
+			bitfield = 0, lsb = 0, msb = 31;
+	int rv = 0;
+	uint32_t reg_val;
+	uint32_t data_len = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	for (i = 0; i < total_num_regs; i++) {
+		if (reg_addr == config_regs[i].addr) {
+			j = i;
+			break;
+		}
+	}
+
+	if (i == total_num_regs) {
+		qdma_log_error("%s: Register not found, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		if (buf)
+			QDMA_SNPRINTF_S(buf, buflen,
+					DEBGFS_LINE_SZ,
+					"Register not found 0x%x\n", reg_addr);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	num_regs_idx = (j + num_regs < total_num_regs) ?
+					(j + num_regs) : total_num_regs;
+
+	for (; j < num_regs_idx ; j++) {
+		reg_val = qdma_reg_read(dev_hndl,
+				config_regs[j].addr);
+
+		if (buf) {
+			rv = QDMA_SNPRINTF_S(buf, buflen,
+						DEBGFS_LINE_SZ,
+						"\n%-40s 0x%-7x %-#10x %-10d\n",
+						config_regs[j].name,
+						config_regs[j].addr,
+						reg_val, reg_val);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%s: Insufficient buffer, err:%d\n",
+					__func__, -QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			buf += rv;
+			data_len += rv;
+			buflen -= rv;
+		} else {
+			qdma_log_info("%-40s 0x%-7x %-#10x %-10d\n",
+						  config_regs[j].name,
+						  config_regs[j].addr,
+						  reg_val, reg_val);
+		}
+
+		for (k = 0;
+			 k < config_regs[j].num_bitfields; k++) {
+			bitfield =
+				config_regs[j].bitfields[k].field_mask;
+			bitfield_name =
+				config_regs[j].bitfields[k].field_name;
+			lsb = 0;
+			msb = 31;
+
+			/* locate the least and most significant set
+			 * bits of this bitfield mask
+			 */
+			while (!(BIT(lsb) & bitfield))
+				lsb++;
+
+			while (!(BIT(msb) & bitfield))
+				msb--;
+
+			if (msb != lsb) {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%2u,%2u]   %#-10x\n",
+							bitfield_name,
+							msb, lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error
+							("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info
+						("%-40s [%2u,%2u]   %#-10x\n",
+						bitfield_name,
+						msb, lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+			} else {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%5u]   %#-10x\n",
+							bitfield_name,
+							lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error
+							("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info
+						("%-40s [%5u]   %#-10x\n",
+						bitfield_name,
+						lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+			}
+		}
+	}
+
+	return data_len;
+}
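+
+/*
+ * Usage sketch (illustrative only): dumping one register and its
+ * bitfields into a caller supplied buffer; the buffer size below is
+ * an arbitrary example value.
+ *
+ *	char buf[4096];
+ *	int len = qdma_s80_hard_dump_reg_info(dev_hndl,
+ *			QDMA_S80_HARD_GLBL_DSC_CFG_ADDR, 1,
+ *			buf, sizeof(buf));
+ *	if (len > 0)
+ *		qdma_log_info("%s", buf);
+ */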
diff --git a/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
new file mode 100644
index 0000000000..4e5e55de58
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
@@ -0,0 +1,266 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_S80_HARD_ACCESS_H_
+#define __QDMA_S80_HARD_ACCESS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform.h"
+
+/**
+ * enum qdma_s80_hard_error_idx - qdma s80 hard errors
+ */
+enum qdma_s80_hard_error_idx {
+	/* Descriptor errors */
+	QDMA_S80_HARD_DSC_ERR_POISON,
+	QDMA_S80_HARD_DSC_ERR_UR_CA,
+	QDMA_S80_HARD_DSC_ERR_PARAM,
+	QDMA_S80_HARD_DSC_ERR_ADDR,
+	QDMA_S80_HARD_DSC_ERR_TAG,
+	QDMA_S80_HARD_DSC_ERR_FLR,
+	QDMA_S80_HARD_DSC_ERR_TIMEOUT,
+	QDMA_S80_HARD_DSC_ERR_DAT_POISON,
+	QDMA_S80_HARD_DSC_ERR_FLR_CANCEL,
+	QDMA_S80_HARD_DSC_ERR_DMA,
+	QDMA_S80_HARD_DSC_ERR_DSC,
+	QDMA_S80_HARD_DSC_ERR_RQ_CANCEL,
+	QDMA_S80_HARD_DSC_ERR_DBE,
+	QDMA_S80_HARD_DSC_ERR_SBE,
+	QDMA_S80_HARD_DSC_ERR_ALL,
+
+	/* TRQ Errors */
+	QDMA_S80_HARD_TRQ_ERR_UNMAPPED,
+	QDMA_S80_HARD_TRQ_ERR_QID_RANGE,
+	QDMA_S80_HARD_TRQ_ERR_VF_ACCESS_ERR,
+	QDMA_S80_HARD_TRQ_ERR_TCP_TIMEOUT,
+	QDMA_S80_HARD_TRQ_ERR_ALL,
+
+	/* C2H Errors */
+	QDMA_S80_HARD_ST_C2H_ERR_MTY_MISMATCH,
+	QDMA_S80_HARD_ST_C2H_ERR_LEN_MISMATCH,
+	QDMA_S80_HARD_ST_C2H_ERR_QID_MISMATCH,
+	QDMA_S80_HARD_ST_C2H_ERR_DESC_RSP_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_MSI_INT_FAIL,
+	QDMA_S80_HARD_ST_C2H_ERR_ERR_DESC_CNT,
+	QDMA_S80_HARD_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+	QDMA_S80_HARD_ST_C2H_ERR_PORTID_BYP_IN_MISMATCH,
+	QDMA_S80_HARD_ST_C2H_ERR_WRB_INV_Q_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_WRB_QFULL_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_WRB_CIDX_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_WRB_PRTY_ERR,
+	QDMA_S80_HARD_ST_C2H_ERR_ALL,
+
+	/* Fatal Errors */
+	QDMA_S80_HARD_ST_FATAL_ERR_MTY_MISMATCH,
+	QDMA_S80_HARD_ST_FATAL_ERR_LEN_MISMATCH,
+	QDMA_S80_HARD_ST_FATAL_ERR_QID_MISMATCH,
+	QDMA_S80_HARD_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_WRB_CTXT_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_INT_QID2VEC_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_WRB_COAL_DATA_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_TUSER_FIFO_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+	QDMA_S80_HARD_ST_FATAL_ERR_WPL_DATA_PAR_ERR,
+	QDMA_S80_HARD_ST_FATAL_ERR_ALL,
+
+	/* H2C Errors */
+	QDMA_S80_HARD_ST_H2C_ERR_ZERO_LEN_DESC_ERR,
+	QDMA_S80_HARD_ST_H2C_ERR_SDI_MRKR_REQ_MOP_ERR,
+	QDMA_S80_HARD_ST_H2C_ERR_NO_DMA_DSC,
+	QDMA_S80_HARD_ST_H2C_ERR_DBE,
+	QDMA_S80_HARD_ST_H2C_ERR_SBE,
+	QDMA_S80_HARD_ST_H2C_ERR_ALL,
+
+	/* Single bit errors */
+	QDMA_S80_HARD_SBE_ERR_MI_H2C0_DAT,
+	QDMA_S80_HARD_SBE_ERR_MI_C2H0_DAT,
+	QDMA_S80_HARD_SBE_ERR_H2C_RD_BRG_DAT,
+	QDMA_S80_HARD_SBE_ERR_H2C_WR_BRG_DAT,
+	QDMA_S80_HARD_SBE_ERR_C2H_RD_BRG_DAT,
+	QDMA_S80_HARD_SBE_ERR_C2H_WR_BRG_DAT,
+	QDMA_S80_HARD_SBE_ERR_FUNC_MAP,
+	QDMA_S80_HARD_SBE_ERR_DSC_HW_CTXT,
+	QDMA_S80_HARD_SBE_ERR_DSC_CRD_RCV,
+	QDMA_S80_HARD_SBE_ERR_DSC_SW_CTXT,
+	QDMA_S80_HARD_SBE_ERR_DSC_CPLI,
+	QDMA_S80_HARD_SBE_ERR_DSC_CPLD,
+	QDMA_S80_HARD_SBE_ERR_PASID_CTXT_RAM,
+	QDMA_S80_HARD_SBE_ERR_TIMER_FIFO_RAM,
+	QDMA_S80_HARD_SBE_ERR_PAYLOAD_FIFO_RAM,
+	QDMA_S80_HARD_SBE_ERR_QID_FIFO_RAM,
+	QDMA_S80_HARD_SBE_ERR_TUSER_FIFO_RAM,
+	QDMA_S80_HARD_SBE_ERR_WRB_COAL_DATA_RAM,
+	QDMA_S80_HARD_SBE_ERR_INT_QID2VEC_RAM,
+	QDMA_S80_HARD_SBE_ERR_INT_CTXT_RAM,
+	QDMA_S80_HARD_SBE_ERR_DESC_REQ_FIFO_RAM,
+	QDMA_S80_HARD_SBE_ERR_PFCH_CTXT_RAM,
+	QDMA_S80_HARD_SBE_ERR_WRB_CTXT_RAM,
+	QDMA_S80_HARD_SBE_ERR_PFCH_LL_RAM,
+	QDMA_S80_HARD_SBE_ERR_ALL,
+
+	/* Double bit Errors */
+	QDMA_S80_HARD_DBE_ERR_MI_H2C0_DAT,
+	QDMA_S80_HARD_DBE_ERR_MI_C2H0_DAT,
+	QDMA_S80_HARD_DBE_ERR_H2C_RD_BRG_DAT,
+	QDMA_S80_HARD_DBE_ERR_H2C_WR_BRG_DAT,
+	QDMA_S80_HARD_DBE_ERR_C2H_RD_BRG_DAT,
+	QDMA_S80_HARD_DBE_ERR_C2H_WR_BRG_DAT,
+	QDMA_S80_HARD_DBE_ERR_FUNC_MAP,
+	QDMA_S80_HARD_DBE_ERR_DSC_HW_CTXT,
+	QDMA_S80_HARD_DBE_ERR_DSC_CRD_RCV,
+	QDMA_S80_HARD_DBE_ERR_DSC_SW_CTXT,
+	QDMA_S80_HARD_DBE_ERR_DSC_CPLI,
+	QDMA_S80_HARD_DBE_ERR_DSC_CPLD,
+	QDMA_S80_HARD_DBE_ERR_PASID_CTXT_RAM,
+	QDMA_S80_HARD_DBE_ERR_TIMER_FIFO_RAM,
+	QDMA_S80_HARD_DBE_ERR_PAYLOAD_FIFO_RAM,
+	QDMA_S80_HARD_DBE_ERR_QID_FIFO_RAM,
+	QDMA_S80_HARD_DBE_ERR_WRB_COAL_DATA_RAM,
+	QDMA_S80_HARD_DBE_ERR_INT_QID2VEC_RAM,
+	QDMA_S80_HARD_DBE_ERR_INT_CTXT_RAM,
+	QDMA_S80_HARD_DBE_ERR_DESC_REQ_FIFO_RAM,
+	QDMA_S80_HARD_DBE_ERR_PFCH_CTXT_RAM,
+	QDMA_S80_HARD_DBE_ERR_WRB_CTXT_RAM,
+	QDMA_S80_HARD_DBE_ERR_PFCH_LL_RAM,
+	QDMA_S80_HARD_DBE_ERR_ALL,
+
+	QDMA_S80_HARD_ERRS_ALL
+};
+
+struct qdma_s80_hard_hw_err_info {
+	enum qdma_s80_hard_error_idx idx;
+	const char *err_name;
+	uint32_t mask_reg_addr;
+	uint32_t stat_reg_addr;
+	uint32_t leaf_err_mask;
+	uint32_t global_err_mask;
+	void (*qdma_s80_hard_hw_err_process)(void *dev_hndl);
+};
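+
+/*
+ * Each entry of this table maps a logical error index to its mask and
+ * status registers plus a handler, keeping error processing table
+ * driven. Usage sketch (illustrative only) of the error APIs declared
+ * below:
+ *
+ *	if (qdma_s80_hard_hw_error_enable(dev_hndl,
+ *			QDMA_S80_HARD_ERRS_ALL) == QDMA_SUCCESS)
+ *		qdma_s80_hard_hw_error_process(dev_hndl);
+ */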
+
+
+int qdma_s80_hard_init_ctxt_memory(void *dev_hndl);
+
+int qdma_s80_hard_qid2vec_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			 struct qdma_qid2vec *ctxt,
+			 enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_fmap_conf(void *dev_hndl, uint16_t func_id,
+			struct qdma_fmap_cfg *config,
+			enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_sw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+				struct qdma_descq_prefetch_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_credit_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			struct qdma_descq_credit_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+				struct qdma_indirect_intr_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_set_default_global_csr(void *dev_hndl);
+
+int qdma_s80_hard_queue_pidx_update(void *dev_hndl, uint8_t is_vf, uint16_t qid,
+		uint8_t is_c2h, const struct qdma_q_pidx_reg_info *reg_info);
+
+int qdma_s80_hard_queue_cmpt_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_q_cmpt_cidx_reg_info *reg_info);
+
+int qdma_s80_hard_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_intr_cidx_reg_info *reg_info);
+
+int qdma_cmp_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar);
+
+int qdma_s80_hard_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info);
+
+uint32_t qdma_s80_hard_reg_dump_buf_len(void);
+
+int qdma_s80_hard_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *req_buflen);
+
+int qdma_s80_hard_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen);
+
+int qdma_s80_hard_hw_error_process(void *dev_hndl);
+const char *qdma_s80_hard_hw_get_error_name(uint32_t err_idx);
+int qdma_s80_hard_hw_error_enable(void *dev_hndl, uint32_t err_idx);
+
+int qdma_s80_hard_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen);
+
+int qdma_s80_hard_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen);
+
+int qdma_s80_hard_read_dump_queue_context(void *dev_hndl,
+		uint16_t qid_hw,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		char *buf, uint32_t buflen);
+
+int qdma_s80_hard_dump_config_reg_list(void *dev_hndl,
+		uint32_t total_regs,
+		struct qdma_reg_data *reg_list,
+		char *buf, uint32_t buflen);
+
+int qdma_s80_hard_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_slot,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list);
+
+int qdma_s80_hard_global_csr_conf(void *dev_hndl, uint8_t index,
+				uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type);
+
+int qdma_s80_hard_mm_channel_conf(void *dev_hndl, uint8_t channel,
+				uint8_t is_c2h,
+				uint8_t enable);
+
+int qdma_s80_hard_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+				uint32_t num_regs, char *buf, uint32_t buflen);
+
+uint32_t qdma_s80_hard_get_config_num_regs(void);
+
+struct xreg_info *qdma_s80_hard_get_config_regs(void);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_S80_HARD_ACCESS_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
new file mode 100644
index 0000000000..d9e431bd2b
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
@@ -0,0 +1,2031 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_S80_HARD_REG_H
+#define __QDMA_S80_HARD_REG_H
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "qdma_platform.h"
+
+#ifdef CHAR_BIT
+#undef CHAR_BIT
+#endif
+#define CHAR_BIT 8
+
+#ifdef BIT
+#undef BIT
+#endif
+#define BIT(n)                  (1u << (n))
+
+#ifdef BITS_PER_BYTE
+#undef BITS_PER_BYTE
+#endif
+#define BITS_PER_BYTE           CHAR_BIT
+
+#ifdef BITS_PER_LONG
+#undef BITS_PER_LONG
+#endif
+#define BITS_PER_LONG           (sizeof(uint32_t) * BITS_PER_BYTE)
+
+#ifdef BITS_PER_LONG_LONG
+#undef BITS_PER_LONG_LONG
+#endif
+#define BITS_PER_LONG_LONG      (sizeof(uint64_t) * BITS_PER_BYTE)
+
+#ifdef GENMASK
+#undef GENMASK
+#endif
+#define GENMASK(h, l) \
+	((0xFFFFFFFF << (l)) & (0xFFFFFFFF >> (BITS_PER_LONG - 1 - (h))))
+
+#ifdef GENMASK_ULL
+#undef GENMASK_ULL
+#endif
+#define GENMASK_ULL(h, l) \
+	((0xFFFFFFFFFFFFFFFF << (l)) & \
+			(0xFFFFFFFFFFFFFFFF >> (BITS_PER_LONG_LONG - 1 - (h))))
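+
+/*
+ * GENMASK(h, l) builds a contiguous mask covering bit positions h down
+ * to l, e.g. GENMASK(15, 0) == 0x0000FFFF and
+ * GENMASK(22, 11) == 0x007FF800, matching the field masks below.
+ */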
+
+#define DEBGFS_LINE_SZ			(81)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+
+uint32_t qdma_s80_hard_config_num_regs_get(void);
+struct xreg_info *qdma_s80_hard_config_regs_get(void);
+#define QDMA_S80_HARD_CFG_BLK_IDENTIFIER_ADDR              0x00
+#define CFG_BLK_IDENTIFIER_MASK                           GENMASK(31, 20)
+#define CFG_BLK_IDENTIFIER_1_MASK                         GENMASK(19, 16)
+#define CFG_BLK_IDENTIFIER_RSVD_1_MASK                     GENMASK(15, 8)
+#define CFG_BLK_IDENTIFIER_VERSION_MASK                    GENMASK(7, 0)
+#define QDMA_S80_HARD_CFG_BLK_BUSDEV_ADDR                  0x04
+#define CFG_BLK_BUSDEV_BDF_MASK                            GENMASK(15, 0)
+#define QDMA_S80_HARD_CFG_BLK_PCIE_MAX_PLD_SIZE_ADDR       0x08
+#define CFG_BLK_PCIE_MAX_PLD_SIZE_MASK                    GENMASK(2, 0)
+#define QDMA_S80_HARD_CFG_BLK_PCIE_MAX_READ_REQ_SIZE_ADDR  0x0C
+#define CFG_BLK_PCIE_MAX_READ_REQ_SIZE_MASK               GENMASK(2, 0)
+#define QDMA_S80_HARD_CFG_BLK_SYSTEM_ID_ADDR               0x10
+#define CFG_BLK_SYSTEM_ID_MASK                            GENMASK(15, 0)
+#define QDMA_S80_HARD_CFG_BLK_MSI_ENABLE_ADDR              0x014
+#define CFG_BLK_MSI_ENABLE_3_MASK                          BIT(17)
+#define CFG_BLK_MSI_ENABLE_MSIX3_MASK                      BIT(16)
+#define CFG_BLK_MSI_ENABLE_2_MASK                          BIT(13)
+#define CFG_BLK_MSI_ENABLE_MSIX2_MASK                      BIT(12)
+#define CFG_BLK_MSI_ENABLE_1_MASK                          BIT(9)
+#define CFG_BLK_MSI_ENABLE_MSIX1_MASK                      BIT(8)
+#define CFG_BLK_MSI_ENABLE_0_MASK                          BIT(1)
+#define CFG_BLK_MSI_ENABLE_MSIX0_MASK                      BIT(0)
+#define QDMA_S80_HARD_CFG_PCIE_DATA_WIDTH_ADDR             0x18
+#define CFG_PCIE_DATA_WIDTH_DATAPATH_MASK                  GENMASK(2, 0)
+#define QDMA_S80_HARD_CFG_PCIE_CTL_ADDR                    0x1C
+#define CFG_PCIE_CTL_RRQ_DISABLE_MASK                      BIT(1)
+#define CFG_PCIE_CTL_RELAXED_ORDERING_MASK                 BIT(0)
+#define QDMA_S80_HARD_CFG_AXI_USER_MAX_PLD_SIZE_ADDR       0x40
+#define CFG_AXI_USER_MAX_PLD_SIZE_ISSUED_MASK              GENMASK(6, 4)
+#define CFG_AXI_USER_MAX_PLD_SIZE_PROG_MASK                GENMASK(2, 0)
+#define QDMA_S80_HARD_CFG_AXI_USER_MAX_READ_REQ_SIZE_ADDR  0x44
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED_MASK       GENMASK(6, 4)
+#define CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG_MASK         GENMASK(2, 0)
+#define QDMA_S80_HARD_CFG_BLK_MISC_CTL_ADDR                0x4C
+#define CFG_BLK_MISC_CTL_NUM_TAG_MASK                      GENMASK(19, 8)
+#define CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER_MASK       GENMASK(4, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_0_ADDR               0x80
+#define CFG_BLK_SCRATCH_0_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_1_ADDR               0x84
+#define CFG_BLK_SCRATCH_1_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_2_ADDR               0x88
+#define CFG_BLK_SCRATCH_2_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_3_ADDR               0x8C
+#define CFG_BLK_SCRATCH_3_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_4_ADDR               0x90
+#define CFG_BLK_SCRATCH_4_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_5_ADDR               0x94
+#define CFG_BLK_SCRATCH_5_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_6_ADDR               0x98
+#define CFG_BLK_SCRATCH_6_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_CFG_BLK_SCRATCH_7_ADDR               0x9C
+#define CFG_BLK_SCRATCH_7_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_RAM_SBE_MSK_A_ADDR                   0xF0
+#define RAM_SBE_MSK_A_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_RAM_SBE_STS_A_ADDR                   0xF4
+#define RAM_SBE_STS_A_RSVD_1_MASK                          BIT(31)
+#define RAM_SBE_STS_A_PFCH_LL_RAM_MASK                     BIT(30)
+#define RAM_SBE_STS_A_WRB_CTXT_RAM_MASK                    BIT(29)
+#define RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK                   BIT(28)
+#define RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK               BIT(27)
+#define RAM_SBE_STS_A_INT_CTXT_RAM_MASK                    BIT(26)
+#define RAM_SBE_STS_A_INT_QID2VEC_RAM_MASK                 BIT(25)
+#define RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK               BIT(24)
+#define RAM_SBE_STS_A_TUSER_FIFO_RAM_MASK                  BIT(23)
+#define RAM_SBE_STS_A_QID_FIFO_RAM_MASK                    BIT(22)
+#define RAM_SBE_STS_A_PLD_FIFO_RAM_MASK                    BIT(21)
+#define RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK                  BIT(20)
+#define RAM_SBE_STS_A_PASID_CTXT_RAM_MASK                  BIT(19)
+#define RAM_SBE_STS_A_DSC_CPLD_MASK                        BIT(18)
+#define RAM_SBE_STS_A_DSC_CPLI_MASK                        BIT(17)
+#define RAM_SBE_STS_A_DSC_SW_CTXT_MASK                     BIT(16)
+#define RAM_SBE_STS_A_DSC_CRD_RCV_MASK                     BIT(15)
+#define RAM_SBE_STS_A_DSC_HW_CTXT_MASK                     BIT(14)
+#define RAM_SBE_STS_A_FUNC_MAP_MASK                        BIT(13)
+#define RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK                  BIT(12)
+#define RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK                  BIT(11)
+#define RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK                  BIT(10)
+#define RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK                  BIT(9)
+#define RAM_SBE_STS_A_RSVD_2_MASK                          GENMASK(8, 5)
+#define RAM_SBE_STS_A_MI_C2H0_DAT_MASK                     BIT(4)
+#define RAM_SBE_STS_A_RSVD_3_MASK                          GENMASK(3, 1)
+#define RAM_SBE_STS_A_MI_H2C0_DAT_MASK                     BIT(0)
+#define QDMA_S80_HARD_RAM_DBE_MSK_A_ADDR                   0xF8
+#define RAM_DBE_MSK_A_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_RAM_DBE_STS_A_ADDR                   0xFC
+#define RAM_DBE_STS_A_RSVD_1_MASK                          BIT(31)
+#define RAM_DBE_STS_A_PFCH_LL_RAM_MASK                     BIT(30)
+#define RAM_DBE_STS_A_WRB_CTXT_RAM_MASK                    BIT(29)
+#define RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK                   BIT(28)
+#define RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK               BIT(27)
+#define RAM_DBE_STS_A_INT_CTXT_RAM_MASK                    BIT(26)
+#define RAM_DBE_STS_A_INT_QID2VEC_RAM_MASK                 BIT(25)
+#define RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK               BIT(24)
+#define RAM_DBE_STS_A_TUSER_FIFO_RAM_MASK                  BIT(23)
+#define RAM_DBE_STS_A_QID_FIFO_RAM_MASK                    BIT(22)
+#define RAM_DBE_STS_A_PLD_FIFO_RAM_MASK                    BIT(21)
+#define RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK                  BIT(20)
+#define RAM_DBE_STS_A_PASID_CTXT_RAM_MASK                  BIT(19)
+#define RAM_DBE_STS_A_DSC_CPLD_MASK                        BIT(18)
+#define RAM_DBE_STS_A_DSC_CPLI_MASK                        BIT(17)
+#define RAM_DBE_STS_A_DSC_SW_CTXT_MASK                     BIT(16)
+#define RAM_DBE_STS_A_DSC_CRD_RCV_MASK                     BIT(15)
+#define RAM_DBE_STS_A_DSC_HW_CTXT_MASK                     BIT(14)
+#define RAM_DBE_STS_A_FUNC_MAP_MASK                        BIT(13)
+#define RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK                  BIT(12)
+#define RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK                  BIT(11)
+#define RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK                  BIT(10)
+#define RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK                  BIT(9)
+#define RAM_DBE_STS_A_RSVD_2_MASK                          GENMASK(8, 5)
+#define RAM_DBE_STS_A_MI_C2H0_DAT_MASK                     BIT(4)
+#define RAM_DBE_STS_A_RSVD_3_MASK                          GENMASK(3, 1)
+#define RAM_DBE_STS_A_MI_H2C0_DAT_MASK                     BIT(0)
+#define QDMA_S80_HARD_GLBL2_IDENTIFIER_ADDR                0x100
+#define GLBL2_IDENTIFIER_MASK                             GENMASK(31, 8)
+#define GLBL2_IDENTIFIER_VERSION_MASK                      GENMASK(7, 0)
+#define QDMA_S80_HARD_GLBL2_PF_BARLITE_INT_ADDR            0x104
+#define GLBL2_PF_BARLITE_INT_PF3_BAR_MAP_MASK              GENMASK(23, 18)
+#define GLBL2_PF_BARLITE_INT_PF2_BAR_MAP_MASK              GENMASK(17, 12)
+#define GLBL2_PF_BARLITE_INT_PF1_BAR_MAP_MASK              GENMASK(11, 6)
+#define GLBL2_PF_BARLITE_INT_PF0_BAR_MAP_MASK              GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL2_PF_VF_BARLITE_INT_ADDR         0x108
+#define GLBL2_PF_VF_BARLITE_INT_PF3_MAP_MASK               GENMASK(23, 18)
+#define GLBL2_PF_VF_BARLITE_INT_PF2_MAP_MASK               GENMASK(17, 12)
+#define GLBL2_PF_VF_BARLITE_INT_PF1_MAP_MASK               GENMASK(11, 6)
+#define GLBL2_PF_VF_BARLITE_INT_PF0_MAP_MASK               GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL2_PF_BARLITE_EXT_ADDR            0x10C
+#define GLBL2_PF_BARLITE_EXT_PF3_BAR_MAP_MASK              GENMASK(23, 18)
+#define GLBL2_PF_BARLITE_EXT_PF2_BAR_MAP_MASK              GENMASK(17, 12)
+#define GLBL2_PF_BARLITE_EXT_PF1_BAR_MAP_MASK              GENMASK(11, 6)
+#define GLBL2_PF_BARLITE_EXT_PF0_BAR_MAP_MASK              GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL2_PF_VF_BARLITE_EXT_ADDR         0x110
+#define GLBL2_PF_VF_BARLITE_EXT_PF3_MAP_MASK               GENMASK(23, 18)
+#define GLBL2_PF_VF_BARLITE_EXT_PF2_MAP_MASK               GENMASK(17, 12)
+#define GLBL2_PF_VF_BARLITE_EXT_PF1_MAP_MASK               GENMASK(11, 6)
+#define GLBL2_PF_VF_BARLITE_EXT_PF0_MAP_MASK               GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_INST_ADDR              0x114
+#define GLBL2_CHANNEL_INST_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_INST_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_INST_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_INST_RSVD_2_MASK                     GENMASK(15, 9)
+#define GLBL2_CHANNEL_INST_C2H_ENG_MASK                    BIT(8)
+#define GLBL2_CHANNEL_INST_RSVD_3_MASK                     GENMASK(7, 1)
+#define GLBL2_CHANNEL_INST_H2C_ENG_MASK                    BIT(0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_MDMA_ADDR              0x118
+#define GLBL2_CHANNEL_MDMA_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_MDMA_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_MDMA_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_MDMA_RSVD_2_MASK                     GENMASK(15, 9)
+#define GLBL2_CHANNEL_MDMA_C2H_ENG_MASK                    BIT(8)
+#define GLBL2_CHANNEL_MDMA_RSVD_3_MASK                     GENMASK(7, 1)
+#define GLBL2_CHANNEL_MDMA_H2C_ENG_MASK                    BIT(0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_STRM_ADDR              0x11C
+#define GLBL2_CHANNEL_STRM_RSVD_1_MASK                     GENMASK(31, 18)
+#define GLBL2_CHANNEL_STRM_C2H_ST_MASK                     BIT(17)
+#define GLBL2_CHANNEL_STRM_H2C_ST_MASK                     BIT(16)
+#define GLBL2_CHANNEL_STRM_RSVD_2_MASK                     GENMASK(15, 9)
+#define GLBL2_CHANNEL_STRM_C2H_ENG_MASK                    BIT(8)
+#define GLBL2_CHANNEL_STRM_RSVD_3_MASK                     GENMASK(7, 1)
+#define GLBL2_CHANNEL_STRM_H2C_ENG_MASK                    BIT(0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_CAP_ADDR               0x120
+#define GLBL2_CHANNEL_CAP_RSVD_1_MASK                      GENMASK(31, 12)
+#define GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK                  GENMASK(11, 0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_PASID_CAP_ADDR         0x128
+#define GLBL2_CHANNEL_PASID_CAP_RSVD_1_MASK                GENMASK(31, 16)
+#define GLBL2_CHANNEL_PASID_CAP_BRIDGEOFFSET_MASK          GENMASK(15, 4)
+#define GLBL2_CHANNEL_PASID_CAP_RSVD_2_MASK                GENMASK(3, 2)
+#define GLBL2_CHANNEL_PASID_CAP_BRIDGEEN_MASK              BIT(1)
+#define GLBL2_CHANNEL_PASID_CAP_DMAEN_MASK                 BIT(0)
+#define QDMA_S80_HARD_GLBL2_CHANNEL_FUNC_RET_ADDR          0x12C
+#define GLBL2_CHANNEL_FUNC_RET_RSVD_1_MASK                 GENMASK(31, 8)
+#define GLBL2_CHANNEL_FUNC_RET_FUNC_MASK                   GENMASK(7, 0)
+#define QDMA_S80_HARD_GLBL2_SYSTEM_ID_ADDR                 0x130
+#define GLBL2_SYSTEM_ID_RSVD_1_MASK                        GENMASK(31, 16)
+#define GLBL2_SYSTEM_ID_MASK                              GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL2_MISC_CAP_ADDR                  0x134
+#define GLBL2_MISC_CAP_RSVD_1_MASK                         GENMASK(31, 0)
+#define QDMA_S80_HARD_GLBL2_DBG_PCIE_RQ0_ADDR              0x1B8
+#define GLBL2_PCIE_RQ0_NPH_AVL_MASK                    GENMASK(31, 20)
+#define GLBL2_PCIE_RQ0_RCB_AVL_MASK                    GENMASK(19, 10)
+#define GLBL2_PCIE_RQ0_SLV_RD_CREDS_MASK               GENMASK(9, 4)
+#define GLBL2_PCIE_RQ0_TAG_EP_MASK                     GENMASK(3, 2)
+#define GLBL2_PCIE_RQ0_TAG_FL_MASK                     GENMASK(1, 0)
+#define QDMA_S80_HARD_GLBL2_DBG_PCIE_RQ1_ADDR              0x1BC
+#define GLBL2_PCIE_RQ1_RSVD_1_MASK                     GENMASK(31, 17)
+#define GLBL2_PCIE_RQ1_WTLP_REQ_MASK                   BIT(16)
+#define GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL_MASK        BIT(15)
+#define GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP_MASK        BIT(14)
+#define GLBL2_PCIE_RQ1_RQ_FIFO_EP_MASK                 BIT(13)
+#define GLBL2_PCIE_RQ1_RQ_FIFO_FL_MASK                 BIT(12)
+#define GLBL2_PCIE_RQ1_TLPSM_MASK                      GENMASK(11, 9)
+#define GLBL2_PCIE_RQ1_TLPSM512_MASK                   GENMASK(8, 6)
+#define GLBL2_PCIE_RQ1_RREQ0_RCB_OK_MASK               BIT(5)
+#define GLBL2_PCIE_RQ1_RREQ0_SLV_MASK                  BIT(4)
+#define GLBL2_PCIE_RQ1_RREQ0_VLD_MASK                  BIT(3)
+#define GLBL2_PCIE_RQ1_RREQ1_RCB_OK_MASK               BIT(2)
+#define GLBL2_PCIE_RQ1_RREQ1_SLV_MASK                  BIT(1)
+#define GLBL2_PCIE_RQ1_RREQ1_VLD_MASK                  BIT(0)
+#define QDMA_S80_HARD_GLBL2_DBG_AXIMM_WR0_ADDR             0x1C0
+#define GLBL2_AXIMM_WR0_RSVD_1_MASK                    GENMASK(31, 27)
+#define GLBL2_AXIMM_WR0_WR_REQ_MASK                    BIT(26)
+#define GLBL2_AXIMM_WR0_WR_CHN_MASK                    GENMASK(25, 23)
+#define GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP_MASK         BIT(22)
+#define GLBL2_AXIMM_WR0_WPL_FIFO_EP_MASK               BIT(21)
+#define GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN_MASK            GENMASK(20, 18)
+#define GLBL2_AXIMM_WR0_WRREQ_CNT_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_WR0_BID_MASK                       GENMASK(11, 9)
+#define GLBL2_AXIMM_WR0_BVALID_MASK                    BIT(8)
+#define GLBL2_AXIMM_WR0_BREADY_MASK                    BIT(7)
+#define GLBL2_AXIMM_WR0_WVALID_MASK                    BIT(6)
+#define GLBL2_AXIMM_WR0_WREADY_MASK                    BIT(5)
+#define GLBL2_AXIMM_WR0_AWID_MASK                      GENMASK(4, 2)
+#define GLBL2_AXIMM_WR0_AWVALID_MASK                   BIT(1)
+#define GLBL2_AXIMM_WR0_AWREADY_MASK                   BIT(0)
+#define QDMA_S80_HARD_GLBL2_DBG_AXIMM_WR1_ADDR             0x1C4
+#define GLBL2_AXIMM_WR1_RSVD_1_MASK                    GENMASK(31, 30)
+#define GLBL2_AXIMM_WR1_BRSP_CNT4_MASK                 GENMASK(29, 24)
+#define GLBL2_AXIMM_WR1_BRSP_CNT3_MASK                 GENMASK(23, 18)
+#define GLBL2_AXIMM_WR1_BRSP_CNT2_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_WR1_BRSP_CNT1_MASK                 GENMASK(11, 6)
+#define GLBL2_AXIMM_WR1_BRSP_CNT0_MASK                 GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL2_DBG_AXIMM_RD0_ADDR             0x1C8
+#define GLBL2_AXIMM_RD0_RSVD_1_MASK                    GENMASK(31, 23)
+#define GLBL2_AXIMM_RD0_PND_CNT_MASK                   GENMASK(22, 17)
+#define GLBL2_AXIMM_RD0_RD_CHNL_MASK                   GENMASK(16, 14)
+#define GLBL2_AXIMM_RD0_RD_REQ_MASK                    BIT(13)
+#define GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL_MASK           GENMASK(12, 10)
+#define GLBL2_AXIMM_RD0_RID_MASK                       GENMASK(9, 7)
+#define GLBL2_AXIMM_RD0_RVALID_MASK                    BIT(6)
+#define GLBL2_AXIMM_RD0_RREADY_MASK                    BIT(5)
+#define GLBL2_AXIMM_RD0_ARID_MASK                      GENMASK(4, 2)
+#define GLBL2_AXIMM_RD0_ARVALID_MASK                   BIT(1)
+#define GLBL2_AXIMM_RD0_ARREADY_MASK                   BIT(0)
+#define QDMA_S80_HARD_GLBL2_DBG_AXIMM_RD1_ADDR             0x1CC
+#define GLBL2_AXIMM_RD1_RSVD_1_MASK                    GENMASK(31, 30)
+#define GLBL2_AXIMM_RD1_RRSP_CNT4_MASK                 GENMASK(29, 24)
+#define GLBL2_AXIMM_RD1_RRSP_CNT3_MASK                 GENMASK(23, 18)
+#define GLBL2_AXIMM_RD1_RRSP_CNT2_MASK                 GENMASK(17, 12)
+#define GLBL2_AXIMM_RD1_RRSP_CNT1_MASK                 GENMASK(11, 6)
+#define GLBL2_AXIMM_RD1_RRSP_CNT0_MASK                 GENMASK(5, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_1_ADDR                   0x204
+#define GLBL_RNG_SZ_1_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_1_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_2_ADDR                   0x208
+#define GLBL_RNG_SZ_2_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_2_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_3_ADDR                   0x20C
+#define GLBL_RNG_SZ_3_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_3_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_4_ADDR                   0x210
+#define GLBL_RNG_SZ_4_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_4_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_5_ADDR                   0x214
+#define GLBL_RNG_SZ_5_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_5_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_6_ADDR                   0x218
+#define GLBL_RNG_SZ_6_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_6_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_7_ADDR                   0x21C
+#define GLBL_RNG_SZ_7_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_7_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_8_ADDR                   0x220
+#define GLBL_RNG_SZ_8_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_8_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_9_ADDR                   0x224
+#define GLBL_RNG_SZ_9_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_9_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_A_ADDR                   0x228
+#define GLBL_RNG_SZ_A_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_A_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_B_ADDR                   0x22C
+#define GLBL_RNG_SZ_B_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_B_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_C_ADDR                   0x230
+#define GLBL_RNG_SZ_C_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_C_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_D_ADDR                   0x234
+#define GLBL_RNG_SZ_D_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_D_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_E_ADDR                   0x238
+#define GLBL_RNG_SZ_E_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_E_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_F_ADDR                   0x23C
+#define GLBL_RNG_SZ_F_RSVD_1_MASK                          GENMASK(31, 16)
+#define GLBL_RNG_SZ_F_RING_SIZE_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_RNG_SZ_10_ADDR                  0x240
+#define GLBL_RNG_SZ_10_RSVD_1_MASK                         GENMASK(31, 16)
+#define GLBL_RNG_SZ_10_RING_SIZE_MASK                      GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_ERR_STAT_ADDR                   0x248
+#define GLBL_ERR_STAT_RSVD_1_MASK                          GENMASK(31, 12)
+#define GLBL_ERR_STAT_ERR_H2C_ST_MASK                      BIT(11)
+#define GLBL_ERR_STAT_ERR_BDG_MASK                         BIT(10)
+#define GLBL_ERR_STAT_IND_CTXT_CMD_ERR_MASK                BIT(9)
+#define GLBL_ERR_STAT_ERR_C2H_ST_MASK                      BIT(8)
+#define GLBL_ERR_STAT_ERR_C2H_MM_1_MASK                    BIT(7)
+#define GLBL_ERR_STAT_ERR_C2H_MM_0_MASK                    BIT(6)
+#define GLBL_ERR_STAT_ERR_H2C_MM_1_MASK                    BIT(5)
+#define GLBL_ERR_STAT_ERR_H2C_MM_0_MASK                    BIT(4)
+#define GLBL_ERR_STAT_ERR_TRQ_MASK                         BIT(3)
+#define GLBL_ERR_STAT_ERR_DSC_MASK                         BIT(2)
+#define GLBL_ERR_STAT_ERR_RAM_DBE_MASK                     BIT(1)
+#define GLBL_ERR_STAT_ERR_RAM_SBE_MASK                     BIT(0)
+#define QDMA_S80_HARD_GLBL_ERR_MASK_ADDR                   0x24C
+#define GLBL_ERR_RSVD_1_MASK                          GENMASK(31, 9)
+#define GLBL_ERR_MASK                            GENMASK(8, 0)
+#define QDMA_S80_HARD_GLBL_DSC_CFG_ADDR                    0x250
+#define GLBL_DSC_CFG_RSVD_1_MASK                           GENMASK(31, 10)
+#define GLBL_DSC_CFG_UNC_OVR_COR_MASK                      BIT(9)
+#define GLBL_DSC_CFG_CTXT_FER_DIS_MASK                     BIT(8)
+#define GLBL_DSC_CFG_RSVD_2_MASK                           GENMASK(7, 6)
+#define GLBL_DSC_CFG_MAXFETCH_MASK                         GENMASK(5, 3)
+#define GLBL_DSC_CFG_WB_ACC_INT_MASK                       GENMASK(2, 0)
+#define QDMA_S80_HARD_GLBL_DSC_ERR_STS_ADDR                0x254
+#define GLBL_DSC_ERR_STS_RSVD_1_MASK                       GENMASK(31, 25)
+#define GLBL_DSC_ERR_STS_SBE_MASK                          BIT(24)
+#define GLBL_DSC_ERR_STS_DBE_MASK                          BIT(23)
+#define GLBL_DSC_ERR_STS_RQ_CANCEL_MASK                    BIT(22)
+#define GLBL_DSC_ERR_STS_DSC_MASK                          BIT(21)
+#define GLBL_DSC_ERR_STS_DMA_MASK                          BIT(20)
+#define GLBL_DSC_ERR_STS_FLR_CANCEL_MASK                   BIT(19)
+#define GLBL_DSC_ERR_STS_RSVD_2_MASK                       GENMASK(18, 17)
+#define GLBL_DSC_ERR_STS_DAT_POISON_MASK                   BIT(16)
+#define GLBL_DSC_ERR_STS_TIMEOUT_MASK                      BIT(9)
+#define GLBL_DSC_ERR_STS_FLR_MASK                          BIT(5)
+#define GLBL_DSC_ERR_STS_TAG_MASK                          BIT(4)
+#define GLBL_DSC_ERR_STS_ADDR_MASK                         BIT(3)
+#define GLBL_DSC_ERR_STS_PARAM_MASK                        BIT(2)
+#define GLBL_DSC_ERR_STS_UR_CA_MASK                        BIT(1)
+#define GLBL_DSC_ERR_STS_POISON_MASK                       BIT(0)
+#define QDMA_S80_HARD_GLBL_DSC_ERR_MSK_ADDR                0x258
+#define GLBL_DSC_ERR_MSK_MASK                         GENMASK(8, 0)
+#define QDMA_S80_HARD_GLBL_DSC_ERR_LOG0_ADDR               0x25C
+#define GLBL_DSC_ERR_LOG0_VALID_MASK                       BIT(31)
+#define GLBL_DSC_ERR_LOG0_RSVD_1_MASK                      GENMASK(30, 29)
+#define GLBL_DSC_ERR_LOG0_QID_MASK                         GENMASK(28, 17)
+#define GLBL_DSC_ERR_LOG0_SEL_MASK                         BIT(16)
+#define GLBL_DSC_ERR_LOG0_CIDX_MASK                        GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_DSC_ERR_LOG1_ADDR               0x260
+#define GLBL_DSC_ERR_LOG1_RSVD_1_MASK                      GENMASK(31, 9)
+#define GLBL_DSC_ERR_LOG1_SUB_TYPE_MASK                    GENMASK(8, 5)
+#define GLBL_DSC_ERR_LOG1_ERR_TYPE_MASK                    GENMASK(4, 0)
+#define QDMA_S80_HARD_GLBL_TRQ_ERR_STS_ADDR                0x264
+#define GLBL_TRQ_ERR_STS_RSVD_1_MASK                       GENMASK(31, 4)
+#define GLBL_TRQ_ERR_STS_TCP_TIMEOUT_MASK                  BIT(3)
+#define GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK                BIT(2)
+#define GLBL_TRQ_ERR_STS_QID_RANGE_MASK                    BIT(1)
+#define GLBL_TRQ_ERR_STS_UNMAPPED_MASK                     BIT(0)
+#define QDMA_S80_HARD_GLBL_TRQ_ERR_MSK_ADDR                0x268
+#define GLBL_TRQ_ERR_MSK_MASK                         GENMASK(31, 0)
+#define QDMA_S80_HARD_GLBL_TRQ_ERR_LOG_ADDR                0x26C
+#define GLBL_TRQ_ERR_LOG_RSVD_1_MASK                       GENMASK(31, 28)
+#define GLBL_TRQ_ERR_LOG_TARGET_MASK                       GENMASK(27, 24)
+#define GLBL_TRQ_ERR_LOG_FUNC_MASK                         GENMASK(23, 16)
+#define GLBL_TRQ_ERR_LOG_ADDRESS_MASK                      GENMASK(15, 0)
+#define QDMA_S80_HARD_GLBL_DSC_DBG_DAT0_ADDR               0x270
+#define GLBL_DSC_DAT0_RSVD_1_MASK                      GENMASK(31, 30)
+#define GLBL_DSC_DAT0_CTXT_ARB_DIR_MASK                BIT(29)
+#define GLBL_DSC_DAT0_CTXT_ARB_QID_MASK                GENMASK(28, 17)
+#define GLBL_DSC_DAT0_CTXT_ARB_REQ_MASK                GENMASK(16, 12)
+#define GLBL_DSC_DAT0_IRQ_FIFO_FL_MASK                 BIT(11)
+#define GLBL_DSC_DAT0_TMSTALL_MASK                     BIT(10)
+#define GLBL_DSC_DAT0_RRQ_STALL_MASK                   GENMASK(9, 8)
+#define GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL_MASK          GENMASK(7, 6)
+#define GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL_MASK          GENMASK(5, 4)
+#define GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL_MASK          GENMASK(3, 2)
+#define GLBL_DSC_DAT0_DSC_OUT_STALL_MASK               GENMASK(1, 0)
+#define QDMA_S80_HARD_GLBL_DSC_DBG_DAT1_ADDR               0x274
+#define GLBL_DSC_DAT1_RSVD_1_MASK                      GENMASK(31, 28)
+#define GLBL_DSC_DAT1_EVT_SPC_C2H_MASK                 GENMASK(27, 22)
+#define GLBL_DSC_DAT1_EVT_SP_H2C_MASK                  GENMASK(21, 16)
+#define GLBL_DSC_DAT1_DSC_SPC_C2H_MASK                 GENMASK(15, 8)
+#define GLBL_DSC_DAT1_DSC_SPC_H2C_MASK                 GENMASK(7, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_0_ADDR                  0x400
+#define TRQ_SEL_FMAP_0_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_0_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_0_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1_ADDR                  0x404
+#define TRQ_SEL_FMAP_1_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2_ADDR                  0x408
+#define TRQ_SEL_FMAP_2_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3_ADDR                  0x40C
+#define TRQ_SEL_FMAP_3_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4_ADDR                  0x410
+#define TRQ_SEL_FMAP_4_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5_ADDR                  0x414
+#define TRQ_SEL_FMAP_5_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6_ADDR                  0x418
+#define TRQ_SEL_FMAP_6_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7_ADDR                  0x41C
+#define TRQ_SEL_FMAP_7_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8_ADDR                  0x420
+#define TRQ_SEL_FMAP_8_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9_ADDR                  0x424
+#define TRQ_SEL_FMAP_9_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A_ADDR                  0x428
+#define TRQ_SEL_FMAP_A_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B_ADDR                  0x42C
+#define TRQ_SEL_FMAP_B_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D_ADDR                  0x430
+#define TRQ_SEL_FMAP_D_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E_ADDR                  0x434
+#define TRQ_SEL_FMAP_E_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_F_ADDR                  0x438
+#define TRQ_SEL_FMAP_F_RSVD_1_MASK                         GENMASK(31, 23)
+#define TRQ_SEL_FMAP_F_QID_MAX_MASK                        GENMASK(22, 11)
+#define TRQ_SEL_FMAP_F_QID_BASE_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_10_ADDR                 0x43C
+#define TRQ_SEL_FMAP_10_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_10_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_10_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_11_ADDR                 0x440
+#define TRQ_SEL_FMAP_11_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_11_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_11_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_12_ADDR                 0x444
+#define TRQ_SEL_FMAP_12_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_12_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_12_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_13_ADDR                 0x448
+#define TRQ_SEL_FMAP_13_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_13_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_13_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_14_ADDR                 0x44C
+#define TRQ_SEL_FMAP_14_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_14_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_14_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_15_ADDR                 0x450
+#define TRQ_SEL_FMAP_15_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_15_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_15_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_16_ADDR                 0x454
+#define TRQ_SEL_FMAP_16_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_16_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_16_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_17_ADDR                 0x458
+#define TRQ_SEL_FMAP_17_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_17_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_17_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_18_ADDR                 0x45C
+#define TRQ_SEL_FMAP_18_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_18_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_18_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_19_ADDR                 0x460
+#define TRQ_SEL_FMAP_19_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_19_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_19_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1A_ADDR                 0x464
+#define TRQ_SEL_FMAP_1A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1B_ADDR                 0x468
+#define TRQ_SEL_FMAP_1B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1C_ADDR                 0x46C
+#define TRQ_SEL_FMAP_1C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1D_ADDR                 0x470
+#define TRQ_SEL_FMAP_1D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1E_ADDR                 0x474
+#define TRQ_SEL_FMAP_1E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_1F_ADDR                 0x478
+#define TRQ_SEL_FMAP_1F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_1F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_1F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_20_ADDR                 0x47C
+#define TRQ_SEL_FMAP_20_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_20_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_20_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_21_ADDR                 0x480
+#define TRQ_SEL_FMAP_21_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_21_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_21_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_22_ADDR                 0x484
+#define TRQ_SEL_FMAP_22_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_22_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_22_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_23_ADDR                 0x488
+#define TRQ_SEL_FMAP_23_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_23_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_23_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_24_ADDR                 0x48C
+#define TRQ_SEL_FMAP_24_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_24_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_24_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_25_ADDR                 0x490
+#define TRQ_SEL_FMAP_25_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_25_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_25_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_26_ADDR                 0x494
+#define TRQ_SEL_FMAP_26_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_26_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_26_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_27_ADDR                 0x498
+#define TRQ_SEL_FMAP_27_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_27_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_27_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_28_ADDR                 0x49C
+#define TRQ_SEL_FMAP_28_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_28_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_28_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_29_ADDR                 0x4A0
+#define TRQ_SEL_FMAP_29_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_29_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_29_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2A_ADDR                 0x4A4
+#define TRQ_SEL_FMAP_2A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2B_ADDR                 0x4A8
+#define TRQ_SEL_FMAP_2B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2C_ADDR                 0x4AC
+#define TRQ_SEL_FMAP_2C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2D_ADDR                 0x4B0
+#define TRQ_SEL_FMAP_2D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2E_ADDR                 0x4B4
+#define TRQ_SEL_FMAP_2E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_2F_ADDR                 0x4B8
+#define TRQ_SEL_FMAP_2F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_2F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_2F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_30_ADDR                 0x4BC
+#define TRQ_SEL_FMAP_30_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_30_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_30_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_31_ADDR                 0x4D0
+#define TRQ_SEL_FMAP_31_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_31_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_31_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_32_ADDR                 0x4D4
+#define TRQ_SEL_FMAP_32_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_32_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_32_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_33_ADDR                 0x4D8
+#define TRQ_SEL_FMAP_33_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_33_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_33_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_34_ADDR                 0x4DC
+#define TRQ_SEL_FMAP_34_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_34_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_34_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_35_ADDR                 0x4E0
+#define TRQ_SEL_FMAP_35_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_35_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_35_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_36_ADDR                 0x4E4
+#define TRQ_SEL_FMAP_36_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_36_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_36_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_37_ADDR                 0x4E8
+#define TRQ_SEL_FMAP_37_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_37_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_37_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_38_ADDR                 0x4EC
+#define TRQ_SEL_FMAP_38_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_38_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_38_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_39_ADDR                 0x4F0
+#define TRQ_SEL_FMAP_39_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_39_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_39_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3A_ADDR                 0x4F4
+#define TRQ_SEL_FMAP_3A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3B_ADDR                 0x4F8
+#define TRQ_SEL_FMAP_3B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3C_ADDR                 0x4FC
+#define TRQ_SEL_FMAP_3C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3D_ADDR                 0x500
+#define TRQ_SEL_FMAP_3D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3E_ADDR                 0x504
+#define TRQ_SEL_FMAP_3E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_3F_ADDR                 0x508
+#define TRQ_SEL_FMAP_3F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_3F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_3F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_40_ADDR                 0x50C
+#define TRQ_SEL_FMAP_40_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_40_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_40_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_41_ADDR                 0x510
+#define TRQ_SEL_FMAP_41_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_41_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_41_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_42_ADDR                 0x514
+#define TRQ_SEL_FMAP_42_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_42_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_42_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_43_ADDR                 0x518
+#define TRQ_SEL_FMAP_43_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_43_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_43_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_44_ADDR                 0x51C
+#define TRQ_SEL_FMAP_44_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_44_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_44_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_45_ADDR                 0x520
+#define TRQ_SEL_FMAP_45_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_45_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_45_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_46_ADDR                 0x524
+#define TRQ_SEL_FMAP_46_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_46_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_46_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_47_ADDR                 0x528
+#define TRQ_SEL_FMAP_47_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_47_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_47_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_48_ADDR                 0x52C
+#define TRQ_SEL_FMAP_48_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_48_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_48_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_49_ADDR                 0x530
+#define TRQ_SEL_FMAP_49_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_49_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_49_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4A_ADDR                 0x534
+#define TRQ_SEL_FMAP_4A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4B_ADDR                 0x538
+#define TRQ_SEL_FMAP_4B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4C_ADDR                 0x53C
+#define TRQ_SEL_FMAP_4C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4D_ADDR                 0x540
+#define TRQ_SEL_FMAP_4D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4E_ADDR                 0x544
+#define TRQ_SEL_FMAP_4E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_4F_ADDR                 0x548
+#define TRQ_SEL_FMAP_4F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_4F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_4F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_50_ADDR                 0x54C
+#define TRQ_SEL_FMAP_50_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_50_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_50_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_51_ADDR                 0x550
+#define TRQ_SEL_FMAP_51_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_51_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_51_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_52_ADDR                 0x554
+#define TRQ_SEL_FMAP_52_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_52_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_52_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_53_ADDR                 0x558
+#define TRQ_SEL_FMAP_53_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_53_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_53_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_54_ADDR                 0x55C
+#define TRQ_SEL_FMAP_54_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_54_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_54_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_55_ADDR                 0x560
+#define TRQ_SEL_FMAP_55_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_55_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_55_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_56_ADDR                 0x564
+#define TRQ_SEL_FMAP_56_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_56_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_56_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_57_ADDR                 0x568
+#define TRQ_SEL_FMAP_57_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_57_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_57_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_58_ADDR                 0x56C
+#define TRQ_SEL_FMAP_58_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_58_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_58_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_59_ADDR                 0x570
+#define TRQ_SEL_FMAP_59_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_59_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_59_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5A_ADDR                 0x574
+#define TRQ_SEL_FMAP_5A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5B_ADDR                 0x578
+#define TRQ_SEL_FMAP_5B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5C_ADDR                 0x57C
+#define TRQ_SEL_FMAP_5C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5D_ADDR                 0x580
+#define TRQ_SEL_FMAP_5D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5E_ADDR                 0x584
+#define TRQ_SEL_FMAP_5E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_5F_ADDR                 0x588
+#define TRQ_SEL_FMAP_5F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_5F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_5F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_60_ADDR                 0x58C
+#define TRQ_SEL_FMAP_60_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_60_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_60_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_61_ADDR                 0x590
+#define TRQ_SEL_FMAP_61_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_61_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_61_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_62_ADDR                 0x594
+#define TRQ_SEL_FMAP_62_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_62_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_62_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_63_ADDR                 0x598
+#define TRQ_SEL_FMAP_63_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_63_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_63_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_64_ADDR                 0x59C
+#define TRQ_SEL_FMAP_64_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_64_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_64_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_65_ADDR                 0x5A0
+#define TRQ_SEL_FMAP_65_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_65_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_65_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_66_ADDR                 0x5A4
+#define TRQ_SEL_FMAP_66_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_66_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_66_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_67_ADDR                 0x5A8
+#define TRQ_SEL_FMAP_67_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_67_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_67_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_68_ADDR                 0x5AC
+#define TRQ_SEL_FMAP_68_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_68_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_68_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_69_ADDR                 0x5B0
+#define TRQ_SEL_FMAP_69_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_69_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_69_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6A_ADDR                 0x5B4
+#define TRQ_SEL_FMAP_6A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6B_ADDR                 0x5B8
+#define TRQ_SEL_FMAP_6B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6C_ADDR                 0x5BC
+#define TRQ_SEL_FMAP_6C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6D_ADDR                 0x5C0
+#define TRQ_SEL_FMAP_6D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6E_ADDR                 0x5C4
+#define TRQ_SEL_FMAP_6E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_6F_ADDR                 0x5C8
+#define TRQ_SEL_FMAP_6F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_6F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_6F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_70_ADDR                 0x5CC
+#define TRQ_SEL_FMAP_70_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_70_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_70_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_71_ADDR                 0x5D0
+#define TRQ_SEL_FMAP_71_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_71_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_71_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_72_ADDR                 0x5D4
+#define TRQ_SEL_FMAP_72_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_72_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_72_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_73_ADDR                 0x5D8
+#define TRQ_SEL_FMAP_73_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_73_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_73_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_74_ADDR                 0x5DC
+#define TRQ_SEL_FMAP_74_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_74_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_74_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_75_ADDR                 0x5E0
+#define TRQ_SEL_FMAP_75_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_75_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_75_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_76_ADDR                 0x5E4
+#define TRQ_SEL_FMAP_76_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_76_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_76_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_77_ADDR                 0x5E8
+#define TRQ_SEL_FMAP_77_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_77_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_77_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_78_ADDR                 0x5EC
+#define TRQ_SEL_FMAP_78_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_78_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_78_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_79_ADDR                 0x5F0
+#define TRQ_SEL_FMAP_79_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_79_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_79_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7A_ADDR                 0x5F4
+#define TRQ_SEL_FMAP_7A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7B_ADDR                 0x5F8
+#define TRQ_SEL_FMAP_7B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7C_ADDR                 0x5FC
+#define TRQ_SEL_FMAP_7C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7D_ADDR                 0x600
+#define TRQ_SEL_FMAP_7D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7E_ADDR                 0x604
+#define TRQ_SEL_FMAP_7E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_7F_ADDR                 0x608
+#define TRQ_SEL_FMAP_7F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_7F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_7F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_80_ADDR                 0x60C
+#define TRQ_SEL_FMAP_80_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_80_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_80_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_81_ADDR                 0x610
+#define TRQ_SEL_FMAP_81_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_81_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_81_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_82_ADDR                 0x614
+#define TRQ_SEL_FMAP_82_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_82_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_82_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_83_ADDR                 0x618
+#define TRQ_SEL_FMAP_83_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_83_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_83_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_84_ADDR                 0x61C
+#define TRQ_SEL_FMAP_84_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_84_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_84_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_85_ADDR                 0x620
+#define TRQ_SEL_FMAP_85_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_85_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_85_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_86_ADDR                 0x624
+#define TRQ_SEL_FMAP_86_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_86_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_86_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_87_ADDR                 0x628
+#define TRQ_SEL_FMAP_87_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_87_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_87_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_88_ADDR                 0x62C
+#define TRQ_SEL_FMAP_88_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_88_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_88_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_89_ADDR                 0x630
+#define TRQ_SEL_FMAP_89_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_89_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_89_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8A_ADDR                 0x634
+#define TRQ_SEL_FMAP_8A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8B_ADDR                 0x638
+#define TRQ_SEL_FMAP_8B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8C_ADDR                 0x63C
+#define TRQ_SEL_FMAP_8C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8D_ADDR                 0x640
+#define TRQ_SEL_FMAP_8D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8E_ADDR                 0x644
+#define TRQ_SEL_FMAP_8E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_8F_ADDR                 0x648
+#define TRQ_SEL_FMAP_8F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_8F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_8F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_90_ADDR                 0x64C
+#define TRQ_SEL_FMAP_90_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_90_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_90_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_91_ADDR                 0x650
+#define TRQ_SEL_FMAP_91_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_91_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_91_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_92_ADDR                 0x654
+#define TRQ_SEL_FMAP_92_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_92_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_92_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_93_ADDR                 0x658
+#define TRQ_SEL_FMAP_93_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_93_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_93_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_94_ADDR                 0x65C
+#define TRQ_SEL_FMAP_94_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_94_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_94_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_95_ADDR                 0x660
+#define TRQ_SEL_FMAP_95_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_95_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_95_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_96_ADDR                 0x664
+#define TRQ_SEL_FMAP_96_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_96_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_96_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_97_ADDR                 0x668
+#define TRQ_SEL_FMAP_97_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_97_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_97_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_98_ADDR                 0x66C
+#define TRQ_SEL_FMAP_98_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_98_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_98_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_99_ADDR                 0x670
+#define TRQ_SEL_FMAP_99_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_99_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_99_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9A_ADDR                 0x674
+#define TRQ_SEL_FMAP_9A_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9A_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9A_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9B_ADDR                 0x678
+#define TRQ_SEL_FMAP_9B_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9B_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9B_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9C_ADDR                 0x67C
+#define TRQ_SEL_FMAP_9C_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9C_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9C_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9D_ADDR                 0x680
+#define TRQ_SEL_FMAP_9D_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9D_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9D_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9E_ADDR                 0x684
+#define TRQ_SEL_FMAP_9E_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9E_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9E_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_9F_ADDR                 0x688
+#define TRQ_SEL_FMAP_9F_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_9F_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_9F_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A0_ADDR                 0x68C
+#define TRQ_SEL_FMAP_A0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A0_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A1_ADDR                 0x690
+#define TRQ_SEL_FMAP_A1_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A1_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A1_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A2_ADDR                 0x694
+#define TRQ_SEL_FMAP_A2_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A2_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A2_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A3_ADDR                 0x698
+#define TRQ_SEL_FMAP_A3_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A3_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A3_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A4_ADDR                 0x69C
+#define TRQ_SEL_FMAP_A4_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A4_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A4_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A5_ADDR                 0x6A0
+#define TRQ_SEL_FMAP_A5_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A5_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A5_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A6_ADDR                 0x6A4
+#define TRQ_SEL_FMAP_A6_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A6_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A6_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A7_ADDR                 0x6A8
+#define TRQ_SEL_FMAP_A7_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A7_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A7_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A8_ADDR                 0x6AC
+#define TRQ_SEL_FMAP_A8_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A8_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A8_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_A9_ADDR                 0x6B0
+#define TRQ_SEL_FMAP_A9_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_A9_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_A9_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AA_ADDR                 0x6B4
+#define TRQ_SEL_FMAP_AA_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AA_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AA_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AB_ADDR                 0x6B8
+#define TRQ_SEL_FMAP_AB_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AB_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AB_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AC_ADDR                 0x6BC
+#define TRQ_SEL_FMAP_AC_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AC_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AC_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AD_ADDR                 0x6D0
+#define TRQ_SEL_FMAP_AD_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AD_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AD_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AE_ADDR                 0x6D4
+#define TRQ_SEL_FMAP_AE_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AE_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AE_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_AF_ADDR                 0x6D8
+#define TRQ_SEL_FMAP_AF_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_AF_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_AF_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B0_ADDR                 0x6DC
+#define TRQ_SEL_FMAP_B0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B0_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B1_ADDR                 0x6E0
+#define TRQ_SEL_FMAP_B1_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B1_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B1_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B2_ADDR                 0x6E4
+#define TRQ_SEL_FMAP_B2_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B2_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B2_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B3_ADDR                 0x6E8
+#define TRQ_SEL_FMAP_B3_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B3_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B3_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B4_ADDR                 0x6EC
+#define TRQ_SEL_FMAP_B4_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B4_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B4_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B5_ADDR                 0x6F0
+#define TRQ_SEL_FMAP_B5_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B5_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B5_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B6_ADDR                 0x6F4
+#define TRQ_SEL_FMAP_B6_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B6_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B6_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B7_ADDR                 0x6F8
+#define TRQ_SEL_FMAP_B7_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B7_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B7_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B8_ADDR                 0x6FC
+#define TRQ_SEL_FMAP_B8_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B8_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B8_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_B9_ADDR                 0x700
+#define TRQ_SEL_FMAP_B9_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_B9_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_B9_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BA_ADDR                 0x704
+#define TRQ_SEL_FMAP_BA_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BA_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BA_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BB_ADDR                 0x708
+#define TRQ_SEL_FMAP_BB_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BB_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BB_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BC_ADDR                 0x70C
+#define TRQ_SEL_FMAP_BC_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BC_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BC_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BD_ADDR                 0x710
+#define TRQ_SEL_FMAP_BD_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BD_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BD_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BE_ADDR                 0x714
+#define TRQ_SEL_FMAP_BE_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BE_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BE_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_BF_ADDR                 0x718
+#define TRQ_SEL_FMAP_BF_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_BF_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_BF_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C0_ADDR                 0x71C
+#define TRQ_SEL_FMAP_C0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C0_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C1_ADDR                 0x720
+#define TRQ_SEL_FMAP_C1_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C1_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C1_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C2_ADDR                 0x734
+#define TRQ_SEL_FMAP_C2_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C2_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C2_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C3_ADDR                 0x748
+#define TRQ_SEL_FMAP_C3_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C3_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C3_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C4_ADDR                 0x74C
+#define TRQ_SEL_FMAP_C4_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C4_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C4_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C5_ADDR                 0x750
+#define TRQ_SEL_FMAP_C5_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C5_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C5_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C6_ADDR                 0x754
+#define TRQ_SEL_FMAP_C6_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C6_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C6_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C7_ADDR                 0x758
+#define TRQ_SEL_FMAP_C7_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C7_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C7_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C8_ADDR                 0x75C
+#define TRQ_SEL_FMAP_C8_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C8_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C8_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_C9_ADDR                 0x760
+#define TRQ_SEL_FMAP_C9_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_C9_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_C9_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CA_ADDR                 0x764
+#define TRQ_SEL_FMAP_CA_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CA_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CA_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CB_ADDR                 0x768
+#define TRQ_SEL_FMAP_CB_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CB_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CB_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CC_ADDR                 0x76C
+#define TRQ_SEL_FMAP_CC_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CC_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CC_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CD_ADDR                 0x770
+#define TRQ_SEL_FMAP_CD_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CD_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CD_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CE_ADDR                 0x774
+#define TRQ_SEL_FMAP_CE_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CE_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CE_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_CF_ADDR                 0x778
+#define TRQ_SEL_FMAP_CF_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_CF_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_CF_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D0_ADDR                 0x77C
+#define TRQ_SEL_FMAP_D0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D0_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D1_ADDR                 0x780
+#define TRQ_SEL_FMAP_D1_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D1_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D1_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D2_ADDR                 0x784
+#define TRQ_SEL_FMAP_D2_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D2_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D2_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D3_ADDR                 0x788
+#define TRQ_SEL_FMAP_D3_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D3_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D3_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D4_ADDR                 0x78C
+#define TRQ_SEL_FMAP_D4_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D4_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D4_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D5_ADDR                 0x790
+#define TRQ_SEL_FMAP_D5_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D5_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D5_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D6_ADDR                 0x794
+#define TRQ_SEL_FMAP_D6_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D6_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D6_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D7_ADDR                 0x798
+#define TRQ_SEL_FMAP_D7_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D7_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D7_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D8_ADDR                 0x79C
+#define TRQ_SEL_FMAP_D8_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D8_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D8_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_D9_ADDR                 0x7A0
+#define TRQ_SEL_FMAP_D9_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_D9_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_D9_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DA_ADDR                 0x7A4
+#define TRQ_SEL_FMAP_DA_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DA_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DA_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DB_ADDR                 0x7A8
+#define TRQ_SEL_FMAP_DB_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DB_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DB_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DC_ADDR                 0x7AC
+#define TRQ_SEL_FMAP_DC_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DC_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DC_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DD_ADDR                 0x7B0
+#define TRQ_SEL_FMAP_DD_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DD_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DD_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DE_ADDR                 0x7B4
+#define TRQ_SEL_FMAP_DE_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DE_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DE_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_DF_ADDR                 0x7B8
+#define TRQ_SEL_FMAP_DF_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_DF_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_DF_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E0_ADDR                 0x7BC
+#define TRQ_SEL_FMAP_E0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E0_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E1_ADDR                 0x7C0
+#define TRQ_SEL_FMAP_E1_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E1_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E1_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E2_ADDR                 0x7C4
+#define TRQ_SEL_FMAP_E2_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E2_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E2_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E3_ADDR                 0x7C8
+#define TRQ_SEL_FMAP_E3_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E3_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E3_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E4_ADDR                 0x7CC
+#define TRQ_SEL_FMAP_E4_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E4_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E4_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E5_ADDR                 0x7D0
+#define TRQ_SEL_FMAP_E5_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E5_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E5_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E6_ADDR                 0x7D4
+#define TRQ_SEL_FMAP_E6_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E6_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E6_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E7_ADDR                 0x7D8
+#define TRQ_SEL_FMAP_E7_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E7_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E7_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E8_ADDR                 0x7DC
+#define TRQ_SEL_FMAP_E8_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E8_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E8_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_E9_ADDR                 0x7E0
+#define TRQ_SEL_FMAP_E9_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_E9_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_E9_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_EA_ADDR                 0x7E4
+#define TRQ_SEL_FMAP_EA_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_EA_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_EA_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_EB_ADDR                 0x7E8
+#define TRQ_SEL_FMAP_EB_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_EB_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_EB_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_EC_ADDR                 0x7EC
+#define TRQ_SEL_FMAP_EC_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_EC_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_EC_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_ED_ADDR                 0x7F0
+#define TRQ_SEL_FMAP_ED_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_ED_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_ED_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_EE_ADDR                 0x7F4
+#define TRQ_SEL_FMAP_EE_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_EE_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_EE_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_EF_ADDR                 0x7F8
+#define TRQ_SEL_FMAP_EF_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_EF_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_EF_QID_BASE_MASK                      GENMASK(10, 0)
+#define QDMA_S80_HARD_TRQ_SEL_FMAP_F0_ADDR                 0x7FC
+#define TRQ_SEL_FMAP_F0_RSVD_1_MASK                        GENMASK(31, 23)
+#define TRQ_SEL_FMAP_F0_QID_MAX_MASK                       GENMASK(22, 11)
+#define TRQ_SEL_FMAP_F0_QID_BASE_MASK                      GENMASK(10, 0)
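+/*
+ * Indirect context programming interface: software loads the context
+ * words into the IND_CTXT_DATA/IND_CTXT registers, then issues a
+ * command carrying the queue id, opcode and context selector through
+ * IND_CTXT_CMD and polls the BUSY bit until the access completes.
+ */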
+#define QDMA_S80_HARD_IND_CTXT_DATA_3_ADDR                 0x804
+#define IND_CTXT_DATA_3_DATA_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT_DATA_2_ADDR                 0x808
+#define IND_CTXT_DATA_2_DATA_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT_DATA_1_ADDR                 0x80C
+#define IND_CTXT_DATA_1_DATA_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT_DATA_0_ADDR                 0x810
+#define IND_CTXT_DATA_0_DATA_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT3_ADDR                       0x814
+#define IND_CTXT3_MASK                                GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT2_ADDR                       0x818
+#define IND_CTXT2_MASK                                GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT1_ADDR                       0x81C
+#define IND_CTXT1_MASK                                GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT0_ADDR                       0x820
+#define IND_CTXT0_MASK                                GENMASK(31, 0)
+#define QDMA_S80_HARD_IND_CTXT_CMD_ADDR                    0x824
+#define IND_CTXT_CMD_RSVD_1_MASK                           GENMASK(31, 18)
+#define IND_CTXT_CMD_QID_MASK                              GENMASK(17, 7)
+#define IND_CTXT_CMD_OP_MASK                               GENMASK(6, 5)
+#define IND_CTXT_CMD_SET_MASK                              GENMASK(4, 1)
+#define IND_CTXT_CMD_BUSY_MASK                             BIT(0)
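+/*
+ * C2H completion timer tick counts; one of these registers is selected
+ * per queue by the timer_idx field of the completion context below.
+ */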
+#define QDMA_S80_HARD_C2H_TIMER_CNT_1_ADDR                 0xA00
+#define C2H_TIMER_CNT_1_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_1_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_2_ADDR                 0xA04
+#define C2H_TIMER_CNT_2_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_2_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_3_ADDR                 0xA08
+#define C2H_TIMER_CNT_3_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_3_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_4_ADDR                 0xA0C
+#define C2H_TIMER_CNT_4_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_4_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_5_ADDR                 0xA10
+#define C2H_TIMER_CNT_5_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_5_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_6_ADDR                 0xA14
+#define C2H_TIMER_CNT_6_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_6_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_7_ADDR                 0xA18
+#define C2H_TIMER_CNT_7_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_7_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_8_ADDR                 0xA1C
+#define C2H_TIMER_CNT_8_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_8_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_9_ADDR                 0xA20
+#define C2H_TIMER_CNT_9_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_9_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_A_ADDR                 0xA24
+#define C2H_TIMER_CNT_A_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_A_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_B_ADDR                 0xA28
+#define C2H_TIMER_CNT_B_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_B_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_C_ADDR                 0xA2C
+#define C2H_TIMER_CNT_C_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_C_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_D_ADDR                 0xA30
+#define C2H_TIMER_CNT_D_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_D_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_E_ADDR                 0xA34
+#define C2H_TIMER_CNT_E_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_E_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_F_ADDR                 0xA38
+#define C2H_TIMER_CNT_F_RSVD_1_MASK                        GENMASK(31, 8)
+#define C2H_TIMER_CNT_F_MASK                              GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_TIMER_CNT_10_ADDR                0xA3C
+#define C2H_TIMER_CNT_10_RSVD_1_MASK                       GENMASK(31, 8)
+#define C2H_TIMER_CNT_10_MASK                             GENMASK(7, 0)
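+/*
+ * C2H completion counter thresholds; selected per queue by the
+ * cnter_idx field of the completion context below.
+ */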
+#define QDMA_S80_HARD_C2H_CNT_TH_1_ADDR                    0xA40
+#define C2H_CNT_TH_1_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_1_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_2_ADDR                    0xA44
+#define C2H_CNT_TH_2_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_2_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_3_ADDR                    0xA48
+#define C2H_CNT_TH_3_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_3_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_4_ADDR                    0xA4C
+#define C2H_CNT_TH_4_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_4_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_5_ADDR                    0xA50
+#define C2H_CNT_TH_5_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_5_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_6_ADDR                    0xA54
+#define C2H_CNT_TH_6_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_6_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_7_ADDR                    0xA58
+#define C2H_CNT_TH_7_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_7_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_8_ADDR                    0xA5C
+#define C2H_CNT_TH_8_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_8_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_9_ADDR                    0xA60
+#define C2H_CNT_TH_9_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_9_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_A_ADDR                    0xA64
+#define C2H_CNT_TH_A_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_A_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_B_ADDR                    0xA68
+#define C2H_CNT_TH_B_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_B_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_C_ADDR                    0xA6C
+#define C2H_CNT_TH_C_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_C_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_D_ADDR                    0xA70
+#define C2H_CNT_TH_D_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_D_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_E_ADDR                    0xA74
+#define C2H_CNT_TH_E_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_E_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_F_ADDR                    0xA78
+#define C2H_CNT_TH_F_RSVD_1_MASK                           GENMASK(31, 8)
+#define C2H_CNT_TH_F_THESHOLD_CNT_MASK                     GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_CNT_TH_10_ADDR                   0xA7C
+#define C2H_CNT_TH_10_RSVD_1_MASK                          GENMASK(31, 8)
+#define C2H_CNT_TH_10_THESHOLD_CNT_MASK                    GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_QID2VEC_MAP_QID_ADDR             0xA80
+#define C2H_QID2VEC_MAP_QID_RSVD_1_MASK                    GENMASK(31, 11)
+#define C2H_QID2VEC_MAP_QID_QID_MASK                       GENMASK(10, 0)
+#define QDMA_S80_HARD_C2H_QID2VEC_MAP_ADDR                 0xA84
+#define C2H_QID2VEC_MAP_RSVD_1_MASK                        GENMASK(31, 19)
+#define C2H_QID2VEC_MAP_H2C_EN_COAL_MASK                   BIT(18)
+#define C2H_QID2VEC_MAP_H2C_VECTOR_MASK                    GENMASK(17, 9)
+#define C2H_QID2VEC_MAP_C2H_EN_COAL_MASK                   BIT(8)
+#define C2H_QID2VEC_MAP_C2H_VECTOR_MASK                    GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_STAT_S_AXIS_C2H_ACCEPTED_ADDR    0xA88
+#define C2H_STAT_S_AXIS_C2H_ACCEPTED_MASK                 GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_S_AXIS_WRB_ACCEPTED_ADDR    0xA8C
+#define C2H_STAT_S_AXIS_WRB_ACCEPTED_MASK                 GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_RSP_PKT_ACCEPTED_ADDR  0xA90
+#define C2H_STAT_DESC_RSP_PKT_ACCEPTED_D_MASK              GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_AXIS_PKG_CMP_ADDR           0xA94
+#define C2H_STAT_AXIS_PKG_CMP_MASK                        GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_RSP_ACCEPTED_ADDR      0xA98
+#define C2H_STAT_DESC_RSP_ACCEPTED_D_MASK                  GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_RSP_CMP_ADDR           0xA9C
+#define C2H_STAT_DESC_RSP_CMP_D_MASK                       GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_WRQ_OUT_ADDR                0xAA0
+#define C2H_STAT_WRQ_OUT_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_WPL_REN_ACCEPTED_ADDR       0xAA4
+#define C2H_STAT_WPL_REN_ACCEPTED_MASK                    GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_TOTAL_WRQ_LEN_ADDR          0xAA8
+#define C2H_STAT_TOTAL_WRQ_LEN_MASK                       GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_TOTAL_WPL_LEN_ADDR          0xAAC
+#define C2H_STAT_TOTAL_WPL_LEN_MASK                       GENMASK(31, 0)
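+/*
+ * C2H stream buffer sizes; the buf_size_idx field of the prefetch
+ * context picks one of these entries for a queue's Rx buffers.
+ */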
+#define QDMA_S80_HARD_C2H_BUF_SZ_0_ADDR                    0xAB0
+#define C2H_BUF_SZ_0_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_1_ADDR                    0xAB4
+#define C2H_BUF_SZ_1_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_2_ADDR                    0xAB8
+#define C2H_BUF_SZ_2_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_3_ADDR                    0xABC
+#define C2H_BUF_SZ_3_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_4_ADDR                    0xAC0
+#define C2H_BUF_SZ_4_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_5_ADDR                    0xAC4
+#define C2H_BUF_SZ_5_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_7_ADDR                    0xAC8
+#define C2H_BUF_SZ_7_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_8_ADDR                    0xACC
+#define C2H_BUF_SZ_8_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_9_ADDR                    0xAD0
+#define C2H_BUF_SZ_9_SIZE_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_10_ADDR                   0xAD4
+#define C2H_BUF_SZ_10_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_11_ADDR                   0xAD8
+#define C2H_BUF_SZ_11_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_12_ADDR                   0xAE0
+#define C2H_BUF_SZ_12_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_13_ADDR                   0xAE4
+#define C2H_BUF_SZ_13_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_14_ADDR                   0xAE8
+#define C2H_BUF_SZ_14_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_BUF_SZ_15_ADDR                   0xAEC
+#define C2H_BUF_SZ_15_SIZE_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_ERR_STAT_ADDR                    0xAF0
+#define C2H_ERR_STAT_RSVD_1_MASK                           GENMASK(31, 16)
+#define C2H_ERR_STAT_WRB_PRTY_ERR_MASK                     BIT(15)
+#define C2H_ERR_STAT_WRB_CIDX_ERR_MASK                     BIT(14)
+#define C2H_ERR_STAT_WRB_QFULL_ERR_MASK                    BIT(13)
+#define C2H_ERR_STAT_WRB_INV_Q_ERR_MASK                    BIT(12)
+#define C2H_ERR_STAT_PORT_ID_BYP_IN_MISMATCH_MASK          BIT(11)
+#define C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK            BIT(10)
+#define C2H_ERR_STAT_ERR_DESC_CNT_MASK                     BIT(9)
+#define C2H_ERR_STAT_RSVD_2_MASK                           BIT(8)
+#define C2H_ERR_STAT_MSI_INT_FAIL_MASK                     BIT(7)
+#define C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK             BIT(6)
+#define C2H_ERR_STAT_RSVD_3_MASK                           BIT(5)
+#define C2H_ERR_STAT_DESC_RSP_ERR_MASK                     BIT(4)
+#define C2H_ERR_STAT_QID_MISMATCH_MASK                     BIT(3)
+#define C2H_ERR_STAT_RSVD_4_MASK                           BIT(2)
+#define C2H_ERR_STAT_LEN_MISMATCH_MASK                     BIT(1)
+#define C2H_ERR_STAT_MTY_MISMATCH_MASK                     BIT(0)
+#define QDMA_S80_HARD_C2H_ERR_MASK_ADDR                    0xAF4
+#define C2H_ERR_EN_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_FATAL_ERR_STAT_ADDR              0xAF8
+#define C2H_FATAL_ERR_STAT_RSVD_1_MASK                     GENMASK(31, 19)
+#define C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK           BIT(18)
+#define C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK          BIT(17)
+#define C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK          BIT(16)
+#define C2H_FATAL_ERR_STAT_TUSER_FIFO_RAM_RDBE_MASK        BIT(15)
+#define C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK     BIT(14)
+#define C2H_FATAL_ERR_STAT_INT_QID2VEC_RAM_RDBE_MASK       BIT(13)
+#define C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK          BIT(12)
+#define C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK     BIT(11)
+#define C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK         BIT(10)
+#define C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK          BIT(9)
+#define C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK           BIT(8)
+#define C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK        GENMASK(7, 4)
+#define C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK               BIT(3)
+#define C2H_FATAL_ERR_STAT_RSVD_2_MASK                     BIT(2)
+#define C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK               BIT(1)
+#define C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK               BIT(0)
+#define QDMA_S80_HARD_C2H_FATAL_ERR_MASK_ADDR              0xAFC
+#define C2H_FATAL_ERR_C2HEN_MASK                 GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_FATAL_ERR_ENABLE_ADDR            0xB00
+#define C2H_FATAL_ERR_ENABLE_RSVD_1_MASK                   GENMASK(31, 2)
+#define C2H_FATAL_ERR_ENABLE_WPL_PAR_INV_MASK             BIT(1)
+#define C2H_FATAL_ERR_ENABLE_WRQ_DIS_MASK                 BIT(0)
+#define QDMA_S80_HARD_GLBL_ERR_INT_ADDR                    0xB04
+#define GLBL_ERR_INT_RSVD_1_MASK                           GENMASK(31, 18)
+#define GLBL_ERR_INT_ARM_MASK                             BIT(17)
+#define GLBL_ERR_INT_EN_COAL_MASK                          BIT(16)
+#define GLBL_ERR_INT_VEC_MASK                              GENMASK(15, 8)
+#define GLBL_ERR_INT_FUNC_MASK                             GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_PFCH_CFG_ADDR                    0xB08
+#define C2H_PFCH_CFG_EVT_QCNT_TH_MASK                      GENMASK(31, 25)
+#define C2H_PFCH_CFG_QCNT_MASK                             GENMASK(24, 16)
+#define C2H_PFCH_CFG_NUM_MASK                              GENMASK(15, 8)
+#define C2H_PFCH_CFG_FL_TH_MASK                            GENMASK(7, 0)
+#define QDMA_S80_HARD_C2H_INT_TIMER_TICK_ADDR              0xB0C
+#define C2H_INT_TIMER_TICK_MASK                           GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_RSP_DROP_ACCEPTED_ADDR 0xB10
+#define C2H_STAT_DESC_RSP_DROP_ACCEPTED_D_MASK             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_RSP_ERR_ACCEPTED_ADDR  0xB14
+#define C2H_STAT_DESC_RSP_ERR_ACCEPTED_D_MASK              GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DESC_REQ_ADDR               0xB18
+#define C2H_STAT_DESC_REQ_MASK                            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_0_ADDR          0xB1C
+#define C2H_STAT_DMA_ENG_0_RSVD_1_MASK                 BIT(31)
+#define C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT_MASK       GENMASK(30, 28)
+#define C2H_STAT_DMA_ENG_0_QID_FIFO_OUT_CNT_MASK       GENMASK(27, 18)
+#define C2H_STAT_DMA_ENG_0_PLD_FIFO_OUT_CNT_MASK       GENMASK(17, 8)
+#define C2H_STAT_DMA_ENG_0_WRQ_FIFO_OUT_CNT_MASK       GENMASK(7, 5)
+#define C2H_STAT_DMA_ENG_0_WRB_SM_CS_MASK              BIT(4)
+#define C2H_STAT_DMA_ENG_0_MAIN_SM_CS_MASK             GENMASK(3, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_1_ADDR          0xB20
+#define C2H_STAT_DMA_ENG_1_RSVD_1_MASK                 BIT(31)
+#define C2H_STAT_DMA_ENG_1_DESC_RSP_LAST_MASK          BIT(30)
+#define C2H_STAT_DMA_ENG_1_PLD_FIFO_IN_CNT_MASK        GENMASK(29, 20)
+#define C2H_STAT_DMA_ENG_1_PLD_FIFO_OUTPUT_CNT_MASK    GENMASK(19, 10)
+#define C2H_STAT_DMA_ENG_1_QID_FIFO_IN_CNT_MASK        GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_2_ADDR          0xB24
+#define C2H_STAT_DMA_ENG_2_RSVD_1_MASK                 GENMASK(31, 30)
+#define C2H_STAT_DMA_ENG_2_WRB_FIFO_IN_CNT_MASK        GENMASK(29, 20)
+#define C2H_STAT_DMA_ENG_2_WRB_FIFO_OUTPUT_CNT_MASK    GENMASK(19, 10)
+#define C2H_STAT_DMA_ENG_2_QID_FIFO_OUTPUT_CNT_MASK    GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_3_ADDR          0xB28
+#define C2H_STAT_DMA_ENG_3_RSVD_1_MASK                 GENMASK(31, 30)
+#define C2H_STAT_DMA_ENG_3_ADDR_4K_SPLIT_CNT_MASK      GENMASK(29, 20)
+#define C2H_STAT_DMA_ENG_3_WRQ_FIFO_IN_CNT_MASK        GENMASK(19, 10)
+#define C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUTPUT_CNT_MASK    GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_DBG_PFCH_ERR_CTXT_ADDR           0xB2C
+#define C2H_PFCH_ERR_CTXT_RSVD_1_MASK                  GENMASK(31, 14)
+#define C2H_PFCH_ERR_CTXT_ERR_STAT_MASK                BIT(13)
+#define C2H_PFCH_ERR_CTXT_CMD_WR_MASK                  BIT(12)
+#define C2H_PFCH_ERR_CTXT_QID_MASK                     GENMASK(11, 1)
+#define C2H_PFCH_ERR_CTXT_DONE_MASK                    BIT(0)
+#define QDMA_S80_HARD_C2H_FIRST_ERR_QID_ADDR               0xB30
+#define C2H_FIRST_ERR_QID_RSVD_1_MASK                      GENMASK(31, 21)
+#define C2H_FIRST_ERR_QID_ERR_STAT_MASK                    GENMASK(20, 16)
+#define C2H_FIRST_ERR_QID_CMD_WR_MASK                      GENMASK(15, 12)
+#define C2H_FIRST_ERR_QID_QID_MASK                         GENMASK(11, 0)
+#define QDMA_S80_HARD_STAT_NUM_WRB_IN_ADDR                 0xB34
+#define STAT_NUM_WRB_IN_RSVD_1_MASK                        GENMASK(31, 16)
+#define STAT_NUM_WRB_IN_WRB_CNT_MASK                       GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_WRB_OUT_ADDR                0xB38
+#define STAT_NUM_WRB_OUT_RSVD_1_MASK                       GENMASK(31, 16)
+#define STAT_NUM_WRB_OUT_WRB_CNT_MASK                      GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_WRB_DRP_ADDR                0xB3C
+#define STAT_NUM_WRB_DRP_RSVD_1_MASK                       GENMASK(31, 16)
+#define STAT_NUM_WRB_DRP_WRB_CNT_MASK                      GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_STAT_DESC_OUT_ADDR          0xB40
+#define STAT_NUM_STAT_DESC_OUT_RSVD_1_MASK                 GENMASK(31, 16)
+#define STAT_NUM_STAT_DESC_OUT_CNT_MASK                    GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_DSC_CRDT_SENT_ADDR          0xB44
+#define STAT_NUM_DSC_CRDT_SENT_RSVD_1_MASK                 GENMASK(31, 16)
+#define STAT_NUM_DSC_CRDT_SENT_CNT_MASK                    GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_FCH_DSC_RCVD_ADDR           0xB48
+#define STAT_NUM_FCH_DSC_RCVD_RSVD_1_MASK                  GENMASK(31, 16)
+#define STAT_NUM_FCH_DSC_RCVD_DSC_CNT_MASK                 GENMASK(15, 0)
+#define QDMA_S80_HARD_STAT_NUM_BYP_DSC_RCVD_ADDR           0xB4C
+#define STAT_NUM_BYP_DSC_RCVD_RSVD_1_MASK                  GENMASK(31, 11)
+#define STAT_NUM_BYP_DSC_RCVD_DSC_CNT_MASK                 GENMASK(10, 0)
+#define QDMA_S80_HARD_C2H_WRB_COAL_CFG_ADDR                0xB50
+#define C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK                   GENMASK(31, 26)
+#define C2H_WRB_COAL_CFG_TICK_VAL_MASK                     GENMASK(25, 14)
+#define C2H_WRB_COAL_CFG_TICK_CNT_MASK                     GENMASK(13, 2)
+#define C2H_WRB_COAL_CFG_SET_GLB_FLUSH_MASK                BIT(1)
+#define C2H_WRB_COAL_CFG_DONE_GLB_FLUSH_MASK               BIT(0)
+#define QDMA_S80_HARD_C2H_INTR_H2C_REQ_ADDR                0xB54
+#define C2H_INTR_H2C_REQ_RSVD_1_MASK                       GENMASK(31, 18)
+#define C2H_INTR_H2C_REQ_CNT_MASK                          GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_MM_REQ_ADDR             0xB58
+#define C2H_INTR_C2H_MM_REQ_RSVD_1_MASK                    GENMASK(31, 18)
+#define C2H_INTR_C2H_MM_REQ_CNT_MASK                       GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_ERR_INT_REQ_ADDR            0xB5C
+#define C2H_INTR_ERR_INT_REQ_RSVD_1_MASK                   GENMASK(31, 18)
+#define C2H_INTR_ERR_INT_REQ_CNT_MASK                      GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_ST_REQ_ADDR             0xB60
+#define C2H_INTR_C2H_ST_REQ_RSVD_1_MASK                    GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_REQ_CNT_MASK                       GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_ADDR 0xB64
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1_MASK       GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT_MASK          GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_ADDR 0xB68
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1_MASK      GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT_MASK         GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_ADDR 0xB6C
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1_MASK   GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT_MASK      GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_ADDR 0xB70
+#define C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1_MASK     GENMASK(31, 18)
+#define C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT_MASK        GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_ST_MSIX_ACK_ADDR        0xB74
+#define C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1_MASK               GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_MSIX_ACK_CNT_MASK                  GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_ST_MSIX_FAIL_ADDR       0xB78
+#define C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1_MASK              GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_MSIX_FAIL_CNT_MASK                 GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_ST_NO_MSIX_ADDR         0xB7C
+#define C2H_INTR_C2H_ST_NO_MSIX_RSVD_1_MASK                GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_NO_MSIX_CNT_MASK                   GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_INTR_C2H_ST_CTXT_INVAL_ADDR      0xB80
+#define C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1_MASK             GENMASK(31, 18)
+#define C2H_INTR_C2H_ST_CTXT_INVAL_CNT_MASK                GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_STAT_WR_CMP_ADDR                 0xB84
+#define C2H_STAT_WR_CMP_RSVD_1_MASK                        GENMASK(31, 18)
+#define C2H_STAT_WR_CMP_CNT_MASK                           GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_4_ADDR          0xB88
+#define C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_VLD_MASK     BIT(31)
+#define C2H_STAT_DMA_ENG_4_WRB_FIFO_IN_RDY_MASK        BIT(30)
+#define C2H_STAT_DMA_ENG_4_TUSER_FIFO_IN_CNT_MASK      GENMASK(29, 20)
+#define C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUTPUT_CNT_MASK  GENMASK(19, 10)
+#define C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_CNT_MASK     GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_STAT_DBG_DMA_ENG_5_ADDR          0xB8C
+#define C2H_STAT_DMA_ENG_5_RSVD_1_MASK                 GENMASK(31, 25)
+#define C2H_STAT_DMA_ENG_5_TUSER_COMB_OUT_VLD_MASK     BIT(24)
+#define C2H_STAT_DMA_ENG_5_TUSER_FIFO_IN_RDY_MASK      BIT(23)
+#define C2H_STAT_DMA_ENG_5_TUSER_COMB_IN_CNT_MASK      GENMASK(22, 13)
+#define C2H_STAT_DMA_ENG_5_TUSE_COMB_OUTPUT_CNT_MASK   GENMASK(12, 3)
+#define C2H_STAT_DMA_ENG_5_TUSER_COMB_CNT_MASK         GENMASK(2, 0)
+#define QDMA_S80_HARD_C2H_DBG_PFCH_QID_ADDR                0xB90
+#define C2H_PFCH_QID_RSVD_1_MASK                       GENMASK(31, 15)
+#define C2H_PFCH_QID_ERR_CTXT_MASK                     BIT(14)
+#define C2H_PFCH_QID_TARGET_MASK                       GENMASK(13, 11)
+#define C2H_PFCH_QID_QID_OR_TAG_MASK                   GENMASK(10, 0)
+#define QDMA_S80_HARD_C2H_DBG_PFCH_ADDR                    0xB94
+#define C2H_PFCH_DATA_MASK                             GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_INT_DBG_ADDR                     0xB98
+#define C2H_INT_RSVD_1_MASK                            GENMASK(31, 8)
+#define C2H_INT_INT_COAL_SM_MASK                       GENMASK(7, 4)
+#define C2H_INT_INT_SM_MASK                            GENMASK(3, 0)
+#define QDMA_S80_HARD_C2H_STAT_IMM_ACCEPTED_ADDR           0xB9C
+#define C2H_STAT_IMM_ACCEPTED_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_STAT_IMM_ACCEPTED_CNT_MASK                     GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_STAT_MARKER_ACCEPTED_ADDR        0xBA0
+#define C2H_STAT_MARKER_ACCEPTED_RSVD_1_MASK               GENMASK(31, 18)
+#define C2H_STAT_MARKER_ACCEPTED_CNT_MASK                  GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_STAT_DISABLE_CMP_ACCEPTED_ADDR   0xBA4
+#define C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1_MASK          GENMASK(31, 18)
+#define C2H_STAT_DISABLE_CMP_ACCEPTED_CNT_MASK             GENMASK(17, 0)
+#define QDMA_S80_HARD_C2H_PLD_FIFO_CRDT_CNT_ADDR           0xBA8
+#define C2H_PLD_FIFO_CRDT_CNT_RSVD_1_MASK                  GENMASK(31, 18)
+#define C2H_PLD_FIFO_CRDT_CNT_CNT_MASK                     GENMASK(17, 0)
+#define QDMA_S80_HARD_H2C_ERR_STAT_ADDR                    0xE00
+#define H2C_ERR_STAT_RSVD_1_MASK                           GENMASK(31, 5)
+#define H2C_ERR_STAT_SBE_MASK                              BIT(4)
+#define H2C_ERR_STAT_DBE_MASK                              BIT(3)
+#define H2C_ERR_STAT_NO_DMA_DS_MASK                        BIT(2)
+#define H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK             BIT(1)
+#define H2C_ERR_STAT_ZERO_LEN_DS_MASK                      BIT(0)
+#define QDMA_S80_HARD_H2C_ERR_MASK_ADDR                    0xE04
+#define H2C_ERR_EN_MASK                          GENMASK(31, 0)
+#define QDMA_S80_HARD_H2C_FIRST_ERR_QID_ADDR               0xE08
+#define H2C_FIRST_ERR_QID_RSVD_1_MASK                      GENMASK(31, 20)
+#define H2C_FIRST_ERR_QID_ERR_TYPE_MASK                    GENMASK(19, 16)
+#define H2C_FIRST_ERR_QID_RSVD_2_MASK                      GENMASK(15, 12)
+#define H2C_FIRST_ERR_QID_QID_MASK                         GENMASK(11, 0)
+#define QDMA_S80_HARD_H2C_DBG_REG0_ADDR                    0xE0C
+#define H2C_REG0_NUM_DSC_RCVD_MASK                     GENMASK(31, 16)
+#define H2C_REG0_NUM_WRB_SENT_MASK                     GENMASK(15, 0)
+#define QDMA_S80_HARD_H2C_DBG_REG1_ADDR                    0xE10
+#define H2C_REG1_NUM_REQ_SENT_MASK                     GENMASK(31, 16)
+#define H2C_REG1_NUM_CMP_SENT_MASK                     GENMASK(15, 0)
+#define QDMA_S80_HARD_H2C_DBG_REG2_ADDR                    0xE14
+#define H2C_REG2_RSVD_1_MASK                           GENMASK(31, 16)
+#define H2C_REG2_NUM_ERR_DSC_RCVD_MASK                 GENMASK(15, 0)
+#define QDMA_S80_HARD_H2C_DBG_REG3_ADDR                    0xE18
+#define H2C_REG3_MASK                              BIT(31)
+#define H2C_REG3_DSCO_FIFO_EMPTY_MASK                  BIT(30)
+#define H2C_REG3_DSCO_FIFO_FULL_MASK                   BIT(29)
+#define H2C_REG3_CUR_RC_STATE_MASK                     GENMASK(28, 26)
+#define H2C_REG3_RDREQ_LINES_MASK                      GENMASK(25, 16)
+#define H2C_REG3_RDATA_LINES_AVAIL_MASK                GENMASK(15, 6)
+#define H2C_REG3_PEND_FIFO_EMPTY_MASK                  BIT(5)
+#define H2C_REG3_PEND_FIFO_FULL_MASK                   BIT(4)
+#define H2C_REG3_CUR_RQ_STATE_MASK                     GENMASK(3, 2)
+#define H2C_REG3_DSCI_FIFO_FULL_MASK                   BIT(1)
+#define H2C_REG3_DSCI_FIFO_EMPTY_MASK                  BIT(0)
+#define QDMA_S80_HARD_H2C_DBG_REG4_ADDR                    0xE1C
+#define H2C_REG4_RDREQ_ADDR_MASK                       GENMASK(31, 0)
+#define QDMA_S80_HARD_H2C_FATAL_ERR_EN_ADDR                0xE20
+#define H2C_FATAL_ERR_EN_RSVD_1_MASK                       GENMASK(31, 1)
+#define H2C_FATAL_ERR_EN_H2C_MASK                          BIT(0)
+#define QDMA_S80_HARD_C2H_CHANNEL_CTL_ADDR                 0x1004
+#define C2H_CHANNEL_CTL_RSVD_1_MASK                        GENMASK(31, 1)
+#define C2H_CHANNEL_CTL_RUN_MASK                           BIT(0)
+#define QDMA_S80_HARD_C2H_CHANNEL_CTL_1_ADDR               0x1008
+#define C2H_CHANNEL_CTL_1_RUN_MASK                         GENMASK(31, 1)
+#define C2H_CHANNEL_CTL_1_RUN_1_MASK                       BIT(0)
+#define QDMA_S80_HARD_C2H_MM_STATUS_ADDR                   0x1040
+#define C2H_MM_STATUS_RSVD_1_MASK                          GENMASK(31, 1)
+#define C2H_MM_STATUS_RUN_MASK                             BIT(0)
+#define QDMA_S80_HARD_C2H_CHANNEL_CMPL_DESC_CNT_ADDR       0x1048
+#define C2H_CHANNEL_CMPL_DESC_CNT_C2H_CO_MASK              GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_MM_ERR_CODE_ENABLE_MASK_ADDR     0x1054
+#define C2H_MM_ERR_CODE_ENABLE_RSVD_1_MASK            BIT(31)
+#define C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM_MASK         BIT(30)
+#define C2H_MM_ERR_CODE_ENABLE_WR_UR_MASK             BIT(29)
+#define C2H_MM_ERR_CODE_ENABLE_WR_FLR_MASK            BIT(28)
+#define C2H_MM_ERR_CODE_ENABLE_RSVD_2_MASK            GENMASK(27, 2)
+#define C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR_MASK        BIT(1)
+#define C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK        BIT(0)
+#define QDMA_S80_HARD_C2H_MM_ERR_CODE_ADDR                 0x1058
+#define C2H_MM_ERR_CODE_RSVD_1_MASK                        GENMASK(31, 18)
+#define C2H_MM_ERR_CODE_VALID_MASK                         BIT(17)
+#define C2H_MM_ERR_CODE_RDWR_MASK                          BIT(16)
+#define C2H_MM_ERR_CODE_MASK                              GENMASK(4, 0)
+#define QDMA_S80_HARD_C2H_MM_ERR_INFO_ADDR                 0x105C
+#define C2H_MM_ERR_INFO_RSVD_1_MASK                        GENMASK(31, 29)
+#define C2H_MM_ERR_INFO_QID_MASK                           GENMASK(28, 17)
+#define C2H_MM_ERR_INFO_DIR_MASK                           BIT(16)
+#define C2H_MM_ERR_INFO_CIDX_MASK                          GENMASK(15, 0)
+#define QDMA_S80_HARD_C2H_MM_PERF_MON_CTL_ADDR             0x10C0
+#define C2H_MM_PERF_MON_CTL_RSVD_1_MASK                    GENMASK(31, 4)
+#define C2H_MM_PERF_MON_CTL_IMM_START_MASK                 BIT(3)
+#define C2H_MM_PERF_MON_CTL_RUN_START_MASK                 BIT(2)
+#define C2H_MM_PERF_MON_CTL_IMM_CLEAR_MASK                 BIT(1)
+#define C2H_MM_PERF_MON_CTL_RUN_CLEAR_MASK                 BIT(0)
+#define QDMA_S80_HARD_C2H_MM_PERF_MON_CYCLE_CNT0_ADDR      0x10C4
+#define C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK            GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_MM_PERF_MON_CYCLE_CNT1_ADDR      0x10C8
+#define C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK             GENMASK(31, 10)
+#define C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK            GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_MM_PERF_MON_DATA_CNT0_ADDR       0x10CC
+#define C2H_MM_PERF_MON_DATA_CNT0_DCNT_MASK                GENMASK(31, 0)
+#define QDMA_S80_HARD_C2H_MM_PERF_MON_DATA_CNT1_ADDR       0x10D0
+#define C2H_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK              GENMASK(31, 10)
+#define C2H_MM_PERF_MON_DATA_CNT1_DCNT_MASK                GENMASK(9, 0)
+#define QDMA_S80_HARD_C2H_MM_DBG_ADDR                      0x10E8
+#define C2H_MM_RSVD_1_MASK                             GENMASK(31, 24)
+#define C2H_MM_RRQ_ENTRIES_MASK                        GENMASK(23, 17)
+#define C2H_MM_DAT_FIFO_SPC_MASK                       GENMASK(16, 7)
+#define C2H_MM_RD_STALL_MASK                           BIT(6)
+#define C2H_MM_RRQ_FIFO_FI_MASK                        BIT(5)
+#define C2H_MM_WR_STALL_MASK                           BIT(4)
+#define C2H_MM_WRQ_FIFO_FI_MASK                        BIT(3)
+#define C2H_MM_WBK_STALL_MASK                          BIT(2)
+#define C2H_MM_DSC_FIFO_EP_MASK                        BIT(1)
+#define C2H_MM_DSC_FIFO_FL_MASK                        BIT(0)
+#define QDMA_S80_HARD_H2C_CHANNEL_CTL_ADDR                 0x1204
+#define H2C_CHANNEL_CTL_RSVD_1_MASK                        GENMASK(31, 1)
+#define H2C_CHANNEL_CTL_RUN_MASK                           BIT(0)
+#define QDMA_S80_HARD_H2C_CHANNEL_CTL_1_ADDR               0x1208
+#define H2C_CHANNEL_CTL_1_RUN_MASK                         BIT(0)
+#define QDMA_S80_HARD_H2C_CHANNEL_CTL_2_ADDR               0x120C
+#define H2C_CHANNEL_CTL_2_RUN_MASK                         BIT(0)
+#define QDMA_S80_HARD_H2C_MM_STATUS_ADDR                   0x1240
+#define H2C_MM_STATUS_RSVD_1_MASK                          GENMASK(31, 1)
+#define H2C_MM_STATUS_RUN_MASK                             BIT(0)
+#define QDMA_S80_HARD_H2C_CHANNEL_CMPL_DESC_CNT_ADDR       0x1248
+#define H2C_CHANNEL_CMPL_DESC_CNT_H2C_CO_MASK              GENMASK(31, 0)
+#define QDMA_S80_HARD_H2C_MM_ERR_CODE_ENABLE_MASK_ADDR     0x1254
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_1_MASK            GENMASK(31, 30)
+#define H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK        BIT(29)
+#define H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR_MASK        BIT(28)
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_2_MASK            GENMASK(27, 23)
+#define H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR_MASK     BIT(22)
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_3_MASK            GENMASK(21, 17)
+#define H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR_MASK BIT(16)
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_4_MASK            GENMASK(15, 9)
+#define H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR_MASK        BIT(8)
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_5_MASK            GENMASK(7, 6)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR_MASK    BIT(5)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA_MASK       BIT(4)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR_MASK   BIT(3)
+#define H2C_MM_ERR_CODE_ENABLE_RD_UR_CA_MASK          BIT(2)
+#define H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR_MASK BIT(1)
+#define H2C_MM_ERR_CODE_ENABLE_RSVD_6_MASK            BIT(0)
+#define QDMA_S80_HARD_H2C_MM_ERR_CODE_ADDR                 0x1258
+#define H2C_MM_ERR_CODE_RSVD_1_MASK                        GENMASK(31, 18)
+#define H2C_MM_ERR_CODE_VALID_MASK                         BIT(17)
+#define H2C_MM_ERR_CODE_RDWR_MASK                          BIT(16)
+#define H2C_MM_ERR_CODE_MASK                              GENMASK(4, 0)
+#define QDMA_S80_HARD_H2C_MM_ERR_INFO_ADDR                 0x125C
+#define H2C_MM_ERR_INFO_RSVD_1_MASK                        GENMASK(31, 29)
+#define H2C_MM_ERR_INFO_QID_MASK                           GENMASK(28, 17)
+#define H2C_MM_ERR_INFO_DIR_MASK                           BIT(16)
+#define H2C_MM_ERR_INFO_CIDX_MASK                          GENMASK(15, 0)
+#define QDMA_S80_HARD_H2C_MM_PERF_MON_CTL_ADDR             0x12C0
+#define H2C_MM_PERF_MON_CTL_RSVD_1_MASK                    GENMASK(31, 4)
+#define H2C_MM_PERF_MON_CTL_IMM_START_MASK                 BIT(3)
+#define H2C_MM_PERF_MON_CTL_RUN_START_MASK                 BIT(2)
+#define H2C_MM_PERF_MON_CTL_IMM_CLEAR_MASK                 BIT(1)
+#define H2C_MM_PERF_MON_CTL_RUN_CLEAR_MASK                 BIT(0)
+#define QDMA_S80_HARD_H2C_MM_PERF_MON_CYCLE_CNT0_ADDR      0x12C4
+#define H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK            GENMASK(31, 0)
+#define QDMA_S80_HARD_H2C_MM_PERF_MON_CYCLE_CNT1_ADDR      0x12C8
+#define H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK             GENMASK(31, 10)
+#define H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK            GENMASK(9, 0)
+#define QDMA_S80_HARD_H2C_MM_PERF_MON_DATA_CNT0_ADDR       0x12CC
+#define H2C_MM_PERF_MON_DATA_CNT0_DCNT_MASK                GENMASK(31, 0)
+#define QDMA_S80_HARD_H2C_MM_PERF_MON_DATA_CNT1_ADDR       0x12D0
+#define H2C_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK              GENMASK(31, 10)
+#define H2C_MM_PERF_MON_DATA_CNT1_DCNT_MASK                GENMASK(9, 0)
+#define QDMA_S80_HARD_H2C_MM_DBG_ADDR                      0x12E8
+#define H2C_MM_RSVD_1_MASK                             GENMASK(31, 24)
+#define H2C_MM_RRQ_ENTRIES_MASK                        GENMASK(23, 17)
+#define H2C_MM_DAT_FIFO_SPC_MASK                       GENMASK(16, 7)
+#define H2C_MM_RD_STALL_MASK                           BIT(6)
+#define H2C_MM_RRQ_FIFO_FI_MASK                        BIT(5)
+#define H2C_MM_WR_STALL_MASK                           BIT(4)
+#define H2C_MM_WRQ_FIFO_FI_MASK                        BIT(3)
+#define H2C_MM_WBK_STALL_MASK                          BIT(2)
+#define H2C_MM_DSC_FIFO_EP_MASK                        BIT(1)
+#define H2C_MM_DSC_FIFO_FL_MASK                        BIT(0)
+#define QDMA_S80_HARD_FUNC_STATUS_REG_ADDR                 0x2400
+#define FUNC_STATUS_REG_RSVD_1_MASK                        GENMASK(31, 12)
+#define FUNC_STATUS_REG_CUR_SRC_FN_MASK                    GENMASK(11, 4)
+#define FUNC_STATUS_REG_ACK_MASK                           BIT(2)
+#define FUNC_STATUS_REG_O_MSG_MASK                         BIT(1)
+#define FUNC_STATUS_REG_I_MSG_MASK                         BIT(0)
+#define QDMA_S80_HARD_FUNC_CMD_REG_ADDR                    0x2404
+#define FUNC_CMD_REG_RSVD_1_MASK                           GENMASK(31, 3)
+#define FUNC_CMD_REG_RSVD_2_MASK                           BIT(2)
+#define FUNC_CMD_REG_MSG_RCV_MASK                          BIT(1)
+#define FUNC_CMD_REG_MSG_SENT_MASK                         BIT(0)
+#define QDMA_S80_HARD_FUNC_INTERRUPT_VECTOR_REG_ADDR       0x2408
+#define FUNC_INTERRUPT_VECTOR_REG_RSVD_1_MASK              GENMASK(31, 5)
+#define FUNC_INTERRUPT_VECTOR_REG_IN_MASK                  GENMASK(4, 0)
+#define QDMA_S80_HARD_TARGET_FUNC_REG_ADDR                 0x240C
+#define TARGET_FUNC_REG_RSVD_1_MASK                        GENMASK(31, 8)
+#define TARGET_FUNC_REG_N_ID_MASK                          GENMASK(7, 0)
+#define QDMA_S80_HARD_FUNC_INTERRUPT_CTL_REG_ADDR          0x2410
+#define FUNC_INTERRUPT_CTL_REG_RSVD_1_MASK                 GENMASK(31, 1)
+#define FUNC_INTERRUPT_CTL_REG_INT_EN_MASK                 BIT(0)
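+/*
+ * Field layouts of the per-queue context structures (software
+ * descriptor, hardware, credit, prefetch, completion and interrupt
+ * contexts) as carried in the indirect context data words above;
+ * these are not MMIO register addresses.
+ */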
+#define SW_IND_CTXT_DATA_W3_DSC_BASE_H_MASK               GENMASK(31, 0)
+#define SW_IND_CTXT_DATA_W2_DSC_BASE_L_MASK               GENMASK(31, 0)
+#define SW_IND_CTXT_DATA_W1_IS_MM_MASK                    BIT(31)
+#define SW_IND_CTXT_DATA_W1_MRKR_DIS_MASK                 BIT(30)
+#define SW_IND_CTXT_DATA_W1_IRQ_REQ_MASK                  BIT(29)
+#define SW_IND_CTXT_DATA_W1_ERR_WB_SENT_MASK              BIT(28)
+#define SW_IND_CTXT_DATA_W1_ERR_MASK                      GENMASK(27, 26)
+#define SW_IND_CTXT_DATA_W1_IRQ_NO_LAST_MASK              BIT(25)
+#define SW_IND_CTXT_DATA_W1_PORT_ID_MASK                  GENMASK(24, 22)
+#define SW_IND_CTXT_DATA_W1_IRQ_EN_MASK                   BIT(21)
+#define SW_IND_CTXT_DATA_W1_WBK_EN_MASK                   BIT(20)
+#define SW_IND_CTXT_DATA_W1_MM_CHN_MASK                   BIT(19)
+#define SW_IND_CTXT_DATA_W1_BYPASS_MASK                   BIT(18)
+#define SW_IND_CTXT_DATA_W1_DSC_SZ_MASK                   GENMASK(17, 16)
+#define SW_IND_CTXT_DATA_W1_RNG_SZ_MASK                   GENMASK(15, 12)
+#define SW_IND_CTXT_DATA_W1_FNC_ID_MASK                   GENMASK(11, 4)
+#define SW_IND_CTXT_DATA_W1_WBI_INTVL_EN_MASK             BIT(3)
+#define SW_IND_CTXT_DATA_W1_WBI_CHK_MASK                  BIT(2)
+#define SW_IND_CTXT_DATA_W1_FCRD_EN_MASK                  BIT(1)
+#define SW_IND_CTXT_DATA_W1_QEN_MASK                      BIT(0)
+#define SW_IND_CTXT_DATA_W0_RSV_MASK                      GENMASK(31, 17)
+#define SW_IND_CTXT_DATA_W0_IRQ_ARM_MASK                  BIT(16)
+#define SW_IND_CTXT_DATA_W0_PIDX_MASK                     GENMASK(15, 0)
+#define HW_IND_CTXT_DATA_W1_RSVD_MASK                     GENMASK(15, 11)
+#define HW_IND_CTXT_DATA_W1_FETCH_PND_MASK                BIT(10)
+#define HW_IND_CTXT_DATA_W1_IDL_STP_B_MASK                BIT(9)
+#define HW_IND_CTXT_DATA_W1_DSC_PND_MASK                  BIT(8)
+#define HW_IND_CTXT_DATA_W1_RSVD_1_MASK                   GENMASK(7, 0)
+#define HW_IND_CTXT_DATA_W0_CRD_USE_MASK                  GENMASK(31, 16)
+#define HW_IND_CTXT_DATA_W0_CIDX_MASK                     GENMASK(15, 0)
+#define CRED_CTXT_DATA_W0_RSVD_1_MASK                     GENMASK(31, 16)
+#define CRED_CTXT_DATA_W0_CREDT_MASK                      GENMASK(15, 0)
+#define PREFETCH_CTXT_DATA_W1_VALID_MASK                  BIT(13)
+#define PREFETCH_CTXT_DATA_W1_SW_CRDT_H_MASK              GENMASK(12, 0)
+#define PREFETCH_CTXT_DATA_W0_SW_CRDT_L_MASK              GENMASK(31, 29)
+#define PREFETCH_CTXT_DATA_W0_PFCH_MASK                   BIT(28)
+#define PREFETCH_CTXT_DATA_W0_PFCH_EN_MASK                BIT(27)
+#define PREFETCH_CTXT_DATA_W0_ERR_MASK                    BIT(26)
+#define PREFETCH_CTXT_DATA_W0_RSVD_MASK                   GENMASK(25, 8)
+#define PREFETCH_CTXT_DATA_W0_PORT_ID_MASK                GENMASK(7, 5)
+#define PREFETCH_CTXT_DATA_W0_BUF_SIZE_IDX_MASK           GENMASK(4, 1)
+#define PREFETCH_CTXT_DATA_W0_BYPASS_MASK                 BIT(0)
+#define CMPL_CTXT_DATA_W3_RSVD_MASK                       GENMASK(31, 30)
+#define CMPL_CTXT_DATA_W3_FULL_UPD_MASK                   BIT(29)
+#define CMPL_CTXT_DATA_W3_TIMER_RUNNING_MASK              BIT(28)
+#define CMPL_CTXT_DATA_W3_USER_TRIG_PEND_MASK             BIT(27)
+#define CMPL_CTXT_DATA_W3_ERR_MASK                        GENMASK(26, 25)
+#define CMPL_CTXT_DATA_W3_VALID_MASK                      BIT(24)
+#define CMPL_CTXT_DATA_W3_CIDX_MASK                       GENMASK(23, 8)
+#define CMPL_CTXT_DATA_W3_PIDX_H_MASK                     GENMASK(7, 0)
+#define CMPL_CTXT_DATA_W2_PIDX_L_MASK                     GENMASK(31, 24)
+#define CMPL_CTXT_DATA_W2_DESC_SIZE_MASK                  GENMASK(23, 22)
+#define CMPL_CTXT_DATA_W2_BADDR_64_H_MASK                 GENMASK(21, 0)
+#define CMPL_CTXT_DATA_W1_BADDR_64_M_MASK                 GENMASK(31, 0)
+#define CMPL_CTXT_DATA_W0_BADDR_64_L_MASK                 GENMASK(31, 28)
+#define CMPL_CTXT_DATA_W0_QSIZE_IDX_MASK                  GENMASK(27, 24)
+#define CMPL_CTXT_DATA_W0_COLOR_MASK                      BIT(23)
+#define CMPL_CTXT_DATA_W0_INT_ST_MASK                     GENMASK(22, 21)
+#define CMPL_CTXT_DATA_W0_TIMER_IDX_MASK                  GENMASK(20, 17)
+#define CMPL_CTXT_DATA_W0_CNTER_IDX_MASK                  GENMASK(16, 13)
+#define CMPL_CTXT_DATA_W0_FNC_ID_MASK                     GENMASK(12, 5)
+#define CMPL_CTXT_DATA_W0_TRIG_MODE_MASK                  GENMASK(4, 2)
+#define CMPL_CTXT_DATA_W0_EN_INT_MASK                     BIT(1)
+#define CMPL_CTXT_DATA_W0_EN_STAT_DESC_MASK               BIT(0)
+#define INTR_CTXT_DATA_W2_PIDX_MASK                       GENMASK(11, 0)
+#define INTR_CTXT_DATA_W1_PAGE_SIZE_MASK                  GENMASK(31, 29)
+#define INTR_CTXT_DATA_W1_BADDR_4K_H_MASK                 GENMASK(28, 0)
+#define INTR_CTXT_DATA_W0_BADDR_4K_L_MASK                 GENMASK(31, 9)
+#define INTR_CTXT_DATA_W0_COLOR_MASK                      BIT(8)
+#define INTR_CTXT_DATA_W0_INT_ST_MASK                     BIT(7)
+#define INTR_CTXT_DATA_W0_RSVD_MASK                       BIT(6)
+#define INTR_CTXT_DATA_W0_VEC_MASK                        GENMASK(5, 1)
+#define INTR_CTXT_DATA_W0_VALID_MASK                      BIT(0)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
new file mode 100644
index 0000000000..749fd8a8cf
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
@@ -0,0 +1,7999 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_s80_hard_reg.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "qdma_s80_hard_reg_dump.tmh"
+#endif
+
+
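+/*
+ * Register field tables for register dump support: each entry pairs a
+ * field name with its bit mask from qdma_s80_hard_reg.h so that a raw
+ * register value can be decoded field by field for debug output.
+ */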
+static struct regfield_info
+	cfg_blk_identifier_field_info[] = {
+	{"CFG_BLK_IDENTIFIER",
+		CFG_BLK_IDENTIFIER_MASK},
+	{"CFG_BLK_IDENTIFIER_1",
+		CFG_BLK_IDENTIFIER_1_MASK},
+	{"CFG_BLK_IDENTIFIER_RSVD_1",
+		CFG_BLK_IDENTIFIER_RSVD_1_MASK},
+	{"CFG_BLK_IDENTIFIER_VERSION",
+		CFG_BLK_IDENTIFIER_VERSION_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_busdev_field_info[] = {
+	{"CFG_BLK_BUSDEV_BDF",
+		CFG_BLK_BUSDEV_BDF_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_pcie_max_pld_size_field_info[] = {
+	{"CFG_BLK_PCIE_MAX_PLD_SIZE",
+		CFG_BLK_PCIE_MAX_PLD_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_pcie_max_read_req_size_field_info[] = {
+	{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE",
+		CFG_BLK_PCIE_MAX_READ_REQ_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_system_id_field_info[] = {
+	{"CFG_BLK_SYSTEM_ID",
+		CFG_BLK_SYSTEM_ID_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_msi_enable_field_info[] = {
+	{"CFG_BLK_MSI_ENABLE_3",
+		CFG_BLK_MSI_ENABLE_3_MASK},
+	{"CFG_BLK_MSI_ENABLE_MSIX3",
+		CFG_BLK_MSI_ENABLE_MSIX3_MASK},
+	{"CFG_BLK_MSI_ENABLE_2",
+		CFG_BLK_MSI_ENABLE_2_MASK},
+	{"CFG_BLK_MSI_ENABLE_MSIX2",
+		CFG_BLK_MSI_ENABLE_MSIX2_MASK},
+	{"CFG_BLK_MSI_ENABLE_1",
+		CFG_BLK_MSI_ENABLE_1_MASK},
+	{"CFG_BLK_MSI_ENABLE_MSIX1",
+		CFG_BLK_MSI_ENABLE_MSIX1_MASK},
+	{"CFG_BLK_MSI_ENABLE_0",
+		CFG_BLK_MSI_ENABLE_0_MASK},
+	{"CFG_BLK_MSI_ENABLE_MSIX0",
+		CFG_BLK_MSI_ENABLE_MSIX0_MASK},
+};
+
+
+static struct regfield_info
+	cfg_pcie_data_width_field_info[] = {
+	{"CFG_PCIE_DATA_WIDTH_DATAPATH",
+		CFG_PCIE_DATA_WIDTH_DATAPATH_MASK},
+};
+
+
+static struct regfield_info
+	cfg_pcie_ctl_field_info[] = {
+	{"CFG_PCIE_CTL_RRQ_DISABLE",
+		CFG_PCIE_CTL_RRQ_DISABLE_MASK},
+	{"CFG_PCIE_CTL_RELAXED_ORDERING",
+		CFG_PCIE_CTL_RELAXED_ORDERING_MASK},
+};
+
+
+static struct regfield_info
+	cfg_axi_user_max_pld_size_field_info[] = {
+	{"CFG_AXI_USER_MAX_PLD_SIZE_ISSUED",
+		CFG_AXI_USER_MAX_PLD_SIZE_ISSUED_MASK},
+	{"CFG_AXI_USER_MAX_PLD_SIZE_PROG",
+		CFG_AXI_USER_MAX_PLD_SIZE_PROG_MASK},
+};
+
+
+static struct regfield_info
+	cfg_axi_user_max_read_req_size_field_info[] = {
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_USISSUED_MASK},
+	{"CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG",
+		CFG_AXI_USER_MAX_READ_REQ_SIZE_USPROG_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_misc_ctl_field_info[] = {
+	{"CFG_BLK_MISC_CTL_NUM_TAG",
+		CFG_BLK_MISC_CTL_NUM_TAG_MASK},
+	{"CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER",
+		CFG_BLK_MISC_CTL_RQ_METERING_MULTIPLIER_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_0_field_info[] = {
+	{"CFG_BLK_SCRATCH_0",
+		CFG_BLK_SCRATCH_0_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_1_field_info[] = {
+	{"CFG_BLK_SCRATCH_1",
+		CFG_BLK_SCRATCH_1_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_2_field_info[] = {
+	{"CFG_BLK_SCRATCH_2",
+		CFG_BLK_SCRATCH_2_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_3_field_info[] = {
+	{"CFG_BLK_SCRATCH_3",
+		CFG_BLK_SCRATCH_3_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_4_field_info[] = {
+	{"CFG_BLK_SCRATCH_4",
+		CFG_BLK_SCRATCH_4_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_5_field_info[] = {
+	{"CFG_BLK_SCRATCH_5",
+		CFG_BLK_SCRATCH_5_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_6_field_info[] = {
+	{"CFG_BLK_SCRATCH_6",
+		CFG_BLK_SCRATCH_6_MASK},
+};
+
+
+static struct regfield_info
+	cfg_blk_scratch_7_field_info[] = {
+	{"CFG_BLK_SCRATCH_7",
+		CFG_BLK_SCRATCH_7_MASK},
+};
+
+
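+/*
+ * RAM_SBE_* and RAM_DBE_* below carry the ECC mask/status fields for the
+ * internal RAMs: SBE reports single-bit (correctable) errors and DBE reports
+ * double-bit (uncorrectable) errors, one status bit per RAM instance.
+ */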
+static struct regfield_info
+	ram_sbe_msk_a_field_info[] = {
+	{"RAM_SBE_MSK_A",
+		RAM_SBE_MSK_A_MASK},
+};
+
+
+static struct regfield_info
+	ram_sbe_sts_a_field_info[] = {
+	{"RAM_SBE_STS_A_RSVD_1",
+		RAM_SBE_STS_A_RSVD_1_MASK},
+	{"RAM_SBE_STS_A_PFCH_LL_RAM",
+		RAM_SBE_STS_A_PFCH_LL_RAM_MASK},
+	{"RAM_SBE_STS_A_WRB_CTXT_RAM",
+		RAM_SBE_STS_A_WRB_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_PFCH_CTXT_RAM",
+		RAM_SBE_STS_A_PFCH_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_DESC_REQ_FIFO_RAM",
+		RAM_SBE_STS_A_DESC_REQ_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_INT_CTXT_RAM",
+		RAM_SBE_STS_A_INT_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_INT_QID2VEC_RAM",
+		RAM_SBE_STS_A_INT_QID2VEC_RAM_MASK},
+	{"RAM_SBE_STS_A_WRB_COAL_DATA_RAM",
+		RAM_SBE_STS_A_WRB_COAL_DATA_RAM_MASK},
+	{"RAM_SBE_STS_A_TUSER_FIFO_RAM",
+		RAM_SBE_STS_A_TUSER_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_QID_FIFO_RAM",
+		RAM_SBE_STS_A_QID_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_PLD_FIFO_RAM",
+		RAM_SBE_STS_A_PLD_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_TIMER_FIFO_RAM",
+		RAM_SBE_STS_A_TIMER_FIFO_RAM_MASK},
+	{"RAM_SBE_STS_A_PASID_CTXT_RAM",
+		RAM_SBE_STS_A_PASID_CTXT_RAM_MASK},
+	{"RAM_SBE_STS_A_DSC_CPLD",
+		RAM_SBE_STS_A_DSC_CPLD_MASK},
+	{"RAM_SBE_STS_A_DSC_CPLI",
+		RAM_SBE_STS_A_DSC_CPLI_MASK},
+	{"RAM_SBE_STS_A_DSC_SW_CTXT",
+		RAM_SBE_STS_A_DSC_SW_CTXT_MASK},
+	{"RAM_SBE_STS_A_DSC_CRD_RCV",
+		RAM_SBE_STS_A_DSC_CRD_RCV_MASK},
+	{"RAM_SBE_STS_A_DSC_HW_CTXT",
+		RAM_SBE_STS_A_DSC_HW_CTXT_MASK},
+	{"RAM_SBE_STS_A_FUNC_MAP",
+		RAM_SBE_STS_A_FUNC_MAP_MASK},
+	{"RAM_SBE_STS_A_C2H_WR_BRG_DAT",
+		RAM_SBE_STS_A_C2H_WR_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_C2H_RD_BRG_DAT",
+		RAM_SBE_STS_A_C2H_RD_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_H2C_WR_BRG_DAT",
+		RAM_SBE_STS_A_H2C_WR_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_H2C_RD_BRG_DAT",
+		RAM_SBE_STS_A_H2C_RD_BRG_DAT_MASK},
+	{"RAM_SBE_STS_A_RSVD_2",
+		RAM_SBE_STS_A_RSVD_2_MASK},
+	{"RAM_SBE_STS_A_MI_C2H0_DAT",
+		RAM_SBE_STS_A_MI_C2H0_DAT_MASK},
+	{"RAM_SBE_STS_A_RSVD_3",
+		RAM_SBE_STS_A_RSVD_3_MASK},
+	{"RAM_SBE_STS_A_MI_H2C0_DAT",
+		RAM_SBE_STS_A_MI_H2C0_DAT_MASK},
+};
+
+
+static struct regfield_info
+	ram_dbe_msk_a_field_info[] = {
+	{"RAM_DBE_MSK_A",
+		RAM_DBE_MSK_A_MASK},
+};
+
+
+static struct regfield_info
+	ram_dbe_sts_a_field_info[] = {
+	{"RAM_DBE_STS_A_RSVD_1",
+		RAM_DBE_STS_A_RSVD_1_MASK},
+	{"RAM_DBE_STS_A_PFCH_LL_RAM",
+		RAM_DBE_STS_A_PFCH_LL_RAM_MASK},
+	{"RAM_DBE_STS_A_WRB_CTXT_RAM",
+		RAM_DBE_STS_A_WRB_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_PFCH_CTXT_RAM",
+		RAM_DBE_STS_A_PFCH_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_DESC_REQ_FIFO_RAM",
+		RAM_DBE_STS_A_DESC_REQ_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_INT_CTXT_RAM",
+		RAM_DBE_STS_A_INT_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_INT_QID2VEC_RAM",
+		RAM_DBE_STS_A_INT_QID2VEC_RAM_MASK},
+	{"RAM_DBE_STS_A_WRB_COAL_DATA_RAM",
+		RAM_DBE_STS_A_WRB_COAL_DATA_RAM_MASK},
+	{"RAM_DBE_STS_A_TUSER_FIFO_RAM",
+		RAM_DBE_STS_A_TUSER_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_QID_FIFO_RAM",
+		RAM_DBE_STS_A_QID_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_PLD_FIFO_RAM",
+		RAM_DBE_STS_A_PLD_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_TIMER_FIFO_RAM",
+		RAM_DBE_STS_A_TIMER_FIFO_RAM_MASK},
+	{"RAM_DBE_STS_A_PASID_CTXT_RAM",
+		RAM_DBE_STS_A_PASID_CTXT_RAM_MASK},
+	{"RAM_DBE_STS_A_DSC_CPLD",
+		RAM_DBE_STS_A_DSC_CPLD_MASK},
+	{"RAM_DBE_STS_A_DSC_CPLI",
+		RAM_DBE_STS_A_DSC_CPLI_MASK},
+	{"RAM_DBE_STS_A_DSC_SW_CTXT",
+		RAM_DBE_STS_A_DSC_SW_CTXT_MASK},
+	{"RAM_DBE_STS_A_DSC_CRD_RCV",
+		RAM_DBE_STS_A_DSC_CRD_RCV_MASK},
+	{"RAM_DBE_STS_A_DSC_HW_CTXT",
+		RAM_DBE_STS_A_DSC_HW_CTXT_MASK},
+	{"RAM_DBE_STS_A_FUNC_MAP",
+		RAM_DBE_STS_A_FUNC_MAP_MASK},
+	{"RAM_DBE_STS_A_C2H_WR_BRG_DAT",
+		RAM_DBE_STS_A_C2H_WR_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_C2H_RD_BRG_DAT",
+		RAM_DBE_STS_A_C2H_RD_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_H2C_WR_BRG_DAT",
+		RAM_DBE_STS_A_H2C_WR_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_H2C_RD_BRG_DAT",
+		RAM_DBE_STS_A_H2C_RD_BRG_DAT_MASK},
+	{"RAM_DBE_STS_A_RSVD_2",
+		RAM_DBE_STS_A_RSVD_2_MASK},
+	{"RAM_DBE_STS_A_MI_C2H0_DAT",
+		RAM_DBE_STS_A_MI_C2H0_DAT_MASK},
+	{"RAM_DBE_STS_A_RSVD_3",
+		RAM_DBE_STS_A_RSVD_3_MASK},
+	{"RAM_DBE_STS_A_MI_H2C0_DAT",
+		RAM_DBE_STS_A_MI_H2C0_DAT_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_identifier_field_info[] = {
+	{"GLBL2_IDENTIFIER",
+		GLBL2_IDENTIFIER_MASK},
+	{"GLBL2_IDENTIFIER_VERSION",
+		GLBL2_IDENTIFIER_VERSION_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_pf_barlite_int_field_info[] = {
+	{"GLBL2_PF_BARLITE_INT_PF3_BAR_MAP",
+		GLBL2_PF_BARLITE_INT_PF3_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_INT_PF2_BAR_MAP",
+		GLBL2_PF_BARLITE_INT_PF2_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_INT_PF1_BAR_MAP",
+		GLBL2_PF_BARLITE_INT_PF1_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_INT_PF0_BAR_MAP",
+		GLBL2_PF_BARLITE_INT_PF0_BAR_MAP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_pf_vf_barlite_int_field_info[] = {
+	{"GLBL2_PF_VF_BARLITE_INT_PF3_MAP",
+		GLBL2_PF_VF_BARLITE_INT_PF3_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_INT_PF2_MAP",
+		GLBL2_PF_VF_BARLITE_INT_PF2_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_INT_PF1_MAP",
+		GLBL2_PF_VF_BARLITE_INT_PF1_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_INT_PF0_MAP",
+		GLBL2_PF_VF_BARLITE_INT_PF0_MAP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_pf_barlite_ext_field_info[] = {
+	{"GLBL2_PF_BARLITE_EXT_PF3_BAR_MAP",
+		GLBL2_PF_BARLITE_EXT_PF3_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_EXT_PF2_BAR_MAP",
+		GLBL2_PF_BARLITE_EXT_PF2_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_EXT_PF1_BAR_MAP",
+		GLBL2_PF_BARLITE_EXT_PF1_BAR_MAP_MASK},
+	{"GLBL2_PF_BARLITE_EXT_PF0_BAR_MAP",
+		GLBL2_PF_BARLITE_EXT_PF0_BAR_MAP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_pf_vf_barlite_ext_field_info[] = {
+	{"GLBL2_PF_VF_BARLITE_EXT_PF3_MAP",
+		GLBL2_PF_VF_BARLITE_EXT_PF3_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_EXT_PF2_MAP",
+		GLBL2_PF_VF_BARLITE_EXT_PF2_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_EXT_PF1_MAP",
+		GLBL2_PF_VF_BARLITE_EXT_PF1_MAP_MASK},
+	{"GLBL2_PF_VF_BARLITE_EXT_PF0_MAP",
+		GLBL2_PF_VF_BARLITE_EXT_PF0_MAP_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_inst_field_info[] = {
+	{"GLBL2_CHANNEL_INST_RSVD_1",
+		GLBL2_CHANNEL_INST_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_INST_C2H_ST",
+		GLBL2_CHANNEL_INST_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_INST_H2C_ST",
+		GLBL2_CHANNEL_INST_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_INST_RSVD_2",
+		GLBL2_CHANNEL_INST_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_INST_C2H_ENG",
+		GLBL2_CHANNEL_INST_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_INST_RSVD_3",
+		GLBL2_CHANNEL_INST_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_INST_H2C_ENG",
+		GLBL2_CHANNEL_INST_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_mdma_field_info[] = {
+	{"GLBL2_CHANNEL_MDMA_RSVD_1",
+		GLBL2_CHANNEL_MDMA_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_MDMA_C2H_ST",
+		GLBL2_CHANNEL_MDMA_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_MDMA_H2C_ST",
+		GLBL2_CHANNEL_MDMA_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_MDMA_RSVD_2",
+		GLBL2_CHANNEL_MDMA_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_MDMA_C2H_ENG",
+		GLBL2_CHANNEL_MDMA_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_MDMA_RSVD_3",
+		GLBL2_CHANNEL_MDMA_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_MDMA_H2C_ENG",
+		GLBL2_CHANNEL_MDMA_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_strm_field_info[] = {
+	{"GLBL2_CHANNEL_STRM_RSVD_1",
+		GLBL2_CHANNEL_STRM_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_STRM_C2H_ST",
+		GLBL2_CHANNEL_STRM_C2H_ST_MASK},
+	{"GLBL2_CHANNEL_STRM_H2C_ST",
+		GLBL2_CHANNEL_STRM_H2C_ST_MASK},
+	{"GLBL2_CHANNEL_STRM_RSVD_2",
+		GLBL2_CHANNEL_STRM_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_STRM_C2H_ENG",
+		GLBL2_CHANNEL_STRM_C2H_ENG_MASK},
+	{"GLBL2_CHANNEL_STRM_RSVD_3",
+		GLBL2_CHANNEL_STRM_RSVD_3_MASK},
+	{"GLBL2_CHANNEL_STRM_H2C_ENG",
+		GLBL2_CHANNEL_STRM_H2C_ENG_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_cap_field_info[] = {
+	{"GLBL2_CHANNEL_CAP_RSVD_1",
+		GLBL2_CHANNEL_CAP_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_CAP_MULTIQ_MAX",
+		GLBL2_CHANNEL_CAP_MULTIQ_MAX_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_pasid_cap_field_info[] = {
+	{"GLBL2_CHANNEL_PASID_CAP_RSVD_1",
+		GLBL2_CHANNEL_PASID_CAP_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_BRIDGEOFFSET",
+		GLBL2_CHANNEL_PASID_CAP_BRIDGEOFFSET_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_RSVD_2",
+		GLBL2_CHANNEL_PASID_CAP_RSVD_2_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_BRIDGEEN",
+		GLBL2_CHANNEL_PASID_CAP_BRIDGEEN_MASK},
+	{"GLBL2_CHANNEL_PASID_CAP_DMAEN",
+		GLBL2_CHANNEL_PASID_CAP_DMAEN_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_channel_func_ret_field_info[] = {
+	{"GLBL2_CHANNEL_FUNC_RET_RSVD_1",
+		GLBL2_CHANNEL_FUNC_RET_RSVD_1_MASK},
+	{"GLBL2_CHANNEL_FUNC_RET_FUNC",
+		GLBL2_CHANNEL_FUNC_RET_FUNC_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_system_id_field_info[] = {
+	{"GLBL2_SYSTEM_ID_RSVD_1",
+		GLBL2_SYSTEM_ID_RSVD_1_MASK},
+	{"GLBL2_SYSTEM_ID",
+		GLBL2_SYSTEM_ID_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_misc_cap_field_info[] = {
+	{"GLBL2_MISC_CAP_RSVD_1",
+		GLBL2_MISC_CAP_RSVD_1_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_pcie_rq0_field_info[] = {
+	{"GLBL2_PCIE_RQ0_NPH_AVL",
+		GLBL2_PCIE_RQ0_NPH_AVL_MASK},
+	{"GLBL2_PCIE_RQ0_RCB_AVL",
+		GLBL2_PCIE_RQ0_RCB_AVL_MASK},
+	{"GLBL2_PCIE_RQ0_SLV_RD_CREDS",
+		GLBL2_PCIE_RQ0_SLV_RD_CREDS_MASK},
+	{"GLBL2_PCIE_RQ0_TAG_EP",
+		GLBL2_PCIE_RQ0_TAG_EP_MASK},
+	{"GLBL2_PCIE_RQ0_TAG_FL",
+		GLBL2_PCIE_RQ0_TAG_FL_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_pcie_rq1_field_info[] = {
+	{"GLBL2_PCIE_RQ1_RSVD_1",
+		GLBL2_PCIE_RQ1_RSVD_1_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_REQ",
+		GLBL2_PCIE_RQ1_WTLP_REQ_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL",
+		GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_FL_MASK},
+	{"GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP",
+		GLBL2_PCIE_RQ1_WTLP_HEADER_FIFO_EP_MASK},
+	{"GLBL2_PCIE_RQ1_RQ_FIFO_EP",
+		GLBL2_PCIE_RQ1_RQ_FIFO_EP_MASK},
+	{"GLBL2_PCIE_RQ1_RQ_FIFO_FL",
+		GLBL2_PCIE_RQ1_RQ_FIFO_FL_MASK},
+	{"GLBL2_PCIE_RQ1_TLPSM",
+		GLBL2_PCIE_RQ1_TLPSM_MASK},
+	{"GLBL2_PCIE_RQ1_TLPSM512",
+		GLBL2_PCIE_RQ1_TLPSM512_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_RCB_OK",
+		GLBL2_PCIE_RQ1_RREQ0_RCB_OK_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_SLV",
+		GLBL2_PCIE_RQ1_RREQ0_SLV_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ0_VLD",
+		GLBL2_PCIE_RQ1_RREQ0_VLD_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_RCB_OK",
+		GLBL2_PCIE_RQ1_RREQ1_RCB_OK_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_SLV",
+		GLBL2_PCIE_RQ1_RREQ1_SLV_MASK},
+	{"GLBL2_PCIE_RQ1_RREQ1_VLD",
+		GLBL2_PCIE_RQ1_RREQ1_VLD_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_wr0_field_info[] = {
+	{"GLBL2_AXIMM_WR0_RSVD_1",
+		GLBL2_AXIMM_WR0_RSVD_1_MASK},
+	{"GLBL2_AXIMM_WR0_WR_REQ",
+		GLBL2_AXIMM_WR0_WR_REQ_MASK},
+	{"GLBL2_AXIMM_WR0_WR_CHN",
+		GLBL2_AXIMM_WR0_WR_CHN_MASK},
+	{"GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP",
+		GLBL2_AXIMM_WR0_WTLP_DATA_FIFO_EP_MASK},
+	{"GLBL2_AXIMM_WR0_WPL_FIFO_EP",
+		GLBL2_AXIMM_WR0_WPL_FIFO_EP_MASK},
+	{"GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN",
+		GLBL2_AXIMM_WR0_BRSP_CLAIM_CHN_MASK},
+	{"GLBL2_AXIMM_WR0_WRREQ_CNT",
+		GLBL2_AXIMM_WR0_WRREQ_CNT_MASK},
+	{"GLBL2_AXIMM_WR0_BID",
+		GLBL2_AXIMM_WR0_BID_MASK},
+	{"GLBL2_AXIMM_WR0_BVALID",
+		GLBL2_AXIMM_WR0_BVALID_MASK},
+	{"GLBL2_AXIMM_WR0_BREADY",
+		GLBL2_AXIMM_WR0_BREADY_MASK},
+	{"GLBL2_AXIMM_WR0_WVALID",
+		GLBL2_AXIMM_WR0_WVALID_MASK},
+	{"GLBL2_AXIMM_WR0_WREADY",
+		GLBL2_AXIMM_WR0_WREADY_MASK},
+	{"GLBL2_AXIMM_WR0_AWID",
+		GLBL2_AXIMM_WR0_AWID_MASK},
+	{"GLBL2_AXIMM_WR0_AWVALID",
+		GLBL2_AXIMM_WR0_AWVALID_MASK},
+	{"GLBL2_AXIMM_WR0_AWREADY",
+		GLBL2_AXIMM_WR0_AWREADY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_wr1_field_info[] = {
+	{"GLBL2_AXIMM_WR1_RSVD_1",
+		GLBL2_AXIMM_WR1_RSVD_1_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT4",
+		GLBL2_AXIMM_WR1_BRSP_CNT4_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT3",
+		GLBL2_AXIMM_WR1_BRSP_CNT3_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT2",
+		GLBL2_AXIMM_WR1_BRSP_CNT2_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT1",
+		GLBL2_AXIMM_WR1_BRSP_CNT1_MASK},
+	{"GLBL2_AXIMM_WR1_BRSP_CNT0",
+		GLBL2_AXIMM_WR1_BRSP_CNT0_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_rd0_field_info[] = {
+	{"GLBL2_AXIMM_RD0_RSVD_1",
+		GLBL2_AXIMM_RD0_RSVD_1_MASK},
+	{"GLBL2_AXIMM_RD0_PND_CNT",
+		GLBL2_AXIMM_RD0_PND_CNT_MASK},
+	{"GLBL2_AXIMM_RD0_RD_CHNL",
+		GLBL2_AXIMM_RD0_RD_CHNL_MASK},
+	{"GLBL2_AXIMM_RD0_RD_REQ",
+		GLBL2_AXIMM_RD0_RD_REQ_MASK},
+	{"GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL",
+		GLBL2_AXIMM_RD0_RRSP_CLAIM_CHNL_MASK},
+	{"GLBL2_AXIMM_RD0_RID",
+		GLBL2_AXIMM_RD0_RID_MASK},
+	{"GLBL2_AXIMM_RD0_RVALID",
+		GLBL2_AXIMM_RD0_RVALID_MASK},
+	{"GLBL2_AXIMM_RD0_RREADY",
+		GLBL2_AXIMM_RD0_RREADY_MASK},
+	{"GLBL2_AXIMM_RD0_ARID",
+		GLBL2_AXIMM_RD0_ARID_MASK},
+	{"GLBL2_AXIMM_RD0_ARVALID",
+		GLBL2_AXIMM_RD0_ARVALID_MASK},
+	{"GLBL2_AXIMM_RD0_ARREADY",
+		GLBL2_AXIMM_RD0_ARREADY_MASK},
+};
+
+
+static struct regfield_info
+	glbl2_dbg_aximm_rd1_field_info[] = {
+	{"GLBL2_AXIMM_RD1_RSVD_1",
+		GLBL2_AXIMM_RD1_RSVD_1_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT4",
+		GLBL2_AXIMM_RD1_RRSP_CNT4_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT3",
+		GLBL2_AXIMM_RD1_RRSP_CNT3_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT2",
+		GLBL2_AXIMM_RD1_RRSP_CNT2_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT1",
+		GLBL2_AXIMM_RD1_RRSP_CNT1_MASK},
+	{"GLBL2_AXIMM_RD1_RRSP_CNT0",
+		GLBL2_AXIMM_RD1_RRSP_CNT0_MASK},
+};
+
+
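+/*
+ * GLBL_RNG_SZ_1 through GLBL_RNG_SZ_10 (sixteen entries, indices 0x1-0x10)
+ * form the global ring size table; each queue selects one of these entries
+ * for its descriptor ring size.
+ */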
+static struct regfield_info
+	glbl_rng_sz_1_field_info[] = {
+	{"GLBL_RNG_SZ_1_RSVD_1",
+		GLBL_RNG_SZ_1_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_1_RING_SIZE",
+		GLBL_RNG_SZ_1_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_2_field_info[] = {
+	{"GLBL_RNG_SZ_2_RSVD_1",
+		GLBL_RNG_SZ_2_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_2_RING_SIZE",
+		GLBL_RNG_SZ_2_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_3_field_info[] = {
+	{"GLBL_RNG_SZ_3_RSVD_1",
+		GLBL_RNG_SZ_3_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_3_RING_SIZE",
+		GLBL_RNG_SZ_3_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_4_field_info[] = {
+	{"GLBL_RNG_SZ_4_RSVD_1",
+		GLBL_RNG_SZ_4_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_4_RING_SIZE",
+		GLBL_RNG_SZ_4_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_5_field_info[] = {
+	{"GLBL_RNG_SZ_5_RSVD_1",
+		GLBL_RNG_SZ_5_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_5_RING_SIZE",
+		GLBL_RNG_SZ_5_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_6_field_info[] = {
+	{"GLBL_RNG_SZ_6_RSVD_1",
+		GLBL_RNG_SZ_6_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_6_RING_SIZE",
+		GLBL_RNG_SZ_6_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_7_field_info[] = {
+	{"GLBL_RNG_SZ_7_RSVD_1",
+		GLBL_RNG_SZ_7_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_7_RING_SIZE",
+		GLBL_RNG_SZ_7_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_8_field_info[] = {
+	{"GLBL_RNG_SZ_8_RSVD_1",
+		GLBL_RNG_SZ_8_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_8_RING_SIZE",
+		GLBL_RNG_SZ_8_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_9_field_info[] = {
+	{"GLBL_RNG_SZ_9_RSVD_1",
+		GLBL_RNG_SZ_9_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_9_RING_SIZE",
+		GLBL_RNG_SZ_9_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_a_field_info[] = {
+	{"GLBL_RNG_SZ_A_RSVD_1",
+		GLBL_RNG_SZ_A_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_A_RING_SIZE",
+		GLBL_RNG_SZ_A_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_b_field_info[] = {
+	{"GLBL_RNG_SZ_B_RSVD_1",
+		GLBL_RNG_SZ_B_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_B_RING_SIZE",
+		GLBL_RNG_SZ_B_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_c_field_info[] = {
+	{"GLBL_RNG_SZ_C_RSVD_1",
+		GLBL_RNG_SZ_C_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_C_RING_SIZE",
+		GLBL_RNG_SZ_C_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_d_field_info[] = {
+	{"GLBL_RNG_SZ_D_RSVD_1",
+		GLBL_RNG_SZ_D_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_D_RING_SIZE",
+		GLBL_RNG_SZ_D_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_e_field_info[] = {
+	{"GLBL_RNG_SZ_E_RSVD_1",
+		GLBL_RNG_SZ_E_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_E_RING_SIZE",
+		GLBL_RNG_SZ_E_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_f_field_info[] = {
+	{"GLBL_RNG_SZ_F_RSVD_1",
+		GLBL_RNG_SZ_F_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_F_RING_SIZE",
+		GLBL_RNG_SZ_F_RING_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_rng_sz_10_field_info[] = {
+	{"GLBL_RNG_SZ_10_RSVD_1",
+		GLBL_RNG_SZ_10_RSVD_1_MASK},
+	{"GLBL_RNG_SZ_10_RING_SIZE",
+		GLBL_RNG_SZ_10_RING_SIZE_MASK},
+};
+
+
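+/*
+ * GLBL_ERR_STAT aggregates the top-level error sources (RAM SBE/DBE ECC,
+ * descriptor engine, TRQ, H2C/C2H MM and ST); the per-source
+ * status/mask/log registers that follow provide the detailed cause.
+ */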
+static struct regfield_info
+	glbl_err_stat_field_info[] = {
+	{"GLBL_ERR_STAT_RSVD_1",
+		GLBL_ERR_STAT_RSVD_1_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_ST",
+		GLBL_ERR_STAT_ERR_H2C_ST_MASK},
+	{"GLBL_ERR_STAT_ERR_BDG",
+		GLBL_ERR_STAT_ERR_BDG_MASK},
+	{"GLBL_ERR_STAT_IND_CTXT_CMD_ERR",
+		GLBL_ERR_STAT_IND_CTXT_CMD_ERR_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_ST",
+		GLBL_ERR_STAT_ERR_C2H_ST_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_MM_1",
+		GLBL_ERR_STAT_ERR_C2H_MM_1_MASK},
+	{"GLBL_ERR_STAT_ERR_C2H_MM_0",
+		GLBL_ERR_STAT_ERR_C2H_MM_0_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_MM_1",
+		GLBL_ERR_STAT_ERR_H2C_MM_1_MASK},
+	{"GLBL_ERR_STAT_ERR_H2C_MM_0",
+		GLBL_ERR_STAT_ERR_H2C_MM_0_MASK},
+	{"GLBL_ERR_STAT_ERR_TRQ",
+		GLBL_ERR_STAT_ERR_TRQ_MASK},
+	{"GLBL_ERR_STAT_ERR_DSC",
+		GLBL_ERR_STAT_ERR_DSC_MASK},
+	{"GLBL_ERR_STAT_ERR_RAM_DBE",
+		GLBL_ERR_STAT_ERR_RAM_DBE_MASK},
+	{"GLBL_ERR_STAT_ERR_RAM_SBE",
+		GLBL_ERR_STAT_ERR_RAM_SBE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_err_mask_field_info[] = {
+	{"GLBL_ERR_RSVD_1",
+		GLBL_ERR_RSVD_1_MASK},
+	{"GLBL_ERR",
+		GLBL_ERR_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_cfg_field_info[] = {
+	{"GLBL_DSC_CFG_RSVD_1",
+		GLBL_DSC_CFG_RSVD_1_MASK},
+	{"GLBL_DSC_CFG_UNC_OVR_COR",
+		GLBL_DSC_CFG_UNC_OVR_COR_MASK},
+	{"GLBL_DSC_CFG_CTXT_FER_DIS",
+		GLBL_DSC_CFG_CTXT_FER_DIS_MASK},
+	{"GLBL_DSC_CFG_RSVD_2",
+		GLBL_DSC_CFG_RSVD_2_MASK},
+	{"GLBL_DSC_CFG_MAXFETCH",
+		GLBL_DSC_CFG_MAXFETCH_MASK},
+	{"GLBL_DSC_CFG_WB_ACC_INT",
+		GLBL_DSC_CFG_WB_ACC_INT_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_sts_field_info[] = {
+	{"GLBL_DSC_ERR_STS_RSVD_1",
+		GLBL_DSC_ERR_STS_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_STS_SBE",
+		GLBL_DSC_ERR_STS_SBE_MASK},
+	{"GLBL_DSC_ERR_STS_DBE",
+		GLBL_DSC_ERR_STS_DBE_MASK},
+	{"GLBL_DSC_ERR_STS_RQ_CANCEL",
+		GLBL_DSC_ERR_STS_RQ_CANCEL_MASK},
+	{"GLBL_DSC_ERR_STS_DSC",
+		GLBL_DSC_ERR_STS_DSC_MASK},
+	{"GLBL_DSC_ERR_STS_DMA",
+		GLBL_DSC_ERR_STS_DMA_MASK},
+	{"GLBL_DSC_ERR_STS_FLR_CANCEL",
+		GLBL_DSC_ERR_STS_FLR_CANCEL_MASK},
+	{"GLBL_DSC_ERR_STS_RSVD_2",
+		GLBL_DSC_ERR_STS_RSVD_2_MASK},
+	{"GLBL_DSC_ERR_STS_DAT_POISON",
+		GLBL_DSC_ERR_STS_DAT_POISON_MASK},
+	{"GLBL_DSC_ERR_STS_TIMEOUT",
+		GLBL_DSC_ERR_STS_TIMEOUT_MASK},
+	{"GLBL_DSC_ERR_STS_FLR",
+		GLBL_DSC_ERR_STS_FLR_MASK},
+	{"GLBL_DSC_ERR_STS_TAG",
+		GLBL_DSC_ERR_STS_TAG_MASK},
+	{"GLBL_DSC_ERR_STS_ADDR",
+		GLBL_DSC_ERR_STS_ADDR_MASK},
+	{"GLBL_DSC_ERR_STS_PARAM",
+		GLBL_DSC_ERR_STS_PARAM_MASK},
+	{"GLBL_DSC_ERR_STS_UR_CA",
+		GLBL_DSC_ERR_STS_UR_CA_MASK},
+	{"GLBL_DSC_ERR_STS_POISON",
+		GLBL_DSC_ERR_STS_POISON_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_msk_field_info[] = {
+	{"GLBL_DSC_ERR_MSK",
+		GLBL_DSC_ERR_MSK_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_log0_field_info[] = {
+	{"GLBL_DSC_ERR_LOG0_VALID",
+		GLBL_DSC_ERR_LOG0_VALID_MASK},
+	{"GLBL_DSC_ERR_LOG0_RSVD_1",
+		GLBL_DSC_ERR_LOG0_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_LOG0_QID",
+		GLBL_DSC_ERR_LOG0_QID_MASK},
+	{"GLBL_DSC_ERR_LOG0_SEL",
+		GLBL_DSC_ERR_LOG0_SEL_MASK},
+	{"GLBL_DSC_ERR_LOG0_CIDX",
+		GLBL_DSC_ERR_LOG0_CIDX_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_err_log1_field_info[] = {
+	{"GLBL_DSC_ERR_LOG1_RSVD_1",
+		GLBL_DSC_ERR_LOG1_RSVD_1_MASK},
+	{"GLBL_DSC_ERR_LOG1_SUB_TYPE",
+		GLBL_DSC_ERR_LOG1_SUB_TYPE_MASK},
+	{"GLBL_DSC_ERR_LOG1_ERR_TYPE",
+		GLBL_DSC_ERR_LOG1_ERR_TYPE_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_sts_field_info[] = {
+	{"GLBL_TRQ_ERR_STS_RSVD_1",
+		GLBL_TRQ_ERR_STS_RSVD_1_MASK},
+	{"GLBL_TRQ_ERR_STS_TCP_TIMEOUT",
+		GLBL_TRQ_ERR_STS_TCP_TIMEOUT_MASK},
+	{"GLBL_TRQ_ERR_STS_VF_ACCESS_ERR",
+		GLBL_TRQ_ERR_STS_VF_ACCESS_ERR_MASK},
+	{"GLBL_TRQ_ERR_STS_QID_RANGE",
+		GLBL_TRQ_ERR_STS_QID_RANGE_MASK},
+	{"GLBL_TRQ_ERR_STS_UNMAPPED",
+		GLBL_TRQ_ERR_STS_UNMAPPED_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_msk_field_info[] = {
+	{"GLBL_TRQ_ERR_MSK",
+		GLBL_TRQ_ERR_MSK_MASK},
+};
+
+
+static struct regfield_info
+	glbl_trq_err_log_field_info[] = {
+	{"GLBL_TRQ_ERR_LOG_RSVD_1",
+		GLBL_TRQ_ERR_LOG_RSVD_1_MASK},
+	{"GLBL_TRQ_ERR_LOG_TARGET",
+		GLBL_TRQ_ERR_LOG_TARGET_MASK},
+	{"GLBL_TRQ_ERR_LOG_FUNC",
+		GLBL_TRQ_ERR_LOG_FUNC_MASK},
+	{"GLBL_TRQ_ERR_LOG_ADDRESS",
+		GLBL_TRQ_ERR_LOG_ADDRESS_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_dbg_dat0_field_info[] = {
+	{"GLBL_DSC_DAT0_RSVD_1",
+		GLBL_DSC_DAT0_RSVD_1_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_DIR",
+		GLBL_DSC_DAT0_CTXT_ARB_DIR_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_QID",
+		GLBL_DSC_DAT0_CTXT_ARB_QID_MASK},
+	{"GLBL_DSC_DAT0_CTXT_ARB_REQ",
+		GLBL_DSC_DAT0_CTXT_ARB_REQ_MASK},
+	{"GLBL_DSC_DAT0_IRQ_FIFO_FL",
+		GLBL_DSC_DAT0_IRQ_FIFO_FL_MASK},
+	{"GLBL_DSC_DAT0_TMSTALL",
+		GLBL_DSC_DAT0_TMSTALL_MASK},
+	{"GLBL_DSC_DAT0_RRQ_STALL",
+		GLBL_DSC_DAT0_RRQ_STALL_MASK},
+	{"GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL",
+		GLBL_DSC_DAT0_RCP_FIFO_SPC_STALL_MASK},
+	{"GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL",
+		GLBL_DSC_DAT0_RRQ_FIFO_SPC_STALL_MASK},
+	{"GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL",
+		GLBL_DSC_DAT0_FAB_MRKR_RSP_STALL_MASK},
+	{"GLBL_DSC_DAT0_DSC_OUT_STALL",
+		GLBL_DSC_DAT0_DSC_OUT_STALL_MASK},
+};
+
+
+static struct regfield_info
+	glbl_dsc_dbg_dat1_field_info[] = {
+	{"GLBL_DSC_DAT1_RSVD_1",
+		GLBL_DSC_DAT1_RSVD_1_MASK},
+	{"GLBL_DSC_DAT1_EVT_SPC_C2H",
+		GLBL_DSC_DAT1_EVT_SPC_C2H_MASK},
+	{"GLBL_DSC_DAT1_EVT_SP_H2C",
+		GLBL_DSC_DAT1_EVT_SP_H2C_MASK},
+	{"GLBL_DSC_DAT1_DSC_SPC_C2H",
+		GLBL_DSC_DAT1_DSC_SPC_C2H_MASK},
+	{"GLBL_DSC_DAT1_DSC_SPC_H2C",
+		GLBL_DSC_DAT1_DSC_SPC_H2C_MASK},
+};
+
+
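+/*
+ * The TRQ_SEL_FMAP_* registers below are the per-function queue map:
+ * QID_BASE gives the function's first queue ID and QID_MAX the number of
+ * queues assigned to it.
+ */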
+static struct regfield_info
+	trq_sel_fmap_0_field_info[] = {
+	{"TRQ_SEL_FMAP_0_RSVD_1",
+		TRQ_SEL_FMAP_0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_0_QID_MAX",
+		TRQ_SEL_FMAP_0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_0_QID_BASE",
+		TRQ_SEL_FMAP_0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1_field_info[] = {
+	{"TRQ_SEL_FMAP_1_RSVD_1",
+		TRQ_SEL_FMAP_1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1_QID_MAX",
+		TRQ_SEL_FMAP_1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1_QID_BASE",
+		TRQ_SEL_FMAP_1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2_field_info[] = {
+	{"TRQ_SEL_FMAP_2_RSVD_1",
+		TRQ_SEL_FMAP_2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2_QID_MAX",
+		TRQ_SEL_FMAP_2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2_QID_BASE",
+		TRQ_SEL_FMAP_2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3_field_info[] = {
+	{"TRQ_SEL_FMAP_3_RSVD_1",
+		TRQ_SEL_FMAP_3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3_QID_MAX",
+		TRQ_SEL_FMAP_3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3_QID_BASE",
+		TRQ_SEL_FMAP_3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4_field_info[] = {
+	{"TRQ_SEL_FMAP_4_RSVD_1",
+		TRQ_SEL_FMAP_4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4_QID_MAX",
+		TRQ_SEL_FMAP_4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4_QID_BASE",
+		TRQ_SEL_FMAP_4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5_field_info[] = {
+	{"TRQ_SEL_FMAP_5_RSVD_1",
+		TRQ_SEL_FMAP_5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5_QID_MAX",
+		TRQ_SEL_FMAP_5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5_QID_BASE",
+		TRQ_SEL_FMAP_5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6_field_info[] = {
+	{"TRQ_SEL_FMAP_6_RSVD_1",
+		TRQ_SEL_FMAP_6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6_QID_MAX",
+		TRQ_SEL_FMAP_6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6_QID_BASE",
+		TRQ_SEL_FMAP_6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7_field_info[] = {
+	{"TRQ_SEL_FMAP_7_RSVD_1",
+		TRQ_SEL_FMAP_7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7_QID_MAX",
+		TRQ_SEL_FMAP_7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7_QID_BASE",
+		TRQ_SEL_FMAP_7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8_field_info[] = {
+	{"TRQ_SEL_FMAP_8_RSVD_1",
+		TRQ_SEL_FMAP_8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8_QID_MAX",
+		TRQ_SEL_FMAP_8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8_QID_BASE",
+		TRQ_SEL_FMAP_8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9_field_info[] = {
+	{"TRQ_SEL_FMAP_9_RSVD_1",
+		TRQ_SEL_FMAP_9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9_QID_MAX",
+		TRQ_SEL_FMAP_9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9_QID_BASE",
+		TRQ_SEL_FMAP_9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a_field_info[] = {
+	{"TRQ_SEL_FMAP_A_RSVD_1",
+		TRQ_SEL_FMAP_A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A_QID_MAX",
+		TRQ_SEL_FMAP_A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A_QID_BASE",
+		TRQ_SEL_FMAP_A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b_field_info[] = {
+	{"TRQ_SEL_FMAP_B_RSVD_1",
+		TRQ_SEL_FMAP_B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B_QID_MAX",
+		TRQ_SEL_FMAP_B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B_QID_BASE",
+		TRQ_SEL_FMAP_B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d_field_info[] = {
+	{"TRQ_SEL_FMAP_D_RSVD_1",
+		TRQ_SEL_FMAP_D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D_QID_MAX",
+		TRQ_SEL_FMAP_D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D_QID_BASE",
+		TRQ_SEL_FMAP_D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e_field_info[] = {
+	{"TRQ_SEL_FMAP_E_RSVD_1",
+		TRQ_SEL_FMAP_E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E_QID_MAX",
+		TRQ_SEL_FMAP_E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E_QID_BASE",
+		TRQ_SEL_FMAP_E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_f_field_info[] = {
+	{"TRQ_SEL_FMAP_F_RSVD_1",
+		TRQ_SEL_FMAP_F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_F_QID_MAX",
+		TRQ_SEL_FMAP_F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_F_QID_BASE",
+		TRQ_SEL_FMAP_F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_10_field_info[] = {
+	{"TRQ_SEL_FMAP_10_RSVD_1",
+		TRQ_SEL_FMAP_10_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_10_QID_MAX",
+		TRQ_SEL_FMAP_10_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_10_QID_BASE",
+		TRQ_SEL_FMAP_10_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_11_field_info[] = {
+	{"TRQ_SEL_FMAP_11_RSVD_1",
+		TRQ_SEL_FMAP_11_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_11_QID_MAX",
+		TRQ_SEL_FMAP_11_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_11_QID_BASE",
+		TRQ_SEL_FMAP_11_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_12_field_info[] = {
+	{"TRQ_SEL_FMAP_12_RSVD_1",
+		TRQ_SEL_FMAP_12_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_12_QID_MAX",
+		TRQ_SEL_FMAP_12_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_12_QID_BASE",
+		TRQ_SEL_FMAP_12_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_13_field_info[] = {
+	{"TRQ_SEL_FMAP_13_RSVD_1",
+		TRQ_SEL_FMAP_13_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_13_QID_MAX",
+		TRQ_SEL_FMAP_13_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_13_QID_BASE",
+		TRQ_SEL_FMAP_13_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_14_field_info[] = {
+	{"TRQ_SEL_FMAP_14_RSVD_1",
+		TRQ_SEL_FMAP_14_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_14_QID_MAX",
+		TRQ_SEL_FMAP_14_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_14_QID_BASE",
+		TRQ_SEL_FMAP_14_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_15_field_info[] = {
+	{"TRQ_SEL_FMAP_15_RSVD_1",
+		TRQ_SEL_FMAP_15_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_15_QID_MAX",
+		TRQ_SEL_FMAP_15_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_15_QID_BASE",
+		TRQ_SEL_FMAP_15_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_16_field_info[] = {
+	{"TRQ_SEL_FMAP_16_RSVD_1",
+		TRQ_SEL_FMAP_16_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_16_QID_MAX",
+		TRQ_SEL_FMAP_16_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_16_QID_BASE",
+		TRQ_SEL_FMAP_16_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_17_field_info[] = {
+	{"TRQ_SEL_FMAP_17_RSVD_1",
+		TRQ_SEL_FMAP_17_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_17_QID_MAX",
+		TRQ_SEL_FMAP_17_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_17_QID_BASE",
+		TRQ_SEL_FMAP_17_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_18_field_info[] = {
+	{"TRQ_SEL_FMAP_18_RSVD_1",
+		TRQ_SEL_FMAP_18_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_18_QID_MAX",
+		TRQ_SEL_FMAP_18_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_18_QID_BASE",
+		TRQ_SEL_FMAP_18_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_19_field_info[] = {
+	{"TRQ_SEL_FMAP_19_RSVD_1",
+		TRQ_SEL_FMAP_19_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_19_QID_MAX",
+		TRQ_SEL_FMAP_19_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_19_QID_BASE",
+		TRQ_SEL_FMAP_19_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1a_field_info[] = {
+	{"TRQ_SEL_FMAP_1A_RSVD_1",
+		TRQ_SEL_FMAP_1A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1A_QID_MAX",
+		TRQ_SEL_FMAP_1A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1A_QID_BASE",
+		TRQ_SEL_FMAP_1A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1b_field_info[] = {
+	{"TRQ_SEL_FMAP_1B_RSVD_1",
+		TRQ_SEL_FMAP_1B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1B_QID_MAX",
+		TRQ_SEL_FMAP_1B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1B_QID_BASE",
+		TRQ_SEL_FMAP_1B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1c_field_info[] = {
+	{"TRQ_SEL_FMAP_1C_RSVD_1",
+		TRQ_SEL_FMAP_1C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1C_QID_MAX",
+		TRQ_SEL_FMAP_1C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1C_QID_BASE",
+		TRQ_SEL_FMAP_1C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1d_field_info[] = {
+	{"TRQ_SEL_FMAP_1D_RSVD_1",
+		TRQ_SEL_FMAP_1D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1D_QID_MAX",
+		TRQ_SEL_FMAP_1D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1D_QID_BASE",
+		TRQ_SEL_FMAP_1D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1e_field_info[] = {
+	{"TRQ_SEL_FMAP_1E_RSVD_1",
+		TRQ_SEL_FMAP_1E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1E_QID_MAX",
+		TRQ_SEL_FMAP_1E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1E_QID_BASE",
+		TRQ_SEL_FMAP_1E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_1f_field_info[] = {
+	{"TRQ_SEL_FMAP_1F_RSVD_1",
+		TRQ_SEL_FMAP_1F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_1F_QID_MAX",
+		TRQ_SEL_FMAP_1F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_1F_QID_BASE",
+		TRQ_SEL_FMAP_1F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_20_field_info[] = {
+	{"TRQ_SEL_FMAP_20_RSVD_1",
+		TRQ_SEL_FMAP_20_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_20_QID_MAX",
+		TRQ_SEL_FMAP_20_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_20_QID_BASE",
+		TRQ_SEL_FMAP_20_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_21_field_info[] = {
+	{"TRQ_SEL_FMAP_21_RSVD_1",
+		TRQ_SEL_FMAP_21_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_21_QID_MAX",
+		TRQ_SEL_FMAP_21_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_21_QID_BASE",
+		TRQ_SEL_FMAP_21_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_22_field_info[] = {
+	{"TRQ_SEL_FMAP_22_RSVD_1",
+		TRQ_SEL_FMAP_22_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_22_QID_MAX",
+		TRQ_SEL_FMAP_22_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_22_QID_BASE",
+		TRQ_SEL_FMAP_22_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_23_field_info[] = {
+	{"TRQ_SEL_FMAP_23_RSVD_1",
+		TRQ_SEL_FMAP_23_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_23_QID_MAX",
+		TRQ_SEL_FMAP_23_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_23_QID_BASE",
+		TRQ_SEL_FMAP_23_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_24_field_info[] = {
+	{"TRQ_SEL_FMAP_24_RSVD_1",
+		TRQ_SEL_FMAP_24_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_24_QID_MAX",
+		TRQ_SEL_FMAP_24_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_24_QID_BASE",
+		TRQ_SEL_FMAP_24_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_25_field_info[] = {
+	{"TRQ_SEL_FMAP_25_RSVD_1",
+		TRQ_SEL_FMAP_25_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_25_QID_MAX",
+		TRQ_SEL_FMAP_25_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_25_QID_BASE",
+		TRQ_SEL_FMAP_25_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_26_field_info[] = {
+	{"TRQ_SEL_FMAP_26_RSVD_1",
+		TRQ_SEL_FMAP_26_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_26_QID_MAX",
+		TRQ_SEL_FMAP_26_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_26_QID_BASE",
+		TRQ_SEL_FMAP_26_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_27_field_info[] = {
+	{"TRQ_SEL_FMAP_27_RSVD_1",
+		TRQ_SEL_FMAP_27_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_27_QID_MAX",
+		TRQ_SEL_FMAP_27_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_27_QID_BASE",
+		TRQ_SEL_FMAP_27_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_28_field_info[] = {
+	{"TRQ_SEL_FMAP_28_RSVD_1",
+		TRQ_SEL_FMAP_28_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_28_QID_MAX",
+		TRQ_SEL_FMAP_28_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_28_QID_BASE",
+		TRQ_SEL_FMAP_28_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_29_field_info[] = {
+	{"TRQ_SEL_FMAP_29_RSVD_1",
+		TRQ_SEL_FMAP_29_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_29_QID_MAX",
+		TRQ_SEL_FMAP_29_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_29_QID_BASE",
+		TRQ_SEL_FMAP_29_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2a_field_info[] = {
+	{"TRQ_SEL_FMAP_2A_RSVD_1",
+		TRQ_SEL_FMAP_2A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2A_QID_MAX",
+		TRQ_SEL_FMAP_2A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2A_QID_BASE",
+		TRQ_SEL_FMAP_2A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2b_field_info[] = {
+	{"TRQ_SEL_FMAP_2B_RSVD_1",
+		TRQ_SEL_FMAP_2B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2B_QID_MAX",
+		TRQ_SEL_FMAP_2B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2B_QID_BASE",
+		TRQ_SEL_FMAP_2B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2c_field_info[] = {
+	{"TRQ_SEL_FMAP_2C_RSVD_1",
+		TRQ_SEL_FMAP_2C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2C_QID_MAX",
+		TRQ_SEL_FMAP_2C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2C_QID_BASE",
+		TRQ_SEL_FMAP_2C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2d_field_info[] = {
+	{"TRQ_SEL_FMAP_2D_RSVD_1",
+		TRQ_SEL_FMAP_2D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2D_QID_MAX",
+		TRQ_SEL_FMAP_2D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2D_QID_BASE",
+		TRQ_SEL_FMAP_2D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2e_field_info[] = {
+	{"TRQ_SEL_FMAP_2E_RSVD_1",
+		TRQ_SEL_FMAP_2E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2E_QID_MAX",
+		TRQ_SEL_FMAP_2E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2E_QID_BASE",
+		TRQ_SEL_FMAP_2E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_2f_field_info[] = {
+	{"TRQ_SEL_FMAP_2F_RSVD_1",
+		TRQ_SEL_FMAP_2F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_2F_QID_MAX",
+		TRQ_SEL_FMAP_2F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_2F_QID_BASE",
+		TRQ_SEL_FMAP_2F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_30_field_info[] = {
+	{"TRQ_SEL_FMAP_30_RSVD_1",
+		TRQ_SEL_FMAP_30_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_30_QID_MAX",
+		TRQ_SEL_FMAP_30_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_30_QID_BASE",
+		TRQ_SEL_FMAP_30_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_31_field_info[] = {
+	{"TRQ_SEL_FMAP_31_RSVD_1",
+		TRQ_SEL_FMAP_31_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_31_QID_MAX",
+		TRQ_SEL_FMAP_31_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_31_QID_BASE",
+		TRQ_SEL_FMAP_31_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_32_field_info[] = {
+	{"TRQ_SEL_FMAP_32_RSVD_1",
+		TRQ_SEL_FMAP_32_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_32_QID_MAX",
+		TRQ_SEL_FMAP_32_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_32_QID_BASE",
+		TRQ_SEL_FMAP_32_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_33_field_info[] = {
+	{"TRQ_SEL_FMAP_33_RSVD_1",
+		TRQ_SEL_FMAP_33_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_33_QID_MAX",
+		TRQ_SEL_FMAP_33_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_33_QID_BASE",
+		TRQ_SEL_FMAP_33_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_34_field_info[] = {
+	{"TRQ_SEL_FMAP_34_RSVD_1",
+		TRQ_SEL_FMAP_34_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_34_QID_MAX",
+		TRQ_SEL_FMAP_34_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_34_QID_BASE",
+		TRQ_SEL_FMAP_34_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_35_field_info[] = {
+	{"TRQ_SEL_FMAP_35_RSVD_1",
+		TRQ_SEL_FMAP_35_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_35_QID_MAX",
+		TRQ_SEL_FMAP_35_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_35_QID_BASE",
+		TRQ_SEL_FMAP_35_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_36_field_info[] = {
+	{"TRQ_SEL_FMAP_36_RSVD_1",
+		TRQ_SEL_FMAP_36_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_36_QID_MAX",
+		TRQ_SEL_FMAP_36_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_36_QID_BASE",
+		TRQ_SEL_FMAP_36_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_37_field_info[] = {
+	{"TRQ_SEL_FMAP_37_RSVD_1",
+		TRQ_SEL_FMAP_37_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_37_QID_MAX",
+		TRQ_SEL_FMAP_37_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_37_QID_BASE",
+		TRQ_SEL_FMAP_37_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_38_field_info[] = {
+	{"TRQ_SEL_FMAP_38_RSVD_1",
+		TRQ_SEL_FMAP_38_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_38_QID_MAX",
+		TRQ_SEL_FMAP_38_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_38_QID_BASE",
+		TRQ_SEL_FMAP_38_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_39_field_info[] = {
+	{"TRQ_SEL_FMAP_39_RSVD_1",
+		TRQ_SEL_FMAP_39_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_39_QID_MAX",
+		TRQ_SEL_FMAP_39_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_39_QID_BASE",
+		TRQ_SEL_FMAP_39_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3a_field_info[] = {
+	{"TRQ_SEL_FMAP_3A_RSVD_1",
+		TRQ_SEL_FMAP_3A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3A_QID_MAX",
+		TRQ_SEL_FMAP_3A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3A_QID_BASE",
+		TRQ_SEL_FMAP_3A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3b_field_info[] = {
+	{"TRQ_SEL_FMAP_3B_RSVD_1",
+		TRQ_SEL_FMAP_3B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3B_QID_MAX",
+		TRQ_SEL_FMAP_3B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3B_QID_BASE",
+		TRQ_SEL_FMAP_3B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3c_field_info[] = {
+	{"TRQ_SEL_FMAP_3C_RSVD_1",
+		TRQ_SEL_FMAP_3C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3C_QID_MAX",
+		TRQ_SEL_FMAP_3C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3C_QID_BASE",
+		TRQ_SEL_FMAP_3C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3d_field_info[] = {
+	{"TRQ_SEL_FMAP_3D_RSVD_1",
+		TRQ_SEL_FMAP_3D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3D_QID_MAX",
+		TRQ_SEL_FMAP_3D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3D_QID_BASE",
+		TRQ_SEL_FMAP_3D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3e_field_info[] = {
+	{"TRQ_SEL_FMAP_3E_RSVD_1",
+		TRQ_SEL_FMAP_3E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3E_QID_MAX",
+		TRQ_SEL_FMAP_3E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3E_QID_BASE",
+		TRQ_SEL_FMAP_3E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_3f_field_info[] = {
+	{"TRQ_SEL_FMAP_3F_RSVD_1",
+		TRQ_SEL_FMAP_3F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_3F_QID_MAX",
+		TRQ_SEL_FMAP_3F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_3F_QID_BASE",
+		TRQ_SEL_FMAP_3F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_40_field_info[] = {
+	{"TRQ_SEL_FMAP_40_RSVD_1",
+		TRQ_SEL_FMAP_40_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_40_QID_MAX",
+		TRQ_SEL_FMAP_40_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_40_QID_BASE",
+		TRQ_SEL_FMAP_40_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_41_field_info[] = {
+	{"TRQ_SEL_FMAP_41_RSVD_1",
+		TRQ_SEL_FMAP_41_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_41_QID_MAX",
+		TRQ_SEL_FMAP_41_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_41_QID_BASE",
+		TRQ_SEL_FMAP_41_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_42_field_info[] = {
+	{"TRQ_SEL_FMAP_42_RSVD_1",
+		TRQ_SEL_FMAP_42_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_42_QID_MAX",
+		TRQ_SEL_FMAP_42_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_42_QID_BASE",
+		TRQ_SEL_FMAP_42_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_43_field_info[] = {
+	{"TRQ_SEL_FMAP_43_RSVD_1",
+		TRQ_SEL_FMAP_43_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_43_QID_MAX",
+		TRQ_SEL_FMAP_43_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_43_QID_BASE",
+		TRQ_SEL_FMAP_43_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_44_field_info[] = {
+	{"TRQ_SEL_FMAP_44_RSVD_1",
+		TRQ_SEL_FMAP_44_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_44_QID_MAX",
+		TRQ_SEL_FMAP_44_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_44_QID_BASE",
+		TRQ_SEL_FMAP_44_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_45_field_info[] = {
+	{"TRQ_SEL_FMAP_45_RSVD_1",
+		TRQ_SEL_FMAP_45_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_45_QID_MAX",
+		TRQ_SEL_FMAP_45_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_45_QID_BASE",
+		TRQ_SEL_FMAP_45_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_46_field_info[] = {
+	{"TRQ_SEL_FMAP_46_RSVD_1",
+		TRQ_SEL_FMAP_46_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_46_QID_MAX",
+		TRQ_SEL_FMAP_46_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_46_QID_BASE",
+		TRQ_SEL_FMAP_46_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_47_field_info[] = {
+	{"TRQ_SEL_FMAP_47_RSVD_1",
+		TRQ_SEL_FMAP_47_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_47_QID_MAX",
+		TRQ_SEL_FMAP_47_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_47_QID_BASE",
+		TRQ_SEL_FMAP_47_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_48_field_info[] = {
+	{"TRQ_SEL_FMAP_48_RSVD_1",
+		TRQ_SEL_FMAP_48_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_48_QID_MAX",
+		TRQ_SEL_FMAP_48_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_48_QID_BASE",
+		TRQ_SEL_FMAP_48_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_49_field_info[] = {
+	{"TRQ_SEL_FMAP_49_RSVD_1",
+		TRQ_SEL_FMAP_49_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_49_QID_MAX",
+		TRQ_SEL_FMAP_49_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_49_QID_BASE",
+		TRQ_SEL_FMAP_49_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4a_field_info[] = {
+	{"TRQ_SEL_FMAP_4A_RSVD_1",
+		TRQ_SEL_FMAP_4A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4A_QID_MAX",
+		TRQ_SEL_FMAP_4A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4A_QID_BASE",
+		TRQ_SEL_FMAP_4A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4b_field_info[] = {
+	{"TRQ_SEL_FMAP_4B_RSVD_1",
+		TRQ_SEL_FMAP_4B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4B_QID_MAX",
+		TRQ_SEL_FMAP_4B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4B_QID_BASE",
+		TRQ_SEL_FMAP_4B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4c_field_info[] = {
+	{"TRQ_SEL_FMAP_4C_RSVD_1",
+		TRQ_SEL_FMAP_4C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4C_QID_MAX",
+		TRQ_SEL_FMAP_4C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4C_QID_BASE",
+		TRQ_SEL_FMAP_4C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4d_field_info[] = {
+	{"TRQ_SEL_FMAP_4D_RSVD_1",
+		TRQ_SEL_FMAP_4D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4D_QID_MAX",
+		TRQ_SEL_FMAP_4D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4D_QID_BASE",
+		TRQ_SEL_FMAP_4D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4e_field_info[] = {
+	{"TRQ_SEL_FMAP_4E_RSVD_1",
+		TRQ_SEL_FMAP_4E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4E_QID_MAX",
+		TRQ_SEL_FMAP_4E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4E_QID_BASE",
+		TRQ_SEL_FMAP_4E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_4f_field_info[] = {
+	{"TRQ_SEL_FMAP_4F_RSVD_1",
+		TRQ_SEL_FMAP_4F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_4F_QID_MAX",
+		TRQ_SEL_FMAP_4F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_4F_QID_BASE",
+		TRQ_SEL_FMAP_4F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_50_field_info[] = {
+	{"TRQ_SEL_FMAP_50_RSVD_1",
+		TRQ_SEL_FMAP_50_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_50_QID_MAX",
+		TRQ_SEL_FMAP_50_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_50_QID_BASE",
+		TRQ_SEL_FMAP_50_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_51_field_info[] = {
+	{"TRQ_SEL_FMAP_51_RSVD_1",
+		TRQ_SEL_FMAP_51_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_51_QID_MAX",
+		TRQ_SEL_FMAP_51_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_51_QID_BASE",
+		TRQ_SEL_FMAP_51_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_52_field_info[] = {
+	{"TRQ_SEL_FMAP_52_RSVD_1",
+		TRQ_SEL_FMAP_52_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_52_QID_MAX",
+		TRQ_SEL_FMAP_52_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_52_QID_BASE",
+		TRQ_SEL_FMAP_52_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_53_field_info[] = {
+	{"TRQ_SEL_FMAP_53_RSVD_1",
+		TRQ_SEL_FMAP_53_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_53_QID_MAX",
+		TRQ_SEL_FMAP_53_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_53_QID_BASE",
+		TRQ_SEL_FMAP_53_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_54_field_info[] = {
+	{"TRQ_SEL_FMAP_54_RSVD_1",
+		TRQ_SEL_FMAP_54_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_54_QID_MAX",
+		TRQ_SEL_FMAP_54_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_54_QID_BASE",
+		TRQ_SEL_FMAP_54_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_55_field_info[] = {
+	{"TRQ_SEL_FMAP_55_RSVD_1",
+		TRQ_SEL_FMAP_55_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_55_QID_MAX",
+		TRQ_SEL_FMAP_55_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_55_QID_BASE",
+		TRQ_SEL_FMAP_55_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_56_field_info[] = {
+	{"TRQ_SEL_FMAP_56_RSVD_1",
+		TRQ_SEL_FMAP_56_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_56_QID_MAX",
+		TRQ_SEL_FMAP_56_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_56_QID_BASE",
+		TRQ_SEL_FMAP_56_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_57_field_info[] = {
+	{"TRQ_SEL_FMAP_57_RSVD_1",
+		TRQ_SEL_FMAP_57_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_57_QID_MAX",
+		TRQ_SEL_FMAP_57_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_57_QID_BASE",
+		TRQ_SEL_FMAP_57_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_58_field_info[] = {
+	{"TRQ_SEL_FMAP_58_RSVD_1",
+		TRQ_SEL_FMAP_58_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_58_QID_MAX",
+		TRQ_SEL_FMAP_58_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_58_QID_BASE",
+		TRQ_SEL_FMAP_58_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_59_field_info[] = {
+	{"TRQ_SEL_FMAP_59_RSVD_1",
+		TRQ_SEL_FMAP_59_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_59_QID_MAX",
+		TRQ_SEL_FMAP_59_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_59_QID_BASE",
+		TRQ_SEL_FMAP_59_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5a_field_info[] = {
+	{"TRQ_SEL_FMAP_5A_RSVD_1",
+		TRQ_SEL_FMAP_5A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5A_QID_MAX",
+		TRQ_SEL_FMAP_5A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5A_QID_BASE",
+		TRQ_SEL_FMAP_5A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5b_field_info[] = {
+	{"TRQ_SEL_FMAP_5B_RSVD_1",
+		TRQ_SEL_FMAP_5B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5B_QID_MAX",
+		TRQ_SEL_FMAP_5B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5B_QID_BASE",
+		TRQ_SEL_FMAP_5B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5c_field_info[] = {
+	{"TRQ_SEL_FMAP_5C_RSVD_1",
+		TRQ_SEL_FMAP_5C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5C_QID_MAX",
+		TRQ_SEL_FMAP_5C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5C_QID_BASE",
+		TRQ_SEL_FMAP_5C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5d_field_info[] = {
+	{"TRQ_SEL_FMAP_5D_RSVD_1",
+		TRQ_SEL_FMAP_5D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5D_QID_MAX",
+		TRQ_SEL_FMAP_5D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5D_QID_BASE",
+		TRQ_SEL_FMAP_5D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5e_field_info[] = {
+	{"TRQ_SEL_FMAP_5E_RSVD_1",
+		TRQ_SEL_FMAP_5E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5E_QID_MAX",
+		TRQ_SEL_FMAP_5E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5E_QID_BASE",
+		TRQ_SEL_FMAP_5E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_5f_field_info[] = {
+	{"TRQ_SEL_FMAP_5F_RSVD_1",
+		TRQ_SEL_FMAP_5F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_5F_QID_MAX",
+		TRQ_SEL_FMAP_5F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_5F_QID_BASE",
+		TRQ_SEL_FMAP_5F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_60_field_info[] = {
+	{"TRQ_SEL_FMAP_60_RSVD_1",
+		TRQ_SEL_FMAP_60_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_60_QID_MAX",
+		TRQ_SEL_FMAP_60_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_60_QID_BASE",
+		TRQ_SEL_FMAP_60_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_61_field_info[] = {
+	{"TRQ_SEL_FMAP_61_RSVD_1",
+		TRQ_SEL_FMAP_61_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_61_QID_MAX",
+		TRQ_SEL_FMAP_61_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_61_QID_BASE",
+		TRQ_SEL_FMAP_61_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_62_field_info[] = {
+	{"TRQ_SEL_FMAP_62_RSVD_1",
+		TRQ_SEL_FMAP_62_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_62_QID_MAX",
+		TRQ_SEL_FMAP_62_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_62_QID_BASE",
+		TRQ_SEL_FMAP_62_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_63_field_info[] = {
+	{"TRQ_SEL_FMAP_63_RSVD_1",
+		TRQ_SEL_FMAP_63_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_63_QID_MAX",
+		TRQ_SEL_FMAP_63_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_63_QID_BASE",
+		TRQ_SEL_FMAP_63_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_64_field_info[] = {
+	{"TRQ_SEL_FMAP_64_RSVD_1",
+		TRQ_SEL_FMAP_64_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_64_QID_MAX",
+		TRQ_SEL_FMAP_64_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_64_QID_BASE",
+		TRQ_SEL_FMAP_64_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_65_field_info[] = {
+	{"TRQ_SEL_FMAP_65_RSVD_1",
+		TRQ_SEL_FMAP_65_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_65_QID_MAX",
+		TRQ_SEL_FMAP_65_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_65_QID_BASE",
+		TRQ_SEL_FMAP_65_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_66_field_info[] = {
+	{"TRQ_SEL_FMAP_66_RSVD_1",
+		TRQ_SEL_FMAP_66_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_66_QID_MAX",
+		TRQ_SEL_FMAP_66_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_66_QID_BASE",
+		TRQ_SEL_FMAP_66_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_67_field_info[] = {
+	{"TRQ_SEL_FMAP_67_RSVD_1",
+		TRQ_SEL_FMAP_67_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_67_QID_MAX",
+		TRQ_SEL_FMAP_67_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_67_QID_BASE",
+		TRQ_SEL_FMAP_67_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_68_field_info[] = {
+	{"TRQ_SEL_FMAP_68_RSVD_1",
+		TRQ_SEL_FMAP_68_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_68_QID_MAX",
+		TRQ_SEL_FMAP_68_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_68_QID_BASE",
+		TRQ_SEL_FMAP_68_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_69_field_info[] = {
+	{"TRQ_SEL_FMAP_69_RSVD_1",
+		TRQ_SEL_FMAP_69_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_69_QID_MAX",
+		TRQ_SEL_FMAP_69_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_69_QID_BASE",
+		TRQ_SEL_FMAP_69_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6a_field_info[] = {
+	{"TRQ_SEL_FMAP_6A_RSVD_1",
+		TRQ_SEL_FMAP_6A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6A_QID_MAX",
+		TRQ_SEL_FMAP_6A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6A_QID_BASE",
+		TRQ_SEL_FMAP_6A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6b_field_info[] = {
+	{"TRQ_SEL_FMAP_6B_RSVD_1",
+		TRQ_SEL_FMAP_6B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6B_QID_MAX",
+		TRQ_SEL_FMAP_6B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6B_QID_BASE",
+		TRQ_SEL_FMAP_6B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6c_field_info[] = {
+	{"TRQ_SEL_FMAP_6C_RSVD_1",
+		TRQ_SEL_FMAP_6C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6C_QID_MAX",
+		TRQ_SEL_FMAP_6C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6C_QID_BASE",
+		TRQ_SEL_FMAP_6C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6d_field_info[] = {
+	{"TRQ_SEL_FMAP_6D_RSVD_1",
+		TRQ_SEL_FMAP_6D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6D_QID_MAX",
+		TRQ_SEL_FMAP_6D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6D_QID_BASE",
+		TRQ_SEL_FMAP_6D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6e_field_info[] = {
+	{"TRQ_SEL_FMAP_6E_RSVD_1",
+		TRQ_SEL_FMAP_6E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6E_QID_MAX",
+		TRQ_SEL_FMAP_6E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6E_QID_BASE",
+		TRQ_SEL_FMAP_6E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_6f_field_info[] = {
+	{"TRQ_SEL_FMAP_6F_RSVD_1",
+		TRQ_SEL_FMAP_6F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_6F_QID_MAX",
+		TRQ_SEL_FMAP_6F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_6F_QID_BASE",
+		TRQ_SEL_FMAP_6F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_70_field_info[] = {
+	{"TRQ_SEL_FMAP_70_RSVD_1",
+		TRQ_SEL_FMAP_70_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_70_QID_MAX",
+		TRQ_SEL_FMAP_70_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_70_QID_BASE",
+		TRQ_SEL_FMAP_70_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_71_field_info[] = {
+	{"TRQ_SEL_FMAP_71_RSVD_1",
+		TRQ_SEL_FMAP_71_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_71_QID_MAX",
+		TRQ_SEL_FMAP_71_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_71_QID_BASE",
+		TRQ_SEL_FMAP_71_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_72_field_info[] = {
+	{"TRQ_SEL_FMAP_72_RSVD_1",
+		TRQ_SEL_FMAP_72_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_72_QID_MAX",
+		TRQ_SEL_FMAP_72_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_72_QID_BASE",
+		TRQ_SEL_FMAP_72_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_73_field_info[] = {
+	{"TRQ_SEL_FMAP_73_RSVD_1",
+		TRQ_SEL_FMAP_73_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_73_QID_MAX",
+		TRQ_SEL_FMAP_73_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_73_QID_BASE",
+		TRQ_SEL_FMAP_73_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_74_field_info[] = {
+	{"TRQ_SEL_FMAP_74_RSVD_1",
+		TRQ_SEL_FMAP_74_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_74_QID_MAX",
+		TRQ_SEL_FMAP_74_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_74_QID_BASE",
+		TRQ_SEL_FMAP_74_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_75_field_info[] = {
+	{"TRQ_SEL_FMAP_75_RSVD_1",
+		TRQ_SEL_FMAP_75_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_75_QID_MAX",
+		TRQ_SEL_FMAP_75_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_75_QID_BASE",
+		TRQ_SEL_FMAP_75_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_76_field_info[] = {
+	{"TRQ_SEL_FMAP_76_RSVD_1",
+		TRQ_SEL_FMAP_76_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_76_QID_MAX",
+		TRQ_SEL_FMAP_76_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_76_QID_BASE",
+		TRQ_SEL_FMAP_76_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_77_field_info[] = {
+	{"TRQ_SEL_FMAP_77_RSVD_1",
+		TRQ_SEL_FMAP_77_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_77_QID_MAX",
+		TRQ_SEL_FMAP_77_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_77_QID_BASE",
+		TRQ_SEL_FMAP_77_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_78_field_info[] = {
+	{"TRQ_SEL_FMAP_78_RSVD_1",
+		TRQ_SEL_FMAP_78_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_78_QID_MAX",
+		TRQ_SEL_FMAP_78_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_78_QID_BASE",
+		TRQ_SEL_FMAP_78_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_79_field_info[] = {
+	{"TRQ_SEL_FMAP_79_RSVD_1",
+		TRQ_SEL_FMAP_79_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_79_QID_MAX",
+		TRQ_SEL_FMAP_79_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_79_QID_BASE",
+		TRQ_SEL_FMAP_79_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7a_field_info[] = {
+	{"TRQ_SEL_FMAP_7A_RSVD_1",
+		TRQ_SEL_FMAP_7A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7A_QID_MAX",
+		TRQ_SEL_FMAP_7A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7A_QID_BASE",
+		TRQ_SEL_FMAP_7A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7b_field_info[] = {
+	{"TRQ_SEL_FMAP_7B_RSVD_1",
+		TRQ_SEL_FMAP_7B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7B_QID_MAX",
+		TRQ_SEL_FMAP_7B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7B_QID_BASE",
+		TRQ_SEL_FMAP_7B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7c_field_info[] = {
+	{"TRQ_SEL_FMAP_7C_RSVD_1",
+		TRQ_SEL_FMAP_7C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7C_QID_MAX",
+		TRQ_SEL_FMAP_7C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7C_QID_BASE",
+		TRQ_SEL_FMAP_7C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7d_field_info[] = {
+	{"TRQ_SEL_FMAP_7D_RSVD_1",
+		TRQ_SEL_FMAP_7D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7D_QID_MAX",
+		TRQ_SEL_FMAP_7D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7D_QID_BASE",
+		TRQ_SEL_FMAP_7D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7e_field_info[] = {
+	{"TRQ_SEL_FMAP_7E_RSVD_1",
+		TRQ_SEL_FMAP_7E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7E_QID_MAX",
+		TRQ_SEL_FMAP_7E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7E_QID_BASE",
+		TRQ_SEL_FMAP_7E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_7f_field_info[] = {
+	{"TRQ_SEL_FMAP_7F_RSVD_1",
+		TRQ_SEL_FMAP_7F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_7F_QID_MAX",
+		TRQ_SEL_FMAP_7F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_7F_QID_BASE",
+		TRQ_SEL_FMAP_7F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_80_field_info[] = {
+	{"TRQ_SEL_FMAP_80_RSVD_1",
+		TRQ_SEL_FMAP_80_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_80_QID_MAX",
+		TRQ_SEL_FMAP_80_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_80_QID_BASE",
+		TRQ_SEL_FMAP_80_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_81_field_info[] = {
+	{"TRQ_SEL_FMAP_81_RSVD_1",
+		TRQ_SEL_FMAP_81_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_81_QID_MAX",
+		TRQ_SEL_FMAP_81_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_81_QID_BASE",
+		TRQ_SEL_FMAP_81_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_82_field_info[] = {
+	{"TRQ_SEL_FMAP_82_RSVD_1",
+		TRQ_SEL_FMAP_82_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_82_QID_MAX",
+		TRQ_SEL_FMAP_82_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_82_QID_BASE",
+		TRQ_SEL_FMAP_82_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_83_field_info[] = {
+	{"TRQ_SEL_FMAP_83_RSVD_1",
+		TRQ_SEL_FMAP_83_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_83_QID_MAX",
+		TRQ_SEL_FMAP_83_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_83_QID_BASE",
+		TRQ_SEL_FMAP_83_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_84_field_info[] = {
+	{"TRQ_SEL_FMAP_84_RSVD_1",
+		TRQ_SEL_FMAP_84_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_84_QID_MAX",
+		TRQ_SEL_FMAP_84_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_84_QID_BASE",
+		TRQ_SEL_FMAP_84_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_85_field_info[] = {
+	{"TRQ_SEL_FMAP_85_RSVD_1",
+		TRQ_SEL_FMAP_85_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_85_QID_MAX",
+		TRQ_SEL_FMAP_85_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_85_QID_BASE",
+		TRQ_SEL_FMAP_85_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_86_field_info[] = {
+	{"TRQ_SEL_FMAP_86_RSVD_1",
+		TRQ_SEL_FMAP_86_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_86_QID_MAX",
+		TRQ_SEL_FMAP_86_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_86_QID_BASE",
+		TRQ_SEL_FMAP_86_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_87_field_info[] = {
+	{"TRQ_SEL_FMAP_87_RSVD_1",
+		TRQ_SEL_FMAP_87_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_87_QID_MAX",
+		TRQ_SEL_FMAP_87_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_87_QID_BASE",
+		TRQ_SEL_FMAP_87_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_88_field_info[] = {
+	{"TRQ_SEL_FMAP_88_RSVD_1",
+		TRQ_SEL_FMAP_88_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_88_QID_MAX",
+		TRQ_SEL_FMAP_88_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_88_QID_BASE",
+		TRQ_SEL_FMAP_88_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_89_field_info[] = {
+	{"TRQ_SEL_FMAP_89_RSVD_1",
+		TRQ_SEL_FMAP_89_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_89_QID_MAX",
+		TRQ_SEL_FMAP_89_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_89_QID_BASE",
+		TRQ_SEL_FMAP_89_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8a_field_info[] = {
+	{"TRQ_SEL_FMAP_8A_RSVD_1",
+		TRQ_SEL_FMAP_8A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8A_QID_MAX",
+		TRQ_SEL_FMAP_8A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8A_QID_BASE",
+		TRQ_SEL_FMAP_8A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8b_field_info[] = {
+	{"TRQ_SEL_FMAP_8B_RSVD_1",
+		TRQ_SEL_FMAP_8B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8B_QID_MAX",
+		TRQ_SEL_FMAP_8B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8B_QID_BASE",
+		TRQ_SEL_FMAP_8B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8c_field_info[] = {
+	{"TRQ_SEL_FMAP_8C_RSVD_1",
+		TRQ_SEL_FMAP_8C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8C_QID_MAX",
+		TRQ_SEL_FMAP_8C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8C_QID_BASE",
+		TRQ_SEL_FMAP_8C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8d_field_info[] = {
+	{"TRQ_SEL_FMAP_8D_RSVD_1",
+		TRQ_SEL_FMAP_8D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8D_QID_MAX",
+		TRQ_SEL_FMAP_8D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8D_QID_BASE",
+		TRQ_SEL_FMAP_8D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8e_field_info[] = {
+	{"TRQ_SEL_FMAP_8E_RSVD_1",
+		TRQ_SEL_FMAP_8E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8E_QID_MAX",
+		TRQ_SEL_FMAP_8E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8E_QID_BASE",
+		TRQ_SEL_FMAP_8E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_8f_field_info[] = {
+	{"TRQ_SEL_FMAP_8F_RSVD_1",
+		TRQ_SEL_FMAP_8F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_8F_QID_MAX",
+		TRQ_SEL_FMAP_8F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_8F_QID_BASE",
+		TRQ_SEL_FMAP_8F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_90_field_info[] = {
+	{"TRQ_SEL_FMAP_90_RSVD_1",
+		TRQ_SEL_FMAP_90_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_90_QID_MAX",
+		TRQ_SEL_FMAP_90_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_90_QID_BASE",
+		TRQ_SEL_FMAP_90_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_91_field_info[] = {
+	{"TRQ_SEL_FMAP_91_RSVD_1",
+		TRQ_SEL_FMAP_91_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_91_QID_MAX",
+		TRQ_SEL_FMAP_91_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_91_QID_BASE",
+		TRQ_SEL_FMAP_91_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_92_field_info[] = {
+	{"TRQ_SEL_FMAP_92_RSVD_1",
+		TRQ_SEL_FMAP_92_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_92_QID_MAX",
+		TRQ_SEL_FMAP_92_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_92_QID_BASE",
+		TRQ_SEL_FMAP_92_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_93_field_info[] = {
+	{"TRQ_SEL_FMAP_93_RSVD_1",
+		TRQ_SEL_FMAP_93_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_93_QID_MAX",
+		TRQ_SEL_FMAP_93_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_93_QID_BASE",
+		TRQ_SEL_FMAP_93_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_94_field_info[] = {
+	{"TRQ_SEL_FMAP_94_RSVD_1",
+		TRQ_SEL_FMAP_94_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_94_QID_MAX",
+		TRQ_SEL_FMAP_94_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_94_QID_BASE",
+		TRQ_SEL_FMAP_94_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_95_field_info[] = {
+	{"TRQ_SEL_FMAP_95_RSVD_1",
+		TRQ_SEL_FMAP_95_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_95_QID_MAX",
+		TRQ_SEL_FMAP_95_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_95_QID_BASE",
+		TRQ_SEL_FMAP_95_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_96_field_info[] = {
+	{"TRQ_SEL_FMAP_96_RSVD_1",
+		TRQ_SEL_FMAP_96_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_96_QID_MAX",
+		TRQ_SEL_FMAP_96_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_96_QID_BASE",
+		TRQ_SEL_FMAP_96_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_97_field_info[] = {
+	{"TRQ_SEL_FMAP_97_RSVD_1",
+		TRQ_SEL_FMAP_97_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_97_QID_MAX",
+		TRQ_SEL_FMAP_97_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_97_QID_BASE",
+		TRQ_SEL_FMAP_97_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_98_field_info[] = {
+	{"TRQ_SEL_FMAP_98_RSVD_1",
+		TRQ_SEL_FMAP_98_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_98_QID_MAX",
+		TRQ_SEL_FMAP_98_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_98_QID_BASE",
+		TRQ_SEL_FMAP_98_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_99_field_info[] = {
+	{"TRQ_SEL_FMAP_99_RSVD_1",
+		TRQ_SEL_FMAP_99_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_99_QID_MAX",
+		TRQ_SEL_FMAP_99_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_99_QID_BASE",
+		TRQ_SEL_FMAP_99_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9a_field_info[] = {
+	{"TRQ_SEL_FMAP_9A_RSVD_1",
+		TRQ_SEL_FMAP_9A_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9A_QID_MAX",
+		TRQ_SEL_FMAP_9A_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9A_QID_BASE",
+		TRQ_SEL_FMAP_9A_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9b_field_info[] = {
+	{"TRQ_SEL_FMAP_9B_RSVD_1",
+		TRQ_SEL_FMAP_9B_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9B_QID_MAX",
+		TRQ_SEL_FMAP_9B_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9B_QID_BASE",
+		TRQ_SEL_FMAP_9B_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9c_field_info[] = {
+	{"TRQ_SEL_FMAP_9C_RSVD_1",
+		TRQ_SEL_FMAP_9C_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9C_QID_MAX",
+		TRQ_SEL_FMAP_9C_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9C_QID_BASE",
+		TRQ_SEL_FMAP_9C_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9d_field_info[] = {
+	{"TRQ_SEL_FMAP_9D_RSVD_1",
+		TRQ_SEL_FMAP_9D_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9D_QID_MAX",
+		TRQ_SEL_FMAP_9D_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9D_QID_BASE",
+		TRQ_SEL_FMAP_9D_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9e_field_info[] = {
+	{"TRQ_SEL_FMAP_9E_RSVD_1",
+		TRQ_SEL_FMAP_9E_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9E_QID_MAX",
+		TRQ_SEL_FMAP_9E_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9E_QID_BASE",
+		TRQ_SEL_FMAP_9E_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_9f_field_info[] = {
+	{"TRQ_SEL_FMAP_9F_RSVD_1",
+		TRQ_SEL_FMAP_9F_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_9F_QID_MAX",
+		TRQ_SEL_FMAP_9F_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_9F_QID_BASE",
+		TRQ_SEL_FMAP_9F_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a0_field_info[] = {
+	{"TRQ_SEL_FMAP_A0_RSVD_1",
+		TRQ_SEL_FMAP_A0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A0_QID_MAX",
+		TRQ_SEL_FMAP_A0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A0_QID_BASE",
+		TRQ_SEL_FMAP_A0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a1_field_info[] = {
+	{"TRQ_SEL_FMAP_A1_RSVD_1",
+		TRQ_SEL_FMAP_A1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A1_QID_MAX",
+		TRQ_SEL_FMAP_A1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A1_QID_BASE",
+		TRQ_SEL_FMAP_A1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a2_field_info[] = {
+	{"TRQ_SEL_FMAP_A2_RSVD_1",
+		TRQ_SEL_FMAP_A2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A2_QID_MAX",
+		TRQ_SEL_FMAP_A2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A2_QID_BASE",
+		TRQ_SEL_FMAP_A2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a3_field_info[] = {
+	{"TRQ_SEL_FMAP_A3_RSVD_1",
+		TRQ_SEL_FMAP_A3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A3_QID_MAX",
+		TRQ_SEL_FMAP_A3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A3_QID_BASE",
+		TRQ_SEL_FMAP_A3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a4_field_info[] = {
+	{"TRQ_SEL_FMAP_A4_RSVD_1",
+		TRQ_SEL_FMAP_A4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A4_QID_MAX",
+		TRQ_SEL_FMAP_A4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A4_QID_BASE",
+		TRQ_SEL_FMAP_A4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a5_field_info[] = {
+	{"TRQ_SEL_FMAP_A5_RSVD_1",
+		TRQ_SEL_FMAP_A5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A5_QID_MAX",
+		TRQ_SEL_FMAP_A5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A5_QID_BASE",
+		TRQ_SEL_FMAP_A5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a6_field_info[] = {
+	{"TRQ_SEL_FMAP_A6_RSVD_1",
+		TRQ_SEL_FMAP_A6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A6_QID_MAX",
+		TRQ_SEL_FMAP_A6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A6_QID_BASE",
+		TRQ_SEL_FMAP_A6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a7_field_info[] = {
+	{"TRQ_SEL_FMAP_A7_RSVD_1",
+		TRQ_SEL_FMAP_A7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A7_QID_MAX",
+		TRQ_SEL_FMAP_A7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A7_QID_BASE",
+		TRQ_SEL_FMAP_A7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a8_field_info[] = {
+	{"TRQ_SEL_FMAP_A8_RSVD_1",
+		TRQ_SEL_FMAP_A8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A8_QID_MAX",
+		TRQ_SEL_FMAP_A8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A8_QID_BASE",
+		TRQ_SEL_FMAP_A8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_a9_field_info[] = {
+	{"TRQ_SEL_FMAP_A9_RSVD_1",
+		TRQ_SEL_FMAP_A9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_A9_QID_MAX",
+		TRQ_SEL_FMAP_A9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_A9_QID_BASE",
+		TRQ_SEL_FMAP_A9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_aa_field_info[] = {
+	{"TRQ_SEL_FMAP_AA_RSVD_1",
+		TRQ_SEL_FMAP_AA_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AA_QID_MAX",
+		TRQ_SEL_FMAP_AA_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AA_QID_BASE",
+		TRQ_SEL_FMAP_AA_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ab_field_info[] = {
+	{"TRQ_SEL_FMAP_AB_RSVD_1",
+		TRQ_SEL_FMAP_AB_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AB_QID_MAX",
+		TRQ_SEL_FMAP_AB_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AB_QID_BASE",
+		TRQ_SEL_FMAP_AB_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ac_field_info[] = {
+	{"TRQ_SEL_FMAP_AC_RSVD_1",
+		TRQ_SEL_FMAP_AC_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AC_QID_MAX",
+		TRQ_SEL_FMAP_AC_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AC_QID_BASE",
+		TRQ_SEL_FMAP_AC_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ad_field_info[] = {
+	{"TRQ_SEL_FMAP_AD_RSVD_1",
+		TRQ_SEL_FMAP_AD_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AD_QID_MAX",
+		TRQ_SEL_FMAP_AD_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AD_QID_BASE",
+		TRQ_SEL_FMAP_AD_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ae_field_info[] = {
+	{"TRQ_SEL_FMAP_AE_RSVD_1",
+		TRQ_SEL_FMAP_AE_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AE_QID_MAX",
+		TRQ_SEL_FMAP_AE_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AE_QID_BASE",
+		TRQ_SEL_FMAP_AE_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_af_field_info[] = {
+	{"TRQ_SEL_FMAP_AF_RSVD_1",
+		TRQ_SEL_FMAP_AF_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_AF_QID_MAX",
+		TRQ_SEL_FMAP_AF_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_AF_QID_BASE",
+		TRQ_SEL_FMAP_AF_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b0_field_info[] = {
+	{"TRQ_SEL_FMAP_B0_RSVD_1",
+		TRQ_SEL_FMAP_B0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B0_QID_MAX",
+		TRQ_SEL_FMAP_B0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B0_QID_BASE",
+		TRQ_SEL_FMAP_B0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b1_field_info[] = {
+	{"TRQ_SEL_FMAP_B1_RSVD_1",
+		TRQ_SEL_FMAP_B1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B1_QID_MAX",
+		TRQ_SEL_FMAP_B1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B1_QID_BASE",
+		TRQ_SEL_FMAP_B1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b2_field_info[] = {
+	{"TRQ_SEL_FMAP_B2_RSVD_1",
+		TRQ_SEL_FMAP_B2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B2_QID_MAX",
+		TRQ_SEL_FMAP_B2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B2_QID_BASE",
+		TRQ_SEL_FMAP_B2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b3_field_info[] = {
+	{"TRQ_SEL_FMAP_B3_RSVD_1",
+		TRQ_SEL_FMAP_B3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B3_QID_MAX",
+		TRQ_SEL_FMAP_B3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B3_QID_BASE",
+		TRQ_SEL_FMAP_B3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b4_field_info[] = {
+	{"TRQ_SEL_FMAP_B4_RSVD_1",
+		TRQ_SEL_FMAP_B4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B4_QID_MAX",
+		TRQ_SEL_FMAP_B4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B4_QID_BASE",
+		TRQ_SEL_FMAP_B4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b5_field_info[] = {
+	{"TRQ_SEL_FMAP_B5_RSVD_1",
+		TRQ_SEL_FMAP_B5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B5_QID_MAX",
+		TRQ_SEL_FMAP_B5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B5_QID_BASE",
+		TRQ_SEL_FMAP_B5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b6_field_info[] = {
+	{"TRQ_SEL_FMAP_B6_RSVD_1",
+		TRQ_SEL_FMAP_B6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B6_QID_MAX",
+		TRQ_SEL_FMAP_B6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B6_QID_BASE",
+		TRQ_SEL_FMAP_B6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b7_field_info[] = {
+	{"TRQ_SEL_FMAP_B7_RSVD_1",
+		TRQ_SEL_FMAP_B7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B7_QID_MAX",
+		TRQ_SEL_FMAP_B7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B7_QID_BASE",
+		TRQ_SEL_FMAP_B7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b8_field_info[] = {
+	{"TRQ_SEL_FMAP_B8_RSVD_1",
+		TRQ_SEL_FMAP_B8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B8_QID_MAX",
+		TRQ_SEL_FMAP_B8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B8_QID_BASE",
+		TRQ_SEL_FMAP_B8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_b9_field_info[] = {
+	{"TRQ_SEL_FMAP_B9_RSVD_1",
+		TRQ_SEL_FMAP_B9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_B9_QID_MAX",
+		TRQ_SEL_FMAP_B9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_B9_QID_BASE",
+		TRQ_SEL_FMAP_B9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ba_field_info[] = {
+	{"TRQ_SEL_FMAP_BA_RSVD_1",
+		TRQ_SEL_FMAP_BA_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BA_QID_MAX",
+		TRQ_SEL_FMAP_BA_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BA_QID_BASE",
+		TRQ_SEL_FMAP_BA_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_bb_field_info[] = {
+	{"TRQ_SEL_FMAP_BB_RSVD_1",
+		TRQ_SEL_FMAP_BB_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BB_QID_MAX",
+		TRQ_SEL_FMAP_BB_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BB_QID_BASE",
+		TRQ_SEL_FMAP_BB_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_bc_field_info[] = {
+	{"TRQ_SEL_FMAP_BC_RSVD_1",
+		TRQ_SEL_FMAP_BC_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BC_QID_MAX",
+		TRQ_SEL_FMAP_BC_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BC_QID_BASE",
+		TRQ_SEL_FMAP_BC_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_bd_field_info[] = {
+	{"TRQ_SEL_FMAP_BD_RSVD_1",
+		TRQ_SEL_FMAP_BD_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BD_QID_MAX",
+		TRQ_SEL_FMAP_BD_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BD_QID_BASE",
+		TRQ_SEL_FMAP_BD_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_be_field_info[] = {
+	{"TRQ_SEL_FMAP_BE_RSVD_1",
+		TRQ_SEL_FMAP_BE_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BE_QID_MAX",
+		TRQ_SEL_FMAP_BE_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BE_QID_BASE",
+		TRQ_SEL_FMAP_BE_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_bf_field_info[] = {
+	{"TRQ_SEL_FMAP_BF_RSVD_1",
+		TRQ_SEL_FMAP_BF_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_BF_QID_MAX",
+		TRQ_SEL_FMAP_BF_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_BF_QID_BASE",
+		TRQ_SEL_FMAP_BF_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c0_field_info[] = {
+	{"TRQ_SEL_FMAP_C0_RSVD_1",
+		TRQ_SEL_FMAP_C0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C0_QID_MAX",
+		TRQ_SEL_FMAP_C0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C0_QID_BASE",
+		TRQ_SEL_FMAP_C0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c1_field_info[] = {
+	{"TRQ_SEL_FMAP_C1_RSVD_1",
+		TRQ_SEL_FMAP_C1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C1_QID_MAX",
+		TRQ_SEL_FMAP_C1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C1_QID_BASE",
+		TRQ_SEL_FMAP_C1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c2_field_info[] = {
+	{"TRQ_SEL_FMAP_C2_RSVD_1",
+		TRQ_SEL_FMAP_C2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C2_QID_MAX",
+		TRQ_SEL_FMAP_C2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C2_QID_BASE",
+		TRQ_SEL_FMAP_C2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c3_field_info[] = {
+	{"TRQ_SEL_FMAP_C3_RSVD_1",
+		TRQ_SEL_FMAP_C3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C3_QID_MAX",
+		TRQ_SEL_FMAP_C3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C3_QID_BASE",
+		TRQ_SEL_FMAP_C3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c4_field_info[] = {
+	{"TRQ_SEL_FMAP_C4_RSVD_1",
+		TRQ_SEL_FMAP_C4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C4_QID_MAX",
+		TRQ_SEL_FMAP_C4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C4_QID_BASE",
+		TRQ_SEL_FMAP_C4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c5_field_info[] = {
+	{"TRQ_SEL_FMAP_C5_RSVD_1",
+		TRQ_SEL_FMAP_C5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C5_QID_MAX",
+		TRQ_SEL_FMAP_C5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C5_QID_BASE",
+		TRQ_SEL_FMAP_C5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c6_field_info[] = {
+	{"TRQ_SEL_FMAP_C6_RSVD_1",
+		TRQ_SEL_FMAP_C6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C6_QID_MAX",
+		TRQ_SEL_FMAP_C6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C6_QID_BASE",
+		TRQ_SEL_FMAP_C6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c7_field_info[] = {
+	{"TRQ_SEL_FMAP_C7_RSVD_1",
+		TRQ_SEL_FMAP_C7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C7_QID_MAX",
+		TRQ_SEL_FMAP_C7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C7_QID_BASE",
+		TRQ_SEL_FMAP_C7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c8_field_info[] = {
+	{"TRQ_SEL_FMAP_C8_RSVD_1",
+		TRQ_SEL_FMAP_C8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C8_QID_MAX",
+		TRQ_SEL_FMAP_C8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C8_QID_BASE",
+		TRQ_SEL_FMAP_C8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_c9_field_info[] = {
+	{"TRQ_SEL_FMAP_C9_RSVD_1",
+		TRQ_SEL_FMAP_C9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_C9_QID_MAX",
+		TRQ_SEL_FMAP_C9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_C9_QID_BASE",
+		TRQ_SEL_FMAP_C9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ca_field_info[] = {
+	{"TRQ_SEL_FMAP_CA_RSVD_1",
+		TRQ_SEL_FMAP_CA_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CA_QID_MAX",
+		TRQ_SEL_FMAP_CA_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CA_QID_BASE",
+		TRQ_SEL_FMAP_CA_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_cb_field_info[] = {
+	{"TRQ_SEL_FMAP_CB_RSVD_1",
+		TRQ_SEL_FMAP_CB_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CB_QID_MAX",
+		TRQ_SEL_FMAP_CB_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CB_QID_BASE",
+		TRQ_SEL_FMAP_CB_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_cc_field_info[] = {
+	{"TRQ_SEL_FMAP_CC_RSVD_1",
+		TRQ_SEL_FMAP_CC_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CC_QID_MAX",
+		TRQ_SEL_FMAP_CC_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CC_QID_BASE",
+		TRQ_SEL_FMAP_CC_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_cd_field_info[] = {
+	{"TRQ_SEL_FMAP_CD_RSVD_1",
+		TRQ_SEL_FMAP_CD_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CD_QID_MAX",
+		TRQ_SEL_FMAP_CD_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CD_QID_BASE",
+		TRQ_SEL_FMAP_CD_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ce_field_info[] = {
+	{"TRQ_SEL_FMAP_CE_RSVD_1",
+		TRQ_SEL_FMAP_CE_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CE_QID_MAX",
+		TRQ_SEL_FMAP_CE_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CE_QID_BASE",
+		TRQ_SEL_FMAP_CE_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_cf_field_info[] = {
+	{"TRQ_SEL_FMAP_CF_RSVD_1",
+		TRQ_SEL_FMAP_CF_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_CF_QID_MAX",
+		TRQ_SEL_FMAP_CF_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_CF_QID_BASE",
+		TRQ_SEL_FMAP_CF_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d0_field_info[] = {
+	{"TRQ_SEL_FMAP_D0_RSVD_1",
+		TRQ_SEL_FMAP_D0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D0_QID_MAX",
+		TRQ_SEL_FMAP_D0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D0_QID_BASE",
+		TRQ_SEL_FMAP_D0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d1_field_info[] = {
+	{"TRQ_SEL_FMAP_D1_RSVD_1",
+		TRQ_SEL_FMAP_D1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D1_QID_MAX",
+		TRQ_SEL_FMAP_D1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D1_QID_BASE",
+		TRQ_SEL_FMAP_D1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d2_field_info[] = {
+	{"TRQ_SEL_FMAP_D2_RSVD_1",
+		TRQ_SEL_FMAP_D2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D2_QID_MAX",
+		TRQ_SEL_FMAP_D2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D2_QID_BASE",
+		TRQ_SEL_FMAP_D2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d3_field_info[] = {
+	{"TRQ_SEL_FMAP_D3_RSVD_1",
+		TRQ_SEL_FMAP_D3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D3_QID_MAX",
+		TRQ_SEL_FMAP_D3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D3_QID_BASE",
+		TRQ_SEL_FMAP_D3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d4_field_info[] = {
+	{"TRQ_SEL_FMAP_D4_RSVD_1",
+		TRQ_SEL_FMAP_D4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D4_QID_MAX",
+		TRQ_SEL_FMAP_D4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D4_QID_BASE",
+		TRQ_SEL_FMAP_D4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d5_field_info[] = {
+	{"TRQ_SEL_FMAP_D5_RSVD_1",
+		TRQ_SEL_FMAP_D5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D5_QID_MAX",
+		TRQ_SEL_FMAP_D5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D5_QID_BASE",
+		TRQ_SEL_FMAP_D5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d6_field_info[] = {
+	{"TRQ_SEL_FMAP_D6_RSVD_1",
+		TRQ_SEL_FMAP_D6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D6_QID_MAX",
+		TRQ_SEL_FMAP_D6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D6_QID_BASE",
+		TRQ_SEL_FMAP_D6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d7_field_info[] = {
+	{"TRQ_SEL_FMAP_D7_RSVD_1",
+		TRQ_SEL_FMAP_D7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D7_QID_MAX",
+		TRQ_SEL_FMAP_D7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D7_QID_BASE",
+		TRQ_SEL_FMAP_D7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d8_field_info[] = {
+	{"TRQ_SEL_FMAP_D8_RSVD_1",
+		TRQ_SEL_FMAP_D8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D8_QID_MAX",
+		TRQ_SEL_FMAP_D8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D8_QID_BASE",
+		TRQ_SEL_FMAP_D8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_d9_field_info[] = {
+	{"TRQ_SEL_FMAP_D9_RSVD_1",
+		TRQ_SEL_FMAP_D9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_D9_QID_MAX",
+		TRQ_SEL_FMAP_D9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_D9_QID_BASE",
+		TRQ_SEL_FMAP_D9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_da_field_info[] = {
+	{"TRQ_SEL_FMAP_DA_RSVD_1",
+		TRQ_SEL_FMAP_DA_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DA_QID_MAX",
+		TRQ_SEL_FMAP_DA_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DA_QID_BASE",
+		TRQ_SEL_FMAP_DA_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_db_field_info[] = {
+	{"TRQ_SEL_FMAP_DB_RSVD_1",
+		TRQ_SEL_FMAP_DB_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DB_QID_MAX",
+		TRQ_SEL_FMAP_DB_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DB_QID_BASE",
+		TRQ_SEL_FMAP_DB_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_dc_field_info[] = {
+	{"TRQ_SEL_FMAP_DC_RSVD_1",
+		TRQ_SEL_FMAP_DC_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DC_QID_MAX",
+		TRQ_SEL_FMAP_DC_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DC_QID_BASE",
+		TRQ_SEL_FMAP_DC_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_dd_field_info[] = {
+	{"TRQ_SEL_FMAP_DD_RSVD_1",
+		TRQ_SEL_FMAP_DD_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DD_QID_MAX",
+		TRQ_SEL_FMAP_DD_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DD_QID_BASE",
+		TRQ_SEL_FMAP_DD_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_de_field_info[] = {
+	{"TRQ_SEL_FMAP_DE_RSVD_1",
+		TRQ_SEL_FMAP_DE_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DE_QID_MAX",
+		TRQ_SEL_FMAP_DE_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DE_QID_BASE",
+		TRQ_SEL_FMAP_DE_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_df_field_info[] = {
+	{"TRQ_SEL_FMAP_DF_RSVD_1",
+		TRQ_SEL_FMAP_DF_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_DF_QID_MAX",
+		TRQ_SEL_FMAP_DF_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_DF_QID_BASE",
+		TRQ_SEL_FMAP_DF_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e0_field_info[] = {
+	{"TRQ_SEL_FMAP_E0_RSVD_1",
+		TRQ_SEL_FMAP_E0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E0_QID_MAX",
+		TRQ_SEL_FMAP_E0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E0_QID_BASE",
+		TRQ_SEL_FMAP_E0_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e1_field_info[] = {
+	{"TRQ_SEL_FMAP_E1_RSVD_1",
+		TRQ_SEL_FMAP_E1_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E1_QID_MAX",
+		TRQ_SEL_FMAP_E1_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E1_QID_BASE",
+		TRQ_SEL_FMAP_E1_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e2_field_info[] = {
+	{"TRQ_SEL_FMAP_E2_RSVD_1",
+		TRQ_SEL_FMAP_E2_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E2_QID_MAX",
+		TRQ_SEL_FMAP_E2_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E2_QID_BASE",
+		TRQ_SEL_FMAP_E2_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e3_field_info[] = {
+	{"TRQ_SEL_FMAP_E3_RSVD_1",
+		TRQ_SEL_FMAP_E3_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E3_QID_MAX",
+		TRQ_SEL_FMAP_E3_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E3_QID_BASE",
+		TRQ_SEL_FMAP_E3_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e4_field_info[] = {
+	{"TRQ_SEL_FMAP_E4_RSVD_1",
+		TRQ_SEL_FMAP_E4_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E4_QID_MAX",
+		TRQ_SEL_FMAP_E4_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E4_QID_BASE",
+		TRQ_SEL_FMAP_E4_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e5_field_info[] = {
+	{"TRQ_SEL_FMAP_E5_RSVD_1",
+		TRQ_SEL_FMAP_E5_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E5_QID_MAX",
+		TRQ_SEL_FMAP_E5_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E5_QID_BASE",
+		TRQ_SEL_FMAP_E5_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e6_field_info[] = {
+	{"TRQ_SEL_FMAP_E6_RSVD_1",
+		TRQ_SEL_FMAP_E6_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E6_QID_MAX",
+		TRQ_SEL_FMAP_E6_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E6_QID_BASE",
+		TRQ_SEL_FMAP_E6_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e7_field_info[] = {
+	{"TRQ_SEL_FMAP_E7_RSVD_1",
+		TRQ_SEL_FMAP_E7_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E7_QID_MAX",
+		TRQ_SEL_FMAP_E7_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E7_QID_BASE",
+		TRQ_SEL_FMAP_E7_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e8_field_info[] = {
+	{"TRQ_SEL_FMAP_E8_RSVD_1",
+		TRQ_SEL_FMAP_E8_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E8_QID_MAX",
+		TRQ_SEL_FMAP_E8_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E8_QID_BASE",
+		TRQ_SEL_FMAP_E8_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_e9_field_info[] = {
+	{"TRQ_SEL_FMAP_E9_RSVD_1",
+		TRQ_SEL_FMAP_E9_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_E9_QID_MAX",
+		TRQ_SEL_FMAP_E9_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_E9_QID_BASE",
+		TRQ_SEL_FMAP_E9_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ea_field_info[] = {
+	{"TRQ_SEL_FMAP_EA_RSVD_1",
+		TRQ_SEL_FMAP_EA_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_EA_QID_MAX",
+		TRQ_SEL_FMAP_EA_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_EA_QID_BASE",
+		TRQ_SEL_FMAP_EA_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_eb_field_info[] = {
+	{"TRQ_SEL_FMAP_EB_RSVD_1",
+		TRQ_SEL_FMAP_EB_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_EB_QID_MAX",
+		TRQ_SEL_FMAP_EB_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_EB_QID_BASE",
+		TRQ_SEL_FMAP_EB_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ec_field_info[] = {
+	{"TRQ_SEL_FMAP_EC_RSVD_1",
+		TRQ_SEL_FMAP_EC_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_EC_QID_MAX",
+		TRQ_SEL_FMAP_EC_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_EC_QID_BASE",
+		TRQ_SEL_FMAP_EC_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ed_field_info[] = {
+	{"TRQ_SEL_FMAP_ED_RSVD_1",
+		TRQ_SEL_FMAP_ED_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_ED_QID_MAX",
+		TRQ_SEL_FMAP_ED_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_ED_QID_BASE",
+		TRQ_SEL_FMAP_ED_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ee_field_info[] = {
+	{"TRQ_SEL_FMAP_EE_RSVD_1",
+		TRQ_SEL_FMAP_EE_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_EE_QID_MAX",
+		TRQ_SEL_FMAP_EE_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_EE_QID_BASE",
+		TRQ_SEL_FMAP_EE_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_ef_field_info[] = {
+	{"TRQ_SEL_FMAP_EF_RSVD_1",
+		TRQ_SEL_FMAP_EF_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_EF_QID_MAX",
+		TRQ_SEL_FMAP_EF_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_EF_QID_BASE",
+		TRQ_SEL_FMAP_EF_QID_BASE_MASK},
+};
+
+
+static struct regfield_info
+	trq_sel_fmap_f0_field_info[] = {
+	{"TRQ_SEL_FMAP_F0_RSVD_1",
+		TRQ_SEL_FMAP_F0_RSVD_1_MASK},
+	{"TRQ_SEL_FMAP_F0_QID_MAX",
+		TRQ_SEL_FMAP_F0_QID_MAX_MASK},
+	{"TRQ_SEL_FMAP_F0_QID_BASE",
+		TRQ_SEL_FMAP_F0_QID_BASE_MASK},
+};
+
+
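+/*
+ * Indirect queue-context access: context data and mask words plus the
+ * command register (QID, opcode, set, busy).
+ */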
+static struct regfield_info
+	ind_ctxt_data_3_field_info[] = {
+	{"IND_CTXT_DATA_3_DATA",
+		IND_CTXT_DATA_3_DATA_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_data_2_field_info[] = {
+	{"IND_CTXT_DATA_2_DATA",
+		IND_CTXT_DATA_2_DATA_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_data_1_field_info[] = {
+	{"IND_CTXT_DATA_1_DATA",
+		IND_CTXT_DATA_1_DATA_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_data_0_field_info[] = {
+	{"IND_CTXT_DATA_0_DATA",
+		IND_CTXT_DATA_0_DATA_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt3_field_info[] = {
+	{"IND_CTXT3",
+		IND_CTXT3_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt2_field_info[] = {
+	{"IND_CTXT2",
+		IND_CTXT2_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt1_field_info[] = {
+	{"IND_CTXT1",
+		IND_CTXT1_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt0_field_info[] = {
+	{"IND_CTXT0",
+		IND_CTXT0_MASK},
+};
+
+
+static struct regfield_info
+	ind_ctxt_cmd_field_info[] = {
+	{"IND_CTXT_CMD_RSVD_1",
+		IND_CTXT_CMD_RSVD_1_MASK},
+	{"IND_CTXT_CMD_QID",
+		IND_CTXT_CMD_QID_MASK},
+	{"IND_CTXT_CMD_OP",
+		IND_CTXT_CMD_OP_MASK},
+	{"IND_CTXT_CMD_SET",
+		IND_CTXT_CMD_SET_MASK},
+	{"IND_CTXT_CMD_BUSY",
+		IND_CTXT_CMD_BUSY_MASK},
+};
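+
+
+/*
+ * Illustrative sketch, not part of the generated tables: one way a dump
+ * routine can decode a raw 32-bit register value against a name/mask
+ * table from this file. It assumes struct regfield_info pairs a
+ * field_name string with a field_mask value, as the initializers above
+ * suggest; the helper name, the printf() output format and the
+ * QDMA_REG_DUMP_EXAMPLE guard are illustrative only.
+ */
+#ifdef QDMA_REG_DUMP_EXAMPLE
+#include <stdio.h>
+
+static void
+dump_regfields_example(const struct regfield_info *fields,
+		unsigned int num_fields, uint32_t reg_val)
+{
+	unsigned int i;
+
+	for (i = 0; i < num_fields; i++) {
+		uint32_t mask = fields[i].field_mask;
+
+		if (mask == 0)
+			continue;
+		/* Right-justify the masked bits before printing (GCC builtin). */
+		printf("%-48s: 0x%x\n", fields[i].field_name,
+			(reg_val & mask) >> __builtin_ctz(mask));
+	}
+}
+
+/*
+ * Usage (illustrative): decode the indirect context command register.
+ *
+ * dump_regfields_example(ind_ctxt_cmd_field_info,
+ *		sizeof(ind_ctxt_cmd_field_info) /
+ *		sizeof(ind_ctxt_cmd_field_info[0]), reg_val);
+ */
+#endif /* QDMA_REG_DUMP_EXAMPLE */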
+
+
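+/* C2H timer count registers used by the completion timer trigger. */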
+static struct regfield_info
+	c2h_timer_cnt_1_field_info[] = {
+	{"C2H_TIMER_CNT_1_RSVD_1",
+		C2H_TIMER_CNT_1_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_1",
+		C2H_TIMER_CNT_1_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_2_field_info[] = {
+	{"C2H_TIMER_CNT_2_RSVD_1",
+		C2H_TIMER_CNT_2_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_2",
+		C2H_TIMER_CNT_2_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_3_field_info[] = {
+	{"C2H_TIMER_CNT_3_RSVD_1",
+		C2H_TIMER_CNT_3_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_3",
+		C2H_TIMER_CNT_3_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_4_field_info[] = {
+	{"C2H_TIMER_CNT_4_RSVD_1",
+		C2H_TIMER_CNT_4_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_4",
+		C2H_TIMER_CNT_4_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_5_field_info[] = {
+	{"C2H_TIMER_CNT_5_RSVD_1",
+		C2H_TIMER_CNT_5_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_5",
+		C2H_TIMER_CNT_5_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_6_field_info[] = {
+	{"C2H_TIMER_CNT_6_RSVD_1",
+		C2H_TIMER_CNT_6_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_6",
+		C2H_TIMER_CNT_6_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_7_field_info[] = {
+	{"C2H_TIMER_CNT_7_RSVD_1",
+		C2H_TIMER_CNT_7_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_7",
+		C2H_TIMER_CNT_7_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_8_field_info[] = {
+	{"C2H_TIMER_CNT_8_RSVD_1",
+		C2H_TIMER_CNT_8_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_8",
+		C2H_TIMER_CNT_8_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_9_field_info[] = {
+	{"C2H_TIMER_CNT_9_RSVD_1",
+		C2H_TIMER_CNT_9_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_9",
+		C2H_TIMER_CNT_9_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_a_field_info[] = {
+	{"C2H_TIMER_CNT_A_RSVD_1",
+		C2H_TIMER_CNT_A_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_A",
+		C2H_TIMER_CNT_A_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_b_field_info[] = {
+	{"C2H_TIMER_CNT_B_RSVD_1",
+		C2H_TIMER_CNT_B_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_B",
+		C2H_TIMER_CNT_B_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_c_field_info[] = {
+	{"C2H_TIMER_CNT_C_RSVD_1",
+		C2H_TIMER_CNT_C_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_C",
+		C2H_TIMER_CNT_C_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_d_field_info[] = {
+	{"C2H_TIMER_CNT_D_RSVD_1",
+		C2H_TIMER_CNT_D_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_D",
+		C2H_TIMER_CNT_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_e_field_info[] = {
+	{"C2H_TIMER_CNT_E_RSVD_1",
+		C2H_TIMER_CNT_E_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_E",
+		C2H_TIMER_CNT_E_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_f_field_info[] = {
+	{"C2H_TIMER_CNT_F_RSVD_1",
+		C2H_TIMER_CNT_F_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_F",
+		C2H_TIMER_CNT_F_MASK},
+};
+
+
+static struct regfield_info
+	c2h_timer_cnt_10_field_info[] = {
+	{"C2H_TIMER_CNT_10_RSVD_1",
+		C2H_TIMER_CNT_10_RSVD_1_MASK},
+	{"C2H_TIMER_CNT_10",
+		C2H_TIMER_CNT_10_MASK},
+};
+
+
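+/* C2H completion counter threshold registers. */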
+static struct regfield_info
+	c2h_cnt_th_1_field_info[] = {
+	{"C2H_CNT_TH_1_RSVD_1",
+		C2H_CNT_TH_1_RSVD_1_MASK},
+	{"C2H_CNT_TH_1_THESHOLD_CNT",
+		C2H_CNT_TH_1_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_2_field_info[] = {
+	{"C2H_CNT_TH_2_RSVD_1",
+		C2H_CNT_TH_2_RSVD_1_MASK},
+	{"C2H_CNT_TH_2_THESHOLD_CNT",
+		C2H_CNT_TH_2_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_3_field_info[] = {
+	{"C2H_CNT_TH_3_RSVD_1",
+		C2H_CNT_TH_3_RSVD_1_MASK},
+	{"C2H_CNT_TH_3_THESHOLD_CNT",
+		C2H_CNT_TH_3_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_4_field_info[] = {
+	{"C2H_CNT_TH_4_RSVD_1",
+		C2H_CNT_TH_4_RSVD_1_MASK},
+	{"C2H_CNT_TH_4_THESHOLD_CNT",
+		C2H_CNT_TH_4_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_5_field_info[] = {
+	{"C2H_CNT_TH_5_RSVD_1",
+		C2H_CNT_TH_5_RSVD_1_MASK},
+	{"C2H_CNT_TH_5_THESHOLD_CNT",
+		C2H_CNT_TH_5_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_6_field_info[] = {
+	{"C2H_CNT_TH_6_RSVD_1",
+		C2H_CNT_TH_6_RSVD_1_MASK},
+	{"C2H_CNT_TH_6_THESHOLD_CNT",
+		C2H_CNT_TH_6_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_7_field_info[] = {
+	{"C2H_CNT_TH_7_RSVD_1",
+		C2H_CNT_TH_7_RSVD_1_MASK},
+	{"C2H_CNT_TH_7_THESHOLD_CNT",
+		C2H_CNT_TH_7_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_8_field_info[] = {
+	{"C2H_CNT_TH_8_RSVD_1",
+		C2H_CNT_TH_8_RSVD_1_MASK},
+	{"C2H_CNT_TH_8_THESHOLD_CNT",
+		C2H_CNT_TH_8_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_9_field_info[] = {
+	{"C2H_CNT_TH_9_RSVD_1",
+		C2H_CNT_TH_9_RSVD_1_MASK},
+	{"C2H_CNT_TH_9_THESHOLD_CNT",
+		C2H_CNT_TH_9_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_a_field_info[] = {
+	{"C2H_CNT_TH_A_RSVD_1",
+		C2H_CNT_TH_A_RSVD_1_MASK},
+	{"C2H_CNT_TH_A_THESHOLD_CNT",
+		C2H_CNT_TH_A_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_b_field_info[] = {
+	{"C2H_CNT_TH_B_RSVD_1",
+		C2H_CNT_TH_B_RSVD_1_MASK},
+	{"C2H_CNT_TH_B_THESHOLD_CNT",
+		C2H_CNT_TH_B_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_c_field_info[] = {
+	{"C2H_CNT_TH_C_RSVD_1",
+		C2H_CNT_TH_C_RSVD_1_MASK},
+	{"C2H_CNT_TH_C_THESHOLD_CNT",
+		C2H_CNT_TH_C_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_d_field_info[] = {
+	{"C2H_CNT_TH_D_RSVD_1",
+		C2H_CNT_TH_D_RSVD_1_MASK},
+	{"C2H_CNT_TH_D_THESHOLD_CNT",
+		C2H_CNT_TH_D_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_e_field_info[] = {
+	{"C2H_CNT_TH_E_RSVD_1",
+		C2H_CNT_TH_E_RSVD_1_MASK},
+	{"C2H_CNT_TH_E_THESHOLD_CNT",
+		C2H_CNT_TH_E_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_f_field_info[] = {
+	{"C2H_CNT_TH_F_RSVD_1",
+		C2H_CNT_TH_F_RSVD_1_MASK},
+	{"C2H_CNT_TH_F_THESHOLD_CNT",
+		C2H_CNT_TH_F_THESHOLD_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_cnt_th_10_field_info[] = {
+	{"C2H_CNT_TH_10_RSVD_1",
+		C2H_CNT_TH_10_RSVD_1_MASK},
+	{"C2H_CNT_TH_10_THESHOLD_CNT",
+		C2H_CNT_TH_10_THESHOLD_CNT_MASK},
+};
+
+
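+/*
+ * QID-to-vector map registers: per-queue interrupt vector and
+ * coalescing enables for the H2C and C2H directions.
+ */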
+static struct regfield_info
+	c2h_qid2vec_map_qid_field_info[] = {
+	{"C2H_QID2VEC_MAP_QID_RSVD_1",
+		C2H_QID2VEC_MAP_QID_RSVD_1_MASK},
+	{"C2H_QID2VEC_MAP_QID_QID",
+		C2H_QID2VEC_MAP_QID_QID_MASK},
+};
+
+
+static struct regfield_info
+	c2h_qid2vec_map_field_info[] = {
+	{"C2H_QID2VEC_MAP_RSVD_1",
+		C2H_QID2VEC_MAP_RSVD_1_MASK},
+	{"C2H_QID2VEC_MAP_H2C_EN_COAL",
+		C2H_QID2VEC_MAP_H2C_EN_COAL_MASK},
+	{"C2H_QID2VEC_MAP_H2C_VECTOR",
+		C2H_QID2VEC_MAP_H2C_VECTOR_MASK},
+	{"C2H_QID2VEC_MAP_C2H_EN_COAL",
+		C2H_QID2VEC_MAP_C2H_EN_COAL_MASK},
+	{"C2H_QID2VEC_MAP_C2H_VECTOR",
+		C2H_QID2VEC_MAP_C2H_VECTOR_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_s_axis_c2h_accepted_field_info[] = {
+	{"C2H_STAT_S_AXIS_C2H_ACCEPTED",
+		C2H_STAT_S_AXIS_C2H_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_s_axis_wrb_accepted_field_info[] = {
+	{"C2H_STAT_S_AXIS_WRB_ACCEPTED",
+		C2H_STAT_S_AXIS_WRB_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_pkt_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_PKT_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_PKT_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_axis_pkg_cmp_field_info[] = {
+	{"C2H_STAT_AXIS_PKG_CMP",
+		C2H_STAT_AXIS_PKG_CMP_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_cmp_field_info[] = {
+	{"C2H_STAT_DESC_RSP_CMP_D",
+		C2H_STAT_DESC_RSP_CMP_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wrq_out_field_info[] = {
+	{"C2H_STAT_WRQ_OUT",
+		C2H_STAT_WRQ_OUT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wpl_ren_accepted_field_info[] = {
+	{"C2H_STAT_WPL_REN_ACCEPTED",
+		C2H_STAT_WPL_REN_ACCEPTED_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_total_wrq_len_field_info[] = {
+	{"C2H_STAT_TOTAL_WRQ_LEN",
+		C2H_STAT_TOTAL_WRQ_LEN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_total_wpl_len_field_info[] = {
+	{"C2H_STAT_TOTAL_WPL_LEN",
+		C2H_STAT_TOTAL_WPL_LEN_MASK},
+};
+
+
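+/* C2H buffer size registers, one per buffer-size index. */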
+static struct regfield_info
+	c2h_buf_sz_0_field_info[] = {
+	{"C2H_BUF_SZ_0_SIZE",
+		C2H_BUF_SZ_0_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_1_field_info[] = {
+	{"C2H_BUF_SZ_1_SIZE",
+		C2H_BUF_SZ_1_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_2_field_info[] = {
+	{"C2H_BUF_SZ_2_SIZE",
+		C2H_BUF_SZ_2_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_3_field_info[] = {
+	{"C2H_BUF_SZ_3_SIZE",
+		C2H_BUF_SZ_3_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_4_field_info[] = {
+	{"C2H_BUF_SZ_4_SIZE",
+		C2H_BUF_SZ_4_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_5_field_info[] = {
+	{"C2H_BUF_SZ_5_SIZE",
+		C2H_BUF_SZ_5_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_6_field_info[] = {
+	{"C2H_BUF_SZ_6_SIZE",
+		C2H_BUF_SZ_6_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_7_field_info[] = {
+	{"C2H_BUF_SZ_7_SIZE",
+		C2H_BUF_SZ_7_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_8_field_info[] = {
+	{"C2H_BUF_SZ_8_SIZE",
+		C2H_BUF_SZ_8_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_9_field_info[] = {
+	{"C2H_BUF_SZ_9_SIZE",
+		C2H_BUF_SZ_9_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_10_field_info[] = {
+	{"C2H_BUF_SZ_10_SIZE",
+		C2H_BUF_SZ_10_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_11_field_info[] = {
+	{"C2H_BUF_SZ_11_SIZE",
+		C2H_BUF_SZ_11_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_12_field_info[] = {
+	{"C2H_BUF_SZ_12_SIZE",
+		C2H_BUF_SZ_12_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_13_field_info[] = {
+	{"C2H_BUF_SZ_13_SIZE",
+		C2H_BUF_SZ_13_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_14_field_info[] = {
+	{"C2H_BUF_SZ_14_SIZE",
+		C2H_BUF_SZ_14_SIZE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_buf_sz_15_field_info[] = {
+	{"C2H_BUF_SZ_15_SIZE",
+		C2H_BUF_SZ_15_SIZE_MASK},
+};
+
+
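+/*
+ * C2H error reporting: status, enable masks and fatal error
+ * status/enable registers.
+ */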
+static struct regfield_info
+	c2h_err_stat_field_info[] = {
+	{"C2H_ERR_STAT_RSVD_1",
+		C2H_ERR_STAT_RSVD_1_MASK},
+	{"C2H_ERR_STAT_WRB_PRTY_ERR",
+		C2H_ERR_STAT_WRB_PRTY_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_CIDX_ERR",
+		C2H_ERR_STAT_WRB_CIDX_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_QFULL_ERR",
+		C2H_ERR_STAT_WRB_QFULL_ERR_MASK},
+	{"C2H_ERR_STAT_WRB_INV_Q_ERR",
+		C2H_ERR_STAT_WRB_INV_Q_ERR_MASK},
+	{"C2H_ERR_STAT_PORT_ID_BYP_IN_MISMATCH",
+		C2H_ERR_STAT_PORT_ID_BYP_IN_MISMATCH_MASK},
+	{"C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH",
+		C2H_ERR_STAT_PORT_ID_CTXT_MISMATCH_MASK},
+	{"C2H_ERR_STAT_ERR_DESC_CNT",
+		C2H_ERR_STAT_ERR_DESC_CNT_MASK},
+	{"C2H_ERR_STAT_RSVD_2",
+		C2H_ERR_STAT_RSVD_2_MASK},
+	{"C2H_ERR_STAT_MSI_INT_FAIL",
+		C2H_ERR_STAT_MSI_INT_FAIL_MASK},
+	{"C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR",
+		C2H_ERR_STAT_ENG_WPL_DATA_PAR_ERR_MASK},
+	{"C2H_ERR_STAT_RSVD_3",
+		C2H_ERR_STAT_RSVD_3_MASK},
+	{"C2H_ERR_STAT_DESC_RSP_ERR",
+		C2H_ERR_STAT_DESC_RSP_ERR_MASK},
+	{"C2H_ERR_STAT_QID_MISMATCH",
+		C2H_ERR_STAT_QID_MISMATCH_MASK},
+	{"C2H_ERR_STAT_RSVD_4",
+		C2H_ERR_STAT_RSVD_4_MASK},
+	{"C2H_ERR_STAT_LEN_MISMATCH",
+		C2H_ERR_STAT_LEN_MISMATCH_MASK},
+	{"C2H_ERR_STAT_MTY_MISMATCH",
+		C2H_ERR_STAT_MTY_MISMATCH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_err_mask_field_info[] = {
+	{"C2H_ERR_EN",
+		C2H_ERR_EN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_stat_field_info[] = {
+	{"C2H_FATAL_ERR_STAT_RSVD_1",
+		C2H_FATAL_ERR_STAT_RSVD_1_MASK},
+	{"C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR",
+		C2H_FATAL_ERR_STAT_WPL_DATA_PAR_ERR_MASK},
+	{"C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PLD_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_QID_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_TUSER_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_TUSER_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_WRB_COAL_DATA_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_INT_QID2VEC_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_INT_QID2VEC_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_INT_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_DESC_REQ_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PFCH_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_WRB_CTXT_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_PFCH_LL_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE",
+		C2H_FATAL_ERR_STAT_TIMER_FIFO_RAM_RDBE_MASK},
+	{"C2H_FATAL_ERR_STAT_QID_MISMATCH",
+		C2H_FATAL_ERR_STAT_QID_MISMATCH_MASK},
+	{"C2H_FATAL_ERR_STAT_RSVD_2",
+		C2H_FATAL_ERR_STAT_RSVD_2_MASK},
+	{"C2H_FATAL_ERR_STAT_LEN_MISMATCH",
+		C2H_FATAL_ERR_STAT_LEN_MISMATCH_MASK},
+	{"C2H_FATAL_ERR_STAT_MTY_MISMATCH",
+		C2H_FATAL_ERR_STAT_MTY_MISMATCH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_mask_field_info[] = {
+	{"C2H_FATAL_ERR_C2HEN",
+		C2H_FATAL_ERR_C2HEN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_fatal_err_enable_field_info[] = {
+	{"C2H_FATAL_ERR_ENABLE_RSVD_1",
+		C2H_FATAL_ERR_ENABLE_RSVD_1_MASK},
+	{"C2H_FATAL_ERR_ENABLE_WPL_PAR_INV",
+		C2H_FATAL_ERR_ENABLE_WPL_PAR_INV_MASK},
+	{"C2H_FATAL_ERR_ENABLE_WRQ_DIS",
+		C2H_FATAL_ERR_ENABLE_WRQ_DIS_MASK},
+};
+
+
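+/* Global error interrupt register (vector, function, coalescing). */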
+static struct regfield_info
+	glbl_err_int_field_info[] = {
+	{"GLBL_ERR_INT_RSVD_1",
+		GLBL_ERR_INT_RSVD_1_MASK},
+	{"GLBL_ERR_INT_ARM",
+		GLBL_ERR_INT_ARM_MASK},
+	{"GLBL_ERR_INT_EN_COAL",
+		GLBL_ERR_INT_EN_COAL_MASK},
+	{"GLBL_ERR_INT_VEC",
+		GLBL_ERR_INT_VEC_MASK},
+	{"GLBL_ERR_INT_FUNC",
+		GLBL_ERR_INT_FUNC_MASK},
+};
+
+
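+/* C2H prefetch configuration and engine statistics/debug registers. */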
+static struct regfield_info
+	c2h_pfch_cfg_field_info[] = {
+	{"C2H_PFCH_CFG_EVT_QCNT_TH",
+		C2H_PFCH_CFG_EVT_QCNT_TH_MASK},
+	{"C2H_PFCH_CFG_QCNT",
+		C2H_PFCH_CFG_QCNT_MASK},
+	{"C2H_PFCH_CFG_NUM",
+		C2H_PFCH_CFG_NUM_MASK},
+	{"C2H_PFCH_CFG_FL_TH",
+		C2H_PFCH_CFG_FL_TH_MASK},
+};
+
+
+static struct regfield_info
+	c2h_int_timer_tick_field_info[] = {
+	{"C2H_INT_TIMER_TICK",
+		C2H_INT_TIMER_TICK_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_drop_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_DROP_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_DROP_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_rsp_err_accepted_field_info[] = {
+	{"C2H_STAT_DESC_RSP_ERR_ACCEPTED_D",
+		C2H_STAT_DESC_RSP_ERR_ACCEPTED_D_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_desc_req_field_info[] = {
+	{"C2H_STAT_DESC_REQ",
+		C2H_STAT_DESC_REQ_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_0_field_info[] = {
+	{"C2H_STAT_DMA_ENG_0_RSVD_1",
+		C2H_STAT_DMA_ENG_0_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_0_WRB_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_0_QID_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_0_QID_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_0_PLD_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_0_PLD_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRQ_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_0_WRQ_FIFO_OUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_0_WRB_SM_CS",
+		C2H_STAT_DMA_ENG_0_WRB_SM_CS_MASK},
+	{"C2H_STAT_DMA_ENG_0_MAIN_SM_CS",
+		C2H_STAT_DMA_ENG_0_MAIN_SM_CS_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_1_field_info[] = {
+	{"C2H_STAT_DMA_ENG_1_RSVD_1",
+		C2H_STAT_DMA_ENG_1_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_1_DESC_RSP_LAST",
+		C2H_STAT_DMA_ENG_1_DESC_RSP_LAST_MASK},
+	{"C2H_STAT_DMA_ENG_1_PLD_FIFO_IN_CNT",
+		C2H_STAT_DMA_ENG_1_PLD_FIFO_IN_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_1_PLD_FIFO_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_1_PLD_FIFO_OUTPUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_1_QID_FIFO_IN_CNT",
+		C2H_STAT_DMA_ENG_1_QID_FIFO_IN_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_2_field_info[] = {
+	{"C2H_STAT_DMA_ENG_2_RSVD_1",
+		C2H_STAT_DMA_ENG_2_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_2_WRB_FIFO_IN_CNT",
+		C2H_STAT_DMA_ENG_2_WRB_FIFO_IN_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_2_WRB_FIFO_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_2_WRB_FIFO_OUTPUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_2_QID_FIFO_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_2_QID_FIFO_OUTPUT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_3_field_info[] = {
+	{"C2H_STAT_DMA_ENG_3_RSVD_1",
+		C2H_STAT_DMA_ENG_3_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_3_ADDR_4K_SPLIT_CNT",
+		C2H_STAT_DMA_ENG_3_ADDR_4K_SPLIT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_FIFO_IN_CNT",
+		C2H_STAT_DMA_ENG_3_WRQ_FIFO_IN_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_3_WRQ_FIFO_OUTPUT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_err_ctxt_field_info[] = {
+	{"C2H_PFCH_ERR_CTXT_RSVD_1",
+		C2H_PFCH_ERR_CTXT_RSVD_1_MASK},
+	{"C2H_PFCH_ERR_CTXT_ERR_STAT",
+		C2H_PFCH_ERR_CTXT_ERR_STAT_MASK},
+	{"C2H_PFCH_ERR_CTXT_CMD_WR",
+		C2H_PFCH_ERR_CTXT_CMD_WR_MASK},
+	{"C2H_PFCH_ERR_CTXT_QID",
+		C2H_PFCH_ERR_CTXT_QID_MASK},
+	{"C2H_PFCH_ERR_CTXT_DONE",
+		C2H_PFCH_ERR_CTXT_DONE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_first_err_qid_field_info[] = {
+	{"C2H_FIRST_ERR_QID_RSVD_1",
+		C2H_FIRST_ERR_QID_RSVD_1_MASK},
+	{"C2H_FIRST_ERR_QID_ERR_STAT",
+		C2H_FIRST_ERR_QID_ERR_STAT_MASK},
+	{"C2H_FIRST_ERR_QID_CMD_WR",
+		C2H_FIRST_ERR_QID_CMD_WR_MASK},
+	{"C2H_FIRST_ERR_QID_QID",
+		C2H_FIRST_ERR_QID_QID_MASK},
+};
+
+
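+/* Writeback and descriptor credit statistics counters. */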
+static struct regfield_info
+	stat_num_wrb_in_field_info[] = {
+	{"STAT_NUM_WRB_IN_RSVD_1",
+		STAT_NUM_WRB_IN_RSVD_1_MASK},
+	{"STAT_NUM_WRB_IN_WRB_CNT",
+		STAT_NUM_WRB_IN_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_wrb_out_field_info[] = {
+	{"STAT_NUM_WRB_OUT_RSVD_1",
+		STAT_NUM_WRB_OUT_RSVD_1_MASK},
+	{"STAT_NUM_WRB_OUT_WRB_CNT",
+		STAT_NUM_WRB_OUT_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_wrb_drp_field_info[] = {
+	{"STAT_NUM_WRB_DRP_RSVD_1",
+		STAT_NUM_WRB_DRP_RSVD_1_MASK},
+	{"STAT_NUM_WRB_DRP_WRB_CNT",
+		STAT_NUM_WRB_DRP_WRB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_stat_desc_out_field_info[] = {
+	{"STAT_NUM_STAT_DESC_OUT_RSVD_1",
+		STAT_NUM_STAT_DESC_OUT_RSVD_1_MASK},
+	{"STAT_NUM_STAT_DESC_OUT_CNT",
+		STAT_NUM_STAT_DESC_OUT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_dsc_crdt_sent_field_info[] = {
+	{"STAT_NUM_DSC_CRDT_SENT_RSVD_1",
+		STAT_NUM_DSC_CRDT_SENT_RSVD_1_MASK},
+	{"STAT_NUM_DSC_CRDT_SENT_CNT",
+		STAT_NUM_DSC_CRDT_SENT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_fch_dsc_rcvd_field_info[] = {
+	{"STAT_NUM_FCH_DSC_RCVD_RSVD_1",
+		STAT_NUM_FCH_DSC_RCVD_RSVD_1_MASK},
+	{"STAT_NUM_FCH_DSC_RCVD_DSC_CNT",
+		STAT_NUM_FCH_DSC_RCVD_DSC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	stat_num_byp_dsc_rcvd_field_info[] = {
+	{"STAT_NUM_BYP_DSC_RCVD_RSVD_1",
+		STAT_NUM_BYP_DSC_RCVD_RSVD_1_MASK},
+	{"STAT_NUM_BYP_DSC_RCVD_DSC_CNT",
+		STAT_NUM_BYP_DSC_RCVD_DSC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_wrb_coal_cfg_field_info[] = {
+	{"C2H_WRB_COAL_CFG_MAX_BUF_SZ",
+		C2H_WRB_COAL_CFG_MAX_BUF_SZ_MASK},
+	{"C2H_WRB_COAL_CFG_TICK_VAL",
+		C2H_WRB_COAL_CFG_TICK_VAL_MASK},
+	{"C2H_WRB_COAL_CFG_TICK_CNT",
+		C2H_WRB_COAL_CFG_TICK_CNT_MASK},
+	{"C2H_WRB_COAL_CFG_SET_GLB_FLUSH",
+		C2H_WRB_COAL_CFG_SET_GLB_FLUSH_MASK},
+	{"C2H_WRB_COAL_CFG_DONE_GLB_FLUSH",
+		C2H_WRB_COAL_CFG_DONE_GLB_FLUSH_MASK},
+};
+
+
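+/* Interrupt request and MSI-X ack/fail counters. */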
+static struct regfield_info
+	c2h_intr_h2c_req_field_info[] = {
+	{"C2H_INTR_H2C_REQ_RSVD_1",
+		C2H_INTR_H2C_REQ_RSVD_1_MASK},
+	{"C2H_INTR_H2C_REQ_CNT",
+		C2H_INTR_H2C_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_mm_req_field_info[] = {
+	{"C2H_INTR_C2H_MM_REQ_RSVD_1",
+		C2H_INTR_C2H_MM_REQ_RSVD_1_MASK},
+	{"C2H_INTR_C2H_MM_REQ_CNT",
+		C2H_INTR_C2H_MM_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_err_int_req_field_info[] = {
+	{"C2H_INTR_ERR_INT_REQ_RSVD_1",
+		C2H_INTR_ERR_INT_REQ_RSVD_1_MASK},
+	{"C2H_INTR_ERR_INT_REQ_CNT",
+		C2H_INTR_ERR_INT_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_req_field_info[] = {
+	{"C2H_INTR_C2H_ST_REQ_RSVD_1",
+		C2H_INTR_C2H_ST_REQ_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_REQ_CNT",
+		C2H_INTR_C2H_ST_REQ_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_ack_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_fail_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info[] = {
+	{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1",
+		C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_RSVD_1_MASK},
+	{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT",
+		C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_msix_ack_field_info[] = {
+	{"C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1",
+		C2H_INTR_C2H_ST_MSIX_ACK_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_MSIX_ACK_CNT",
+		C2H_INTR_C2H_ST_MSIX_ACK_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_msix_fail_field_info[] = {
+	{"C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1",
+		C2H_INTR_C2H_ST_MSIX_FAIL_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_MSIX_FAIL_CNT",
+		C2H_INTR_C2H_ST_MSIX_FAIL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_no_msix_field_info[] = {
+	{"C2H_INTR_C2H_ST_NO_MSIX_RSVD_1",
+		C2H_INTR_C2H_ST_NO_MSIX_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_NO_MSIX_CNT",
+		C2H_INTR_C2H_ST_NO_MSIX_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_intr_c2h_st_ctxt_inval_field_info[] = {
+	{"C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1",
+		C2H_INTR_C2H_ST_CTXT_INVAL_RSVD_1_MASK},
+	{"C2H_INTR_C2H_ST_CTXT_INVAL_CNT",
+		C2H_INTR_C2H_ST_CTXT_INVAL_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_wr_cmp_field_info[] = {
+	{"C2H_STAT_WR_CMP_RSVD_1",
+		C2H_STAT_WR_CMP_RSVD_1_MASK},
+	{"C2H_STAT_WR_CMP_CNT",
+		C2H_STAT_WR_CMP_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_4_field_info[] = {
+	{"C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_VLD",
+		C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_4_WRB_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_4_WRB_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_4_TUSER_FIFO_IN_CNT",
+		C2H_STAT_DMA_ENG_4_TUSER_FIFO_IN_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUTPUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_CNT",
+		C2H_STAT_DMA_ENG_4_TUSER_FIFO_OUT_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_dbg_dma_eng_5_field_info[] = {
+	{"C2H_STAT_DMA_ENG_5_RSVD_1",
+		C2H_STAT_DMA_ENG_5_RSVD_1_MASK},
+	{"C2H_STAT_DMA_ENG_5_TUSER_COMB_OUT_VLD",
+		C2H_STAT_DMA_ENG_5_TUSER_COMB_OUT_VLD_MASK},
+	{"C2H_STAT_DMA_ENG_5_TUSER_FIFO_IN_RDY",
+		C2H_STAT_DMA_ENG_5_TUSER_FIFO_IN_RDY_MASK},
+	{"C2H_STAT_DMA_ENG_5_TUSER_COMB_IN_CNT",
+		C2H_STAT_DMA_ENG_5_TUSER_COMB_IN_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_5_TUSE_COMB_OUTPUT_CNT",
+		C2H_STAT_DMA_ENG_5_TUSE_COMB_OUTPUT_CNT_MASK},
+	{"C2H_STAT_DMA_ENG_5_TUSER_COMB_CNT",
+		C2H_STAT_DMA_ENG_5_TUSER_COMB_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_qid_field_info[] = {
+	{"C2H_PFCH_QID_RSVD_1",
+		C2H_PFCH_QID_RSVD_1_MASK},
+	{"C2H_PFCH_QID_ERR_CTXT",
+		C2H_PFCH_QID_ERR_CTXT_MASK},
+	{"C2H_PFCH_QID_TARGET",
+		C2H_PFCH_QID_TARGET_MASK},
+	{"C2H_PFCH_QID_QID_OR_TAG",
+		C2H_PFCH_QID_QID_OR_TAG_MASK},
+};
+
+
+static struct regfield_info
+	c2h_dbg_pfch_field_info[] = {
+	{"C2H_PFCH_DATA",
+		C2H_PFCH_DATA_MASK},
+};
+
+
+static struct regfield_info
+	c2h_int_dbg_field_info[] = {
+	{"C2H_INT_RSVD_1",
+		C2H_INT_RSVD_1_MASK},
+	{"C2H_INT_INT_COAL_SM",
+		C2H_INT_INT_COAL_SM_MASK},
+	{"C2H_INT_INT_SM",
+		C2H_INT_INT_SM_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_imm_accepted_field_info[] = {
+	{"C2H_STAT_IMM_ACCEPTED_RSVD_1",
+		C2H_STAT_IMM_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_IMM_ACCEPTED_CNT",
+		C2H_STAT_IMM_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_marker_accepted_field_info[] = {
+	{"C2H_STAT_MARKER_ACCEPTED_RSVD_1",
+		C2H_STAT_MARKER_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_MARKER_ACCEPTED_CNT",
+		C2H_STAT_MARKER_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_stat_disable_cmp_accepted_field_info[] = {
+	{"C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1",
+		C2H_STAT_DISABLE_CMP_ACCEPTED_RSVD_1_MASK},
+	{"C2H_STAT_DISABLE_CMP_ACCEPTED_CNT",
+		C2H_STAT_DISABLE_CMP_ACCEPTED_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_pld_fifo_crdt_cnt_field_info[] = {
+	{"C2H_PLD_FIFO_CRDT_CNT_RSVD_1",
+		C2H_PLD_FIFO_CRDT_CNT_RSVD_1_MASK},
+	{"C2H_PLD_FIFO_CRDT_CNT_CNT",
+		C2H_PLD_FIFO_CRDT_CNT_CNT_MASK},
+};
+
+
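+/* H2C engine error status, first-error QID and debug registers. */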
+static struct regfield_info
+	h2c_err_stat_field_info[] = {
+	{"H2C_ERR_STAT_RSVD_1",
+		H2C_ERR_STAT_RSVD_1_MASK},
+	{"H2C_ERR_STAT_SBE",
+		H2C_ERR_STAT_SBE_MASK},
+	{"H2C_ERR_STAT_DBE",
+		H2C_ERR_STAT_DBE_MASK},
+	{"H2C_ERR_STAT_NO_DMA_DS",
+		H2C_ERR_STAT_NO_DMA_DS_MASK},
+	{"H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR",
+		H2C_ERR_STAT_SDI_MRKR_REQ_MOP_ERR_MASK},
+	{"H2C_ERR_STAT_ZERO_LEN_DS",
+		H2C_ERR_STAT_ZERO_LEN_DS_MASK},
+};
+
+
+static struct regfield_info
+	h2c_err_mask_field_info[] = {
+	{"H2C_ERR_EN",
+		H2C_ERR_EN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_first_err_qid_field_info[] = {
+	{"H2C_FIRST_ERR_QID_RSVD_1",
+		H2C_FIRST_ERR_QID_RSVD_1_MASK},
+	{"H2C_FIRST_ERR_QID_ERR_TYPE",
+		H2C_FIRST_ERR_QID_ERR_TYPE_MASK},
+	{"H2C_FIRST_ERR_QID_RSVD_2",
+		H2C_FIRST_ERR_QID_RSVD_2_MASK},
+	{"H2C_FIRST_ERR_QID_QID",
+		H2C_FIRST_ERR_QID_QID_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg0_field_info[] = {
+	{"H2C_REG0_NUM_DSC_RCVD",
+		H2C_REG0_NUM_DSC_RCVD_MASK},
+	{"H2C_REG0_NUM_WRB_SENT",
+		H2C_REG0_NUM_WRB_SENT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg1_field_info[] = {
+	{"H2C_REG1_NUM_REQ_SENT",
+		H2C_REG1_NUM_REQ_SENT_MASK},
+	{"H2C_REG1_NUM_CMP_SENT",
+		H2C_REG1_NUM_CMP_SENT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg2_field_info[] = {
+	{"H2C_REG2_RSVD_1",
+		H2C_REG2_RSVD_1_MASK},
+	{"H2C_REG2_NUM_ERR_DSC_RCVD",
+		H2C_REG2_NUM_ERR_DSC_RCVD_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg3_field_info[] = {
+	{"H2C_REG3",
+		H2C_REG3_MASK},
+	{"H2C_REG3_DSCO_FIFO_EMPTY",
+		H2C_REG3_DSCO_FIFO_EMPTY_MASK},
+	{"H2C_REG3_DSCO_FIFO_FULL",
+		H2C_REG3_DSCO_FIFO_FULL_MASK},
+	{"H2C_REG3_CUR_RC_STATE",
+		H2C_REG3_CUR_RC_STATE_MASK},
+	{"H2C_REG3_RDREQ_LINES",
+		H2C_REG3_RDREQ_LINES_MASK},
+	{"H2C_REG3_RDATA_LINES_AVAIL",
+		H2C_REG3_RDATA_LINES_AVAIL_MASK},
+	{"H2C_REG3_PEND_FIFO_EMPTY",
+		H2C_REG3_PEND_FIFO_EMPTY_MASK},
+	{"H2C_REG3_PEND_FIFO_FULL",
+		H2C_REG3_PEND_FIFO_FULL_MASK},
+	{"H2C_REG3_CUR_RQ_STATE",
+		H2C_REG3_CUR_RQ_STATE_MASK},
+	{"H2C_REG3_DSCI_FIFO_FULL",
+		H2C_REG3_DSCI_FIFO_FULL_MASK},
+	{"H2C_REG3_DSCI_FIFO_EMPTY",
+		H2C_REG3_DSCI_FIFO_EMPTY_MASK},
+};
+
+
+static struct regfield_info
+	h2c_dbg_reg4_field_info[] = {
+	{"H2C_REG4_RDREQ_ADDR",
+		H2C_REG4_RDREQ_ADDR_MASK},
+};
+
+
+static struct regfield_info
+	h2c_fatal_err_en_field_info[] = {
+	{"H2C_FATAL_ERR_EN_RSVD_1",
+		H2C_FATAL_ERR_EN_RSVD_1_MASK},
+	{"H2C_FATAL_ERR_EN_H2C",
+		H2C_FATAL_ERR_EN_H2C_MASK},
+};
+
+
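+/*
+ * C2H memory-mapped (MM) channel: control, status, error and
+ * performance monitor registers.
+ */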
+static struct regfield_info
+	c2h_channel_ctl_field_info[] = {
+	{"C2H_CHANNEL_CTL_RSVD_1",
+		C2H_CHANNEL_CTL_RSVD_1_MASK},
+	{"C2H_CHANNEL_CTL_RUN",
+		C2H_CHANNEL_CTL_RUN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_channel_ctl_1_field_info[] = {
+	{"C2H_CHANNEL_CTL_1_RUN",
+		C2H_CHANNEL_CTL_1_RUN_MASK},
+	{"C2H_CHANNEL_CTL_1_RUN_1",
+		C2H_CHANNEL_CTL_1_RUN_1_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_status_field_info[] = {
+	{"C2H_MM_STATUS_RSVD_1",
+		C2H_MM_STATUS_RSVD_1_MASK},
+	{"C2H_MM_STATUS_RUN",
+		C2H_MM_STATUS_RUN_MASK},
+};
+
+
+static struct regfield_info
+	c2h_channel_cmpl_desc_cnt_field_info[] = {
+	{"C2H_CHANNEL_CMPL_DESC_CNT_C2H_CO",
+		C2H_CHANNEL_CMPL_DESC_CNT_C2H_CO_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_code_enable_mask_field_info[] = {
+	{"C2H_MM_ERR_CODE_ENABLE_RSVD_1",
+		C2H_MM_ERR_CODE_ENABLE_RSVD_1_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM",
+		C2H_MM_ERR_CODE_ENABLE_WR_UC_RAM_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_UR",
+		C2H_MM_ERR_CODE_ENABLE_WR_UR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_FLR",
+		C2H_MM_ERR_CODE_ENABLE_WR_FLR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_RSVD_2",
+		C2H_MM_ERR_CODE_ENABLE_RSVD_2_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR",
+		C2H_MM_ERR_CODE_ENABLE_RD_SLV_ERR_MASK},
+	{"C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR",
+		C2H_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_code_field_info[] = {
+	{"C2H_MM_ERR_CODE_RSVD_1",
+		C2H_MM_ERR_CODE_RSVD_1_MASK},
+	{"C2H_MM_ERR_CODE_VALID",
+		C2H_MM_ERR_CODE_VALID_MASK},
+	{"C2H_MM_ERR_CODE_RDWR",
+		C2H_MM_ERR_CODE_RDWR_MASK},
+	{"C2H_MM_ERR_CODE",
+		C2H_MM_ERR_CODE_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_err_info_field_info[] = {
+	{"C2H_MM_ERR_INFO_RSVD_1",
+		C2H_MM_ERR_INFO_RSVD_1_MASK},
+	{"C2H_MM_ERR_INFO_QID",
+		C2H_MM_ERR_INFO_QID_MASK},
+	{"C2H_MM_ERR_INFO_DIR",
+		C2H_MM_ERR_INFO_DIR_MASK},
+	{"C2H_MM_ERR_INFO_CIDX",
+		C2H_MM_ERR_INFO_CIDX_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_ctl_field_info[] = {
+	{"C2H_MM_PERF_MON_CTL_RSVD_1",
+		C2H_MM_PERF_MON_CTL_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_CTL_IMM_START",
+		C2H_MM_PERF_MON_CTL_IMM_START_MASK},
+	{"C2H_MM_PERF_MON_CTL_RUN_START",
+		C2H_MM_PERF_MON_CTL_RUN_START_MASK},
+	{"C2H_MM_PERF_MON_CTL_IMM_CLEAR",
+		C2H_MM_PERF_MON_CTL_IMM_CLEAR_MASK},
+	{"C2H_MM_PERF_MON_CTL_RUN_CLEAR",
+		C2H_MM_PERF_MON_CTL_RUN_CLEAR_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_cycle_cnt0_field_info[] = {
+	{"C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT",
+		C2H_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_cycle_cnt1_field_info[] = {
+	{"C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1",
+		C2H_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT",
+		C2H_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_data_cnt0_field_info[] = {
+	{"C2H_MM_PERF_MON_DATA_CNT0_DCNT",
+		C2H_MM_PERF_MON_DATA_CNT0_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_perf_mon_data_cnt1_field_info[] = {
+	{"C2H_MM_PERF_MON_DATA_CNT1_RSVD_1",
+		C2H_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK},
+	{"C2H_MM_PERF_MON_DATA_CNT1_DCNT",
+		C2H_MM_PERF_MON_DATA_CNT1_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	c2h_mm_dbg_field_info[] = {
+	{"C2H_MM_RSVD_1",
+		C2H_MM_RSVD_1_MASK},
+	{"C2H_MM_RRQ_ENTRIES",
+		C2H_MM_RRQ_ENTRIES_MASK},
+	{"C2H_MM_DAT_FIFO_SPC",
+		C2H_MM_DAT_FIFO_SPC_MASK},
+	{"C2H_MM_RD_STALL",
+		C2H_MM_RD_STALL_MASK},
+	{"C2H_MM_RRQ_FIFO_FI",
+		C2H_MM_RRQ_FIFO_FI_MASK},
+	{"C2H_MM_WR_STALL",
+		C2H_MM_WR_STALL_MASK},
+	{"C2H_MM_WRQ_FIFO_FI",
+		C2H_MM_WRQ_FIFO_FI_MASK},
+	{"C2H_MM_WBK_STALL",
+		C2H_MM_WBK_STALL_MASK},
+	{"C2H_MM_DSC_FIFO_EP",
+		C2H_MM_DSC_FIFO_EP_MASK},
+	{"C2H_MM_DSC_FIFO_FL",
+		C2H_MM_DSC_FIFO_FL_MASK},
+};
+
+
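+/* H2C memory-mapped (MM) engine channel control/status bitfields. */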
+static struct regfield_info
+	h2c_channel_ctl_field_info[] = {
+	{"H2C_CHANNEL_CTL_RSVD_1",
+		H2C_CHANNEL_CTL_RSVD_1_MASK},
+	{"H2C_CHANNEL_CTL_RUN",
+		H2C_CHANNEL_CTL_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_channel_ctl_1_field_info[] = {
+	{"H2C_CHANNEL_CTL_1_RUN",
+		H2C_CHANNEL_CTL_1_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_channel_ctl_2_field_info[] = {
+	{"H2C_CHANNEL_CTL_2_RUN",
+		H2C_CHANNEL_CTL_2_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_status_field_info[] = {
+	{"H2C_MM_STATUS_RSVD_1",
+		H2C_MM_STATUS_RSVD_1_MASK},
+	{"H2C_MM_STATUS_RUN",
+		H2C_MM_STATUS_RUN_MASK},
+};
+
+
+static struct regfield_info
+	h2c_channel_cmpl_desc_cnt_field_info[] = {
+	{"H2C_CHANNEL_CMPL_DESC_CNT_H2C_CO",
+		H2C_CHANNEL_CMPL_DESC_CNT_H2C_CO_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_code_enable_mask_field_info[] = {
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_1",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_1_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR",
+		H2C_MM_ERR_CODE_ENABLE_WR_SLV_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR",
+		H2C_MM_ERR_CODE_ENABLE_WR_DEC_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_2",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_2_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_RQ_DIS_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_3",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_3_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_DAT_POISON_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_4",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_4_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_FLR_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_5",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_5_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_ADR_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_PARA_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HDR_BYTE_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_UR_CA",
+		H2C_MM_ERR_CODE_ENABLE_RD_UR_CA_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR",
+		H2C_MM_ERR_CODE_ENABLE_RD_HRD_POISON_ERR_MASK},
+	{"H2C_MM_ERR_CODE_ENABLE_RSVD_6",
+		H2C_MM_ERR_CODE_ENABLE_RSVD_6_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_code_field_info[] = {
+	{"H2C_MM_ERR_CODE_RSVD_1",
+		H2C_MM_ERR_CODE_RSVD_1_MASK},
+	{"H2C_MM_ERR_CODE_VALID",
+		H2C_MM_ERR_CODE_VALID_MASK},
+	{"H2C_MM_ERR_CODE_RDWR",
+		H2C_MM_ERR_CODE_RDWR_MASK},
+	{"H2C_MM_ERR_CODE",
+		H2C_MM_ERR_CODE_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_err_info_field_info[] = {
+	{"H2C_MM_ERR_INFO_RSVD_1",
+		H2C_MM_ERR_INFO_RSVD_1_MASK},
+	{"H2C_MM_ERR_INFO_QID",
+		H2C_MM_ERR_INFO_QID_MASK},
+	{"H2C_MM_ERR_INFO_DIR",
+		H2C_MM_ERR_INFO_DIR_MASK},
+	{"H2C_MM_ERR_INFO_CIDX",
+		H2C_MM_ERR_INFO_CIDX_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_ctl_field_info[] = {
+	{"H2C_MM_PERF_MON_CTL_RSVD_1",
+		H2C_MM_PERF_MON_CTL_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_CTL_IMM_START",
+		H2C_MM_PERF_MON_CTL_IMM_START_MASK},
+	{"H2C_MM_PERF_MON_CTL_RUN_START",
+		H2C_MM_PERF_MON_CTL_RUN_START_MASK},
+	{"H2C_MM_PERF_MON_CTL_IMM_CLEAR",
+		H2C_MM_PERF_MON_CTL_IMM_CLEAR_MASK},
+	{"H2C_MM_PERF_MON_CTL_RUN_CLEAR",
+		H2C_MM_PERF_MON_CTL_RUN_CLEAR_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_cycle_cnt0_field_info[] = {
+	{"H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT",
+		H2C_MM_PERF_MON_CYCLE_CNT0_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_cycle_cnt1_field_info[] = {
+	{"H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1",
+		H2C_MM_PERF_MON_CYCLE_CNT1_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT",
+		H2C_MM_PERF_MON_CYCLE_CNT1_CYC_CNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_data_cnt0_field_info[] = {
+	{"H2C_MM_PERF_MON_DATA_CNT0_DCNT",
+		H2C_MM_PERF_MON_DATA_CNT0_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_perf_mon_data_cnt1_field_info[] = {
+	{"H2C_MM_PERF_MON_DATA_CNT1_RSVD_1",
+		H2C_MM_PERF_MON_DATA_CNT1_RSVD_1_MASK},
+	{"H2C_MM_PERF_MON_DATA_CNT1_DCNT",
+		H2C_MM_PERF_MON_DATA_CNT1_DCNT_MASK},
+};
+
+
+static struct regfield_info
+	h2c_mm_dbg_field_info[] = {
+	{"H2C_MM_RSVD_1",
+		H2C_MM_RSVD_1_MASK},
+	{"H2C_MM_RRQ_ENTRIES",
+		H2C_MM_RRQ_ENTRIES_MASK},
+	{"H2C_MM_DAT_FIFO_SPC",
+		H2C_MM_DAT_FIFO_SPC_MASK},
+	{"H2C_MM_RD_STALL",
+		H2C_MM_RD_STALL_MASK},
+	{"H2C_MM_RRQ_FIFO_FI",
+		H2C_MM_RRQ_FIFO_FI_MASK},
+	{"H2C_MM_WR_STALL",
+		H2C_MM_WR_STALL_MASK},
+	{"H2C_MM_WRQ_FIFO_FI",
+		H2C_MM_WRQ_FIFO_FI_MASK},
+	{"H2C_MM_WBK_STALL",
+		H2C_MM_WBK_STALL_MASK},
+	{"H2C_MM_DSC_FIFO_EP",
+		H2C_MM_DSC_FIFO_EP_MASK},
+	{"H2C_MM_DSC_FIFO_FL",
+		H2C_MM_DSC_FIFO_FL_MASK},
+};
+
+
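+/*
+ * Function-level register bitfields; the message/ack field names suggest
+ * these back the PF/VF mailbox channel.
+ */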
+static struct regfield_info
+	func_status_reg_field_info[] = {
+	{"FUNC_STATUS_REG_RSVD_1",
+		FUNC_STATUS_REG_RSVD_1_MASK},
+	{"FUNC_STATUS_REG_CUR_SRC_FN",
+		FUNC_STATUS_REG_CUR_SRC_FN_MASK},
+	{"FUNC_STATUS_REG_ACK",
+		FUNC_STATUS_REG_ACK_MASK},
+	{"FUNC_STATUS_REG_O_MSG",
+		FUNC_STATUS_REG_O_MSG_MASK},
+	{"FUNC_STATUS_REG_I_MSG",
+		FUNC_STATUS_REG_I_MSG_MASK},
+};
+
+
+static struct regfield_info
+	func_cmd_reg_field_info[] = {
+	{"FUNC_CMD_REG_RSVD_1",
+		FUNC_CMD_REG_RSVD_1_MASK},
+	{"FUNC_CMD_REG_RSVD_2",
+		FUNC_CMD_REG_RSVD_2_MASK},
+	{"FUNC_CMD_REG_MSG_RCV",
+		FUNC_CMD_REG_MSG_RCV_MASK},
+	{"FUNC_CMD_REG_MSG_SENT",
+		FUNC_CMD_REG_MSG_SENT_MASK},
+};
+
+
+static struct regfield_info
+	func_interrupt_vector_reg_field_info[] = {
+	{"FUNC_INTERRUPT_VECTOR_REG_RSVD_1",
+		FUNC_INTERRUPT_VECTOR_REG_RSVD_1_MASK},
+	{"FUNC_INTERRUPT_VECTOR_REG_IN",
+		FUNC_INTERRUPT_VECTOR_REG_IN_MASK},
+};
+
+
+static struct regfield_info
+	target_func_reg_field_info[] = {
+	{"TARGET_FUNC_REG_RSVD_1",
+		TARGET_FUNC_REG_RSVD_1_MASK},
+	{"TARGET_FUNC_REG_N_ID",
+		TARGET_FUNC_REG_N_ID_MASK},
+};
+
+
+static struct regfield_info
+	func_interrupt_ctl_reg_field_info[] = {
+	{"FUNC_INTERRUPT_CTL_REG_RSVD_1",
+		FUNC_INTERRUPT_CTL_REG_RSVD_1_MASK},
+	{"FUNC_INTERRUPT_CTL_REG_INT_EN",
+		FUNC_INTERRUPT_CTL_REG_INT_EN_MASK},
+};
+
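+/*
+ * Config-bar register dump table. Going by the struct xreg_info usage here,
+ * each entry is assumed to hold: register name, byte offset, repeat count,
+ * address step, shift, length, debug-register flag, supported mode (MM/ST)
+ * and PF/VF read permission, followed by the bitfield count and table.
+ */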
+static struct xreg_info qdma_s80_hard_config_regs[] = {
+{"CFG_BLK_IDENTIFIER", 0x00,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_identifier_field_info),
+	cfg_blk_identifier_field_info
+},
+{"CFG_BLK_BUSDEV", 0x04,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_busdev_field_info),
+	cfg_blk_busdev_field_info
+},
+{"CFG_BLK_PCIE_MAX_PLD_SIZE", 0x08,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_pcie_max_pld_size_field_info),
+	cfg_blk_pcie_max_pld_size_field_info
+},
+{"CFG_BLK_PCIE_MAX_READ_REQ_SIZE", 0x0c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_pcie_max_read_req_size_field_info),
+	cfg_blk_pcie_max_read_req_size_field_info
+},
+{"CFG_BLK_SYSTEM_ID", 0x10,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_system_id_field_info),
+	cfg_blk_system_id_field_info
+},
+{"CFG_BLK_MSI_ENABLE", 0x014,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_msi_enable_field_info),
+	cfg_blk_msi_enable_field_info
+},
+{"CFG_PCIE_DATA_WIDTH", 0x18,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_pcie_data_width_field_info),
+	cfg_pcie_data_width_field_info
+},
+{"CFG_PCIE_CTL", 0x1c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_pcie_ctl_field_info),
+	cfg_pcie_ctl_field_info
+},
+{"CFG_AXI_USER_MAX_PLD_SIZE", 0x40,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_axi_user_max_pld_size_field_info),
+	cfg_axi_user_max_pld_size_field_info
+},
+{"CFG_AXI_USER_MAX_READ_REQ_SIZE", 0x44,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_axi_user_max_read_req_size_field_info),
+	cfg_axi_user_max_read_req_size_field_info
+},
+{"CFG_BLK_MISC_CTL", 0x4c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_misc_ctl_field_info),
+	cfg_blk_misc_ctl_field_info
+},
+{"CFG_BLK_SCRATCH_0", 0x80,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_0_field_info),
+	cfg_blk_scratch_0_field_info
+},
+{"CFG_BLK_SCRATCH_1", 0x84,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_1_field_info),
+	cfg_blk_scratch_1_field_info
+},
+{"CFG_BLK_SCRATCH_2", 0x88,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_2_field_info),
+	cfg_blk_scratch_2_field_info
+},
+{"CFG_BLK_SCRATCH_3", 0x8c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_3_field_info),
+	cfg_blk_scratch_3_field_info
+},
+{"CFG_BLK_SCRATCH_4", 0x90,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_4_field_info),
+	cfg_blk_scratch_4_field_info
+},
+{"CFG_BLK_SCRATCH_5", 0x94,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_5_field_info),
+	cfg_blk_scratch_5_field_info
+},
+{"CFG_BLK_SCRATCH_6", 0x98,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_6_field_info),
+	cfg_blk_scratch_6_field_info
+},
+{"CFG_BLK_SCRATCH_7", 0x9c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(cfg_blk_scratch_7_field_info),
+	cfg_blk_scratch_7_field_info
+},
+{"RAM_SBE_MSK_A", 0xf0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_msk_a_field_info),
+	ram_sbe_msk_a_field_info
+},
+{"RAM_SBE_STS_A", 0xf4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_sbe_sts_a_field_info),
+	ram_sbe_sts_a_field_info
+},
+{"RAM_DBE_MSK_A", 0xf8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_msk_a_field_info),
+	ram_dbe_msk_a_field_info
+},
+{"RAM_DBE_STS_A", 0xfc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ram_dbe_sts_a_field_info),
+	ram_dbe_sts_a_field_info
+},
+{"GLBL2_IDENTIFIER", 0x100,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_identifier_field_info),
+	glbl2_identifier_field_info
+},
+{"GLBL2_PF_BARLITE_INT", 0x104,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_pf_barlite_int_field_info),
+	glbl2_pf_barlite_int_field_info
+},
+{"GLBL2_PF_VF_BARLITE_INT", 0x108,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_pf_vf_barlite_int_field_info),
+	glbl2_pf_vf_barlite_int_field_info
+},
+{"GLBL2_PF_BARLITE_EXT", 0x10c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_pf_barlite_ext_field_info),
+	glbl2_pf_barlite_ext_field_info
+},
+{"GLBL2_PF_VF_BARLITE_EXT", 0x110,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_pf_vf_barlite_ext_field_info),
+	glbl2_pf_vf_barlite_ext_field_info
+},
+{"GLBL2_CHANNEL_INST", 0x114,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_inst_field_info),
+	glbl2_channel_inst_field_info
+},
+{"GLBL2_CHANNEL_MDMA", 0x118,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_mdma_field_info),
+	glbl2_channel_mdma_field_info
+},
+{"GLBL2_CHANNEL_STRM", 0x11c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_strm_field_info),
+	glbl2_channel_strm_field_info
+},
+{"GLBL2_CHANNEL_CAP", 0x120,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_cap_field_info),
+	glbl2_channel_cap_field_info
+},
+{"GLBL2_CHANNEL_PASID_CAP", 0x128,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_pasid_cap_field_info),
+	glbl2_channel_pasid_cap_field_info
+},
+{"GLBL2_CHANNEL_FUNC_RET", 0x12c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_channel_func_ret_field_info),
+	glbl2_channel_func_ret_field_info
+},
+{"GLBL2_SYSTEM_ID", 0x130,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_system_id_field_info),
+	glbl2_system_id_field_info
+},
+{"GLBL2_MISC_CAP", 0x134,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_misc_cap_field_info),
+	glbl2_misc_cap_field_info
+},
+{"GLBL2_DBG_PCIE_RQ0", 0x1b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_pcie_rq0_field_info),
+	glbl2_dbg_pcie_rq0_field_info
+},
+{"GLBL2_DBG_PCIE_RQ1", 0x1bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_pcie_rq1_field_info),
+	glbl2_dbg_pcie_rq1_field_info
+},
+{"GLBL2_DBG_AXIMM_WR0", 0x1c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_wr0_field_info),
+	glbl2_dbg_aximm_wr0_field_info
+},
+{"GLBL2_DBG_AXIMM_WR1", 0x1c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_wr1_field_info),
+	glbl2_dbg_aximm_wr1_field_info
+},
+{"GLBL2_DBG_AXIMM_RD0", 0x1c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_rd0_field_info),
+	glbl2_dbg_aximm_rd0_field_info
+},
+{"GLBL2_DBG_AXIMM_RD1", 0x1cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(glbl2_dbg_aximm_rd1_field_info),
+	glbl2_dbg_aximm_rd1_field_info
+},
+{"GLBL_RNG_SZ_1", 0x204,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_1_field_info),
+	glbl_rng_sz_1_field_info
+},
+{"GLBL_RNG_SZ_2", 0x208,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_2_field_info),
+	glbl_rng_sz_2_field_info
+},
+{"GLBL_RNG_SZ_3", 0x20c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_3_field_info),
+	glbl_rng_sz_3_field_info
+},
+{"GLBL_RNG_SZ_4", 0x210,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_4_field_info),
+	glbl_rng_sz_4_field_info
+},
+{"GLBL_RNG_SZ_5", 0x214,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_5_field_info),
+	glbl_rng_sz_5_field_info
+},
+{"GLBL_RNG_SZ_6", 0x218,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_6_field_info),
+	glbl_rng_sz_6_field_info
+},
+{"GLBL_RNG_SZ_7", 0x21c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_7_field_info),
+	glbl_rng_sz_7_field_info
+},
+{"GLBL_RNG_SZ_8", 0x220,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_8_field_info),
+	glbl_rng_sz_8_field_info
+},
+{"GLBL_RNG_SZ_9", 0x224,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_9_field_info),
+	glbl_rng_sz_9_field_info
+},
+{"GLBL_RNG_SZ_A", 0x228,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_a_field_info),
+	glbl_rng_sz_a_field_info
+},
+{"GLBL_RNG_SZ_B", 0x22c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_b_field_info),
+	glbl_rng_sz_b_field_info
+},
+{"GLBL_RNG_SZ_C", 0x230,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_c_field_info),
+	glbl_rng_sz_c_field_info
+},
+{"GLBL_RNG_SZ_D", 0x234,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_d_field_info),
+	glbl_rng_sz_d_field_info
+},
+{"GLBL_RNG_SZ_E", 0x238,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_e_field_info),
+	glbl_rng_sz_e_field_info
+},
+{"GLBL_RNG_SZ_F", 0x23c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_f_field_info),
+	glbl_rng_sz_f_field_info
+},
+{"GLBL_RNG_SZ_10", 0x240,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_rng_sz_10_field_info),
+	glbl_rng_sz_10_field_info
+},
+{"GLBL_ERR_STAT", 0x248,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_stat_field_info),
+	glbl_err_stat_field_info
+},
+{"GLBL_ERR_MASK", 0x24c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_mask_field_info),
+	glbl_err_mask_field_info
+},
+{"GLBL_DSC_CFG", 0x250,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_cfg_field_info),
+	glbl_dsc_cfg_field_info
+},
+{"GLBL_DSC_ERR_STS", 0x254,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_sts_field_info),
+	glbl_dsc_err_sts_field_info
+},
+{"GLBL_DSC_ERR_MSK", 0x258,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_msk_field_info),
+	glbl_dsc_err_msk_field_info
+},
+{"GLBL_DSC_ERR_LOG0", 0x25c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_log0_field_info),
+	glbl_dsc_err_log0_field_info
+},
+{"GLBL_DSC_ERR_LOG1", 0x260,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_err_log1_field_info),
+	glbl_dsc_err_log1_field_info
+},
+{"GLBL_TRQ_ERR_STS", 0x264,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_sts_field_info),
+	glbl_trq_err_sts_field_info
+},
+{"GLBL_TRQ_ERR_MSK", 0x268,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_msk_field_info),
+	glbl_trq_err_msk_field_info
+},
+{"GLBL_TRQ_ERR_LOG", 0x26c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_trq_err_log_field_info),
+	glbl_trq_err_log_field_info
+},
+{"GLBL_DSC_DBG_DAT0", 0x270,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_dbg_dat0_field_info),
+	glbl_dsc_dbg_dat0_field_info
+},
+{"GLBL_DSC_DBG_DAT1", 0x274,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_dsc_dbg_dat1_field_info),
+	glbl_dsc_dbg_dat1_field_info
+},
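+/*
+ * TRQ_SEL_FMAP_* entries: one queue-map (FMAP) register per PCIe function;
+ * these are assumed to carry the queue base/count assigned to each function.
+ */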
+{"TRQ_SEL_FMAP_0", 0x400,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_0_field_info),
+	trq_sel_fmap_0_field_info
+},
+{"TRQ_SEL_FMAP_1", 0x404,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1_field_info),
+	trq_sel_fmap_1_field_info
+},
+{"TRQ_SEL_FMAP_2", 0x408,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2_field_info),
+	trq_sel_fmap_2_field_info
+},
+{"TRQ_SEL_FMAP_3", 0x40c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3_field_info),
+	trq_sel_fmap_3_field_info
+},
+{"TRQ_SEL_FMAP_4", 0x410,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4_field_info),
+	trq_sel_fmap_4_field_info
+},
+{"TRQ_SEL_FMAP_5", 0x414,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5_field_info),
+	trq_sel_fmap_5_field_info
+},
+{"TRQ_SEL_FMAP_6", 0x418,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6_field_info),
+	trq_sel_fmap_6_field_info
+},
+{"TRQ_SEL_FMAP_7", 0x41c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7_field_info),
+	trq_sel_fmap_7_field_info
+},
+{"TRQ_SEL_FMAP_8", 0x420,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8_field_info),
+	trq_sel_fmap_8_field_info
+},
+{"TRQ_SEL_FMAP_9", 0x424,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9_field_info),
+	trq_sel_fmap_9_field_info
+},
+{"TRQ_SEL_FMAP_A", 0x428,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a_field_info),
+	trq_sel_fmap_a_field_info
+},
+{"TRQ_SEL_FMAP_B", 0x42c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b_field_info),
+	trq_sel_fmap_b_field_info
+},
+{"TRQ_SEL_FMAP_D", 0x430,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d_field_info),
+	trq_sel_fmap_d_field_info
+},
+{"TRQ_SEL_FMAP_E", 0x434,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e_field_info),
+	trq_sel_fmap_e_field_info
+},
+{"TRQ_SEL_FMAP_F", 0x438,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_f_field_info),
+	trq_sel_fmap_f_field_info
+},
+{"TRQ_SEL_FMAP_10", 0x43c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_10_field_info),
+	trq_sel_fmap_10_field_info
+},
+{"TRQ_SEL_FMAP_11", 0x440,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_11_field_info),
+	trq_sel_fmap_11_field_info
+},
+{"TRQ_SEL_FMAP_12", 0x444,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_12_field_info),
+	trq_sel_fmap_12_field_info
+},
+{"TRQ_SEL_FMAP_13", 0x448,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_13_field_info),
+	trq_sel_fmap_13_field_info
+},
+{"TRQ_SEL_FMAP_14", 0x44c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_14_field_info),
+	trq_sel_fmap_14_field_info
+},
+{"TRQ_SEL_FMAP_15", 0x450,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_15_field_info),
+	trq_sel_fmap_15_field_info
+},
+{"TRQ_SEL_FMAP_16", 0x454,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_16_field_info),
+	trq_sel_fmap_16_field_info
+},
+{"TRQ_SEL_FMAP_17", 0x458,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_17_field_info),
+	trq_sel_fmap_17_field_info
+},
+{"TRQ_SEL_FMAP_18", 0x45c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_18_field_info),
+	trq_sel_fmap_18_field_info
+},
+{"TRQ_SEL_FMAP_19", 0x460,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_19_field_info),
+	trq_sel_fmap_19_field_info
+},
+{"TRQ_SEL_FMAP_1A", 0x464,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1a_field_info),
+	trq_sel_fmap_1a_field_info
+},
+{"TRQ_SEL_FMAP_1B", 0x468,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1b_field_info),
+	trq_sel_fmap_1b_field_info
+},
+{"TRQ_SEL_FMAP_1C", 0x46c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1c_field_info),
+	trq_sel_fmap_1c_field_info
+},
+{"TRQ_SEL_FMAP_1D", 0x470,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1d_field_info),
+	trq_sel_fmap_1d_field_info
+},
+{"TRQ_SEL_FMAP_1E", 0x474,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1e_field_info),
+	trq_sel_fmap_1e_field_info
+},
+{"TRQ_SEL_FMAP_1F", 0x478,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_1f_field_info),
+	trq_sel_fmap_1f_field_info
+},
+{"TRQ_SEL_FMAP_20", 0x47c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_20_field_info),
+	trq_sel_fmap_20_field_info
+},
+{"TRQ_SEL_FMAP_21", 0x480,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_21_field_info),
+	trq_sel_fmap_21_field_info
+},
+{"TRQ_SEL_FMAP_22", 0x484,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_22_field_info),
+	trq_sel_fmap_22_field_info
+},
+{"TRQ_SEL_FMAP_23", 0x488,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_23_field_info),
+	trq_sel_fmap_23_field_info
+},
+{"TRQ_SEL_FMAP_24", 0x48c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_24_field_info),
+	trq_sel_fmap_24_field_info
+},
+{"TRQ_SEL_FMAP_25", 0x490,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_25_field_info),
+	trq_sel_fmap_25_field_info
+},
+{"TRQ_SEL_FMAP_26", 0x494,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_26_field_info),
+	trq_sel_fmap_26_field_info
+},
+{"TRQ_SEL_FMAP_27", 0x498,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_27_field_info),
+	trq_sel_fmap_27_field_info
+},
+{"TRQ_SEL_FMAP_28", 0x49c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_28_field_info),
+	trq_sel_fmap_28_field_info
+},
+{"TRQ_SEL_FMAP_29", 0x4a0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_29_field_info),
+	trq_sel_fmap_29_field_info
+},
+{"TRQ_SEL_FMAP_2A", 0x4a4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2a_field_info),
+	trq_sel_fmap_2a_field_info
+},
+{"TRQ_SEL_FMAP_2B", 0x4a8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2b_field_info),
+	trq_sel_fmap_2b_field_info
+},
+{"TRQ_SEL_FMAP_2C", 0x4ac,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2c_field_info),
+	trq_sel_fmap_2c_field_info
+},
+{"TRQ_SEL_FMAP_2D", 0x4b0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2d_field_info),
+	trq_sel_fmap_2d_field_info
+},
+{"TRQ_SEL_FMAP_2E", 0x4b4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2e_field_info),
+	trq_sel_fmap_2e_field_info
+},
+{"TRQ_SEL_FMAP_2F", 0x4b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_2f_field_info),
+	trq_sel_fmap_2f_field_info
+},
+{"TRQ_SEL_FMAP_30", 0x4bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_30_field_info),
+	trq_sel_fmap_30_field_info
+},
+{"TRQ_SEL_FMAP_31", 0x4d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_31_field_info),
+	trq_sel_fmap_31_field_info
+},
+{"TRQ_SEL_FMAP_32", 0x4d4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_32_field_info),
+	trq_sel_fmap_32_field_info
+},
+{"TRQ_SEL_FMAP_33", 0x4d8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_33_field_info),
+	trq_sel_fmap_33_field_info
+},
+{"TRQ_SEL_FMAP_34", 0x4dc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_34_field_info),
+	trq_sel_fmap_34_field_info
+},
+{"TRQ_SEL_FMAP_35", 0x4e0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_35_field_info),
+	trq_sel_fmap_35_field_info
+},
+{"TRQ_SEL_FMAP_36", 0x4e4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_36_field_info),
+	trq_sel_fmap_36_field_info
+},
+{"TRQ_SEL_FMAP_37", 0x4e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_37_field_info),
+	trq_sel_fmap_37_field_info
+},
+{"TRQ_SEL_FMAP_38", 0x4ec,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_38_field_info),
+	trq_sel_fmap_38_field_info
+},
+{"TRQ_SEL_FMAP_39", 0x4f0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_39_field_info),
+	trq_sel_fmap_39_field_info
+},
+{"TRQ_SEL_FMAP_3A", 0x4f4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3a_field_info),
+	trq_sel_fmap_3a_field_info
+},
+{"TRQ_SEL_FMAP_3B", 0x4f8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3b_field_info),
+	trq_sel_fmap_3b_field_info
+},
+{"TRQ_SEL_FMAP_3C", 0x4fc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3c_field_info),
+	trq_sel_fmap_3c_field_info
+},
+{"TRQ_SEL_FMAP_3D", 0x500,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3d_field_info),
+	trq_sel_fmap_3d_field_info
+},
+{"TRQ_SEL_FMAP_3E", 0x504,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3e_field_info),
+	trq_sel_fmap_3e_field_info
+},
+{"TRQ_SEL_FMAP_3F", 0x508,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_3f_field_info),
+	trq_sel_fmap_3f_field_info
+},
+{"TRQ_SEL_FMAP_40", 0x50c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_40_field_info),
+	trq_sel_fmap_40_field_info
+},
+{"TRQ_SEL_FMAP_41", 0x510,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_41_field_info),
+	trq_sel_fmap_41_field_info
+},
+{"TRQ_SEL_FMAP_42", 0x514,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_42_field_info),
+	trq_sel_fmap_42_field_info
+},
+{"TRQ_SEL_FMAP_43", 0x518,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_43_field_info),
+	trq_sel_fmap_43_field_info
+},
+{"TRQ_SEL_FMAP_44", 0x51c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_44_field_info),
+	trq_sel_fmap_44_field_info
+},
+{"TRQ_SEL_FMAP_45", 0x520,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_45_field_info),
+	trq_sel_fmap_45_field_info
+},
+{"TRQ_SEL_FMAP_46", 0x524,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_46_field_info),
+	trq_sel_fmap_46_field_info
+},
+{"TRQ_SEL_FMAP_47", 0x528,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_47_field_info),
+	trq_sel_fmap_47_field_info
+},
+{"TRQ_SEL_FMAP_48", 0x52c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_48_field_info),
+	trq_sel_fmap_48_field_info
+},
+{"TRQ_SEL_FMAP_49", 0x530,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_49_field_info),
+	trq_sel_fmap_49_field_info
+},
+{"TRQ_SEL_FMAP_4A", 0x534,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4a_field_info),
+	trq_sel_fmap_4a_field_info
+},
+{"TRQ_SEL_FMAP_4B", 0x538,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4b_field_info),
+	trq_sel_fmap_4b_field_info
+},
+{"TRQ_SEL_FMAP_4C", 0x53c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4c_field_info),
+	trq_sel_fmap_4c_field_info
+},
+{"TRQ_SEL_FMAP_4D", 0x540,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4d_field_info),
+	trq_sel_fmap_4d_field_info
+},
+{"TRQ_SEL_FMAP_4E", 0x544,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4e_field_info),
+	trq_sel_fmap_4e_field_info
+},
+{"TRQ_SEL_FMAP_4F", 0x548,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_4f_field_info),
+	trq_sel_fmap_4f_field_info
+},
+{"TRQ_SEL_FMAP_50", 0x54c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_50_field_info),
+	trq_sel_fmap_50_field_info
+},
+{"TRQ_SEL_FMAP_51", 0x550,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_51_field_info),
+	trq_sel_fmap_51_field_info
+},
+{"TRQ_SEL_FMAP_52", 0x554,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_52_field_info),
+	trq_sel_fmap_52_field_info
+},
+{"TRQ_SEL_FMAP_53", 0x558,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_53_field_info),
+	trq_sel_fmap_53_field_info
+},
+{"TRQ_SEL_FMAP_54", 0x55c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_54_field_info),
+	trq_sel_fmap_54_field_info
+},
+{"TRQ_SEL_FMAP_55", 0x560,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_55_field_info),
+	trq_sel_fmap_55_field_info
+},
+{"TRQ_SEL_FMAP_56", 0x564,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_56_field_info),
+	trq_sel_fmap_56_field_info
+},
+{"TRQ_SEL_FMAP_57", 0x568,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_57_field_info),
+	trq_sel_fmap_57_field_info
+},
+{"TRQ_SEL_FMAP_58", 0x56c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_58_field_info),
+	trq_sel_fmap_58_field_info
+},
+{"TRQ_SEL_FMAP_59", 0x570,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_59_field_info),
+	trq_sel_fmap_59_field_info
+},
+{"TRQ_SEL_FMAP_5A", 0x574,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5a_field_info),
+	trq_sel_fmap_5a_field_info
+},
+{"TRQ_SEL_FMAP_5B", 0x578,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5b_field_info),
+	trq_sel_fmap_5b_field_info
+},
+{"TRQ_SEL_FMAP_5C", 0x57c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5c_field_info),
+	trq_sel_fmap_5c_field_info
+},
+{"TRQ_SEL_FMAP_5D", 0x580,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5d_field_info),
+	trq_sel_fmap_5d_field_info
+},
+{"TRQ_SEL_FMAP_5E", 0x584,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5e_field_info),
+	trq_sel_fmap_5e_field_info
+},
+{"TRQ_SEL_FMAP_5F", 0x588,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_5f_field_info),
+	trq_sel_fmap_5f_field_info
+},
+{"TRQ_SEL_FMAP_60", 0x58c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_60_field_info),
+	trq_sel_fmap_60_field_info
+},
+{"TRQ_SEL_FMAP_61", 0x590,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_61_field_info),
+	trq_sel_fmap_61_field_info
+},
+{"TRQ_SEL_FMAP_62", 0x594,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_62_field_info),
+	trq_sel_fmap_62_field_info
+},
+{"TRQ_SEL_FMAP_63", 0x598,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_63_field_info),
+	trq_sel_fmap_63_field_info
+},
+{"TRQ_SEL_FMAP_64", 0x59c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_64_field_info),
+	trq_sel_fmap_64_field_info
+},
+{"TRQ_SEL_FMAP_65", 0x5a0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_65_field_info),
+	trq_sel_fmap_65_field_info
+},
+{"TRQ_SEL_FMAP_66", 0x5a4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_66_field_info),
+	trq_sel_fmap_66_field_info
+},
+{"TRQ_SEL_FMAP_67", 0x5a8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_67_field_info),
+	trq_sel_fmap_67_field_info
+},
+{"TRQ_SEL_FMAP_68", 0x5ac,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_68_field_info),
+	trq_sel_fmap_68_field_info
+},
+{"TRQ_SEL_FMAP_69", 0x5b0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_69_field_info),
+	trq_sel_fmap_69_field_info
+},
+{"TRQ_SEL_FMAP_6A", 0x5b4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6a_field_info),
+	trq_sel_fmap_6a_field_info
+},
+{"TRQ_SEL_FMAP_6B", 0x5b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6b_field_info),
+	trq_sel_fmap_6b_field_info
+},
+{"TRQ_SEL_FMAP_6C", 0x5bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6c_field_info),
+	trq_sel_fmap_6c_field_info
+},
+{"TRQ_SEL_FMAP_6D", 0x5c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6d_field_info),
+	trq_sel_fmap_6d_field_info
+},
+{"TRQ_SEL_FMAP_6E", 0x5c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6e_field_info),
+	trq_sel_fmap_6e_field_info
+},
+{"TRQ_SEL_FMAP_6F", 0x5c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_6f_field_info),
+	trq_sel_fmap_6f_field_info
+},
+{"TRQ_SEL_FMAP_70", 0x5cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_70_field_info),
+	trq_sel_fmap_70_field_info
+},
+{"TRQ_SEL_FMAP_71", 0x5d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_71_field_info),
+	trq_sel_fmap_71_field_info
+},
+{"TRQ_SEL_FMAP_72", 0x5d4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_72_field_info),
+	trq_sel_fmap_72_field_info
+},
+{"TRQ_SEL_FMAP_73", 0x5d8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_73_field_info),
+	trq_sel_fmap_73_field_info
+},
+{"TRQ_SEL_FMAP_74", 0x5dc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_74_field_info),
+	trq_sel_fmap_74_field_info
+},
+{"TRQ_SEL_FMAP_75", 0x5e0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_75_field_info),
+	trq_sel_fmap_75_field_info
+},
+{"TRQ_SEL_FMAP_76", 0x5e4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_76_field_info),
+	trq_sel_fmap_76_field_info
+},
+{"TRQ_SEL_FMAP_77", 0x5e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_77_field_info),
+	trq_sel_fmap_77_field_info
+},
+{"TRQ_SEL_FMAP_78", 0x5ec,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_78_field_info),
+	trq_sel_fmap_78_field_info
+},
+{"TRQ_SEL_FMAP_79", 0x5f0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_79_field_info),
+	trq_sel_fmap_79_field_info
+},
+{"TRQ_SEL_FMAP_7A", 0x5f4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7a_field_info),
+	trq_sel_fmap_7a_field_info
+},
+{"TRQ_SEL_FMAP_7B", 0x5f8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7b_field_info),
+	trq_sel_fmap_7b_field_info
+},
+{"TRQ_SEL_FMAP_7C", 0x5fc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7c_field_info),
+	trq_sel_fmap_7c_field_info
+},
+{"TRQ_SEL_FMAP_7D", 0x600,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7d_field_info),
+	trq_sel_fmap_7d_field_info
+},
+{"TRQ_SEL_FMAP_7E", 0x604,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7e_field_info),
+	trq_sel_fmap_7e_field_info
+},
+{"TRQ_SEL_FMAP_7F", 0x608,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_7f_field_info),
+	trq_sel_fmap_7f_field_info
+},
+{"TRQ_SEL_FMAP_80", 0x60c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_80_field_info),
+	trq_sel_fmap_80_field_info
+},
+{"TRQ_SEL_FMAP_81", 0x610,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_81_field_info),
+	trq_sel_fmap_81_field_info
+},
+{"TRQ_SEL_FMAP_82", 0x614,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_82_field_info),
+	trq_sel_fmap_82_field_info
+},
+{"TRQ_SEL_FMAP_83", 0x618,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_83_field_info),
+	trq_sel_fmap_83_field_info
+},
+{"TRQ_SEL_FMAP_84", 0x61c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_84_field_info),
+	trq_sel_fmap_84_field_info
+},
+{"TRQ_SEL_FMAP_85", 0x620,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_85_field_info),
+	trq_sel_fmap_85_field_info
+},
+{"TRQ_SEL_FMAP_86", 0x624,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_86_field_info),
+	trq_sel_fmap_86_field_info
+},
+{"TRQ_SEL_FMAP_87", 0x628,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_87_field_info),
+	trq_sel_fmap_87_field_info
+},
+{"TRQ_SEL_FMAP_88", 0x62c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_88_field_info),
+	trq_sel_fmap_88_field_info
+},
+{"TRQ_SEL_FMAP_89", 0x630,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_89_field_info),
+	trq_sel_fmap_89_field_info
+},
+{"TRQ_SEL_FMAP_8A", 0x634,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8a_field_info),
+	trq_sel_fmap_8a_field_info
+},
+{"TRQ_SEL_FMAP_8B", 0x638,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8b_field_info),
+	trq_sel_fmap_8b_field_info
+},
+{"TRQ_SEL_FMAP_8C", 0x63c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8c_field_info),
+	trq_sel_fmap_8c_field_info
+},
+{"TRQ_SEL_FMAP_8D", 0x640,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8d_field_info),
+	trq_sel_fmap_8d_field_info
+},
+{"TRQ_SEL_FMAP_8E", 0x644,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8e_field_info),
+	trq_sel_fmap_8e_field_info
+},
+{"TRQ_SEL_FMAP_8F", 0x648,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_8f_field_info),
+	trq_sel_fmap_8f_field_info
+},
+{"TRQ_SEL_FMAP_90", 0x64c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_90_field_info),
+	trq_sel_fmap_90_field_info
+},
+{"TRQ_SEL_FMAP_91", 0x650,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_91_field_info),
+	trq_sel_fmap_91_field_info
+},
+{"TRQ_SEL_FMAP_92", 0x654,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_92_field_info),
+	trq_sel_fmap_92_field_info
+},
+{"TRQ_SEL_FMAP_93", 0x658,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_93_field_info),
+	trq_sel_fmap_93_field_info
+},
+{"TRQ_SEL_FMAP_94", 0x65c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_94_field_info),
+	trq_sel_fmap_94_field_info
+},
+{"TRQ_SEL_FMAP_95", 0x660,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_95_field_info),
+	trq_sel_fmap_95_field_info
+},
+{"TRQ_SEL_FMAP_96", 0x664,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_96_field_info),
+	trq_sel_fmap_96_field_info
+},
+{"TRQ_SEL_FMAP_97", 0x668,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_97_field_info),
+	trq_sel_fmap_97_field_info
+},
+{"TRQ_SEL_FMAP_98", 0x66c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_98_field_info),
+	trq_sel_fmap_98_field_info
+},
+{"TRQ_SEL_FMAP_99", 0x670,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_99_field_info),
+	trq_sel_fmap_99_field_info
+},
+{"TRQ_SEL_FMAP_9A", 0x674,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9a_field_info),
+	trq_sel_fmap_9a_field_info
+},
+{"TRQ_SEL_FMAP_9B", 0x678,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9b_field_info),
+	trq_sel_fmap_9b_field_info
+},
+{"TRQ_SEL_FMAP_9C", 0x67c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9c_field_info),
+	trq_sel_fmap_9c_field_info
+},
+{"TRQ_SEL_FMAP_9D", 0x680,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9d_field_info),
+	trq_sel_fmap_9d_field_info
+},
+{"TRQ_SEL_FMAP_9E", 0x684,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9e_field_info),
+	trq_sel_fmap_9e_field_info
+},
+{"TRQ_SEL_FMAP_9F", 0x688,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_9f_field_info),
+	trq_sel_fmap_9f_field_info
+},
+{"TRQ_SEL_FMAP_A0", 0x68c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a0_field_info),
+	trq_sel_fmap_a0_field_info
+},
+{"TRQ_SEL_FMAP_A1", 0x690,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a1_field_info),
+	trq_sel_fmap_a1_field_info
+},
+{"TRQ_SEL_FMAP_A2", 0x694,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a2_field_info),
+	trq_sel_fmap_a2_field_info
+},
+{"TRQ_SEL_FMAP_A3", 0x698,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a3_field_info),
+	trq_sel_fmap_a3_field_info
+},
+{"TRQ_SEL_FMAP_A4", 0x69c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a4_field_info),
+	trq_sel_fmap_a4_field_info
+},
+{"TRQ_SEL_FMAP_A5", 0x6a0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a5_field_info),
+	trq_sel_fmap_a5_field_info
+},
+{"TRQ_SEL_FMAP_A6", 0x6a4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a6_field_info),
+	trq_sel_fmap_a6_field_info
+},
+{"TRQ_SEL_FMAP_A7", 0x6a8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a7_field_info),
+	trq_sel_fmap_a7_field_info
+},
+{"TRQ_SEL_FMAP_A8", 0x6ac,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a8_field_info),
+	trq_sel_fmap_a8_field_info
+},
+{"TRQ_SEL_FMAP_A9", 0x6b0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_a9_field_info),
+	trq_sel_fmap_a9_field_info
+},
+{"TRQ_SEL_FMAP_AA", 0x6b4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_aa_field_info),
+	trq_sel_fmap_aa_field_info
+},
+{"TRQ_SEL_FMAP_AB", 0x6b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ab_field_info),
+	trq_sel_fmap_ab_field_info
+},
+{"TRQ_SEL_FMAP_AC", 0x6bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ac_field_info),
+	trq_sel_fmap_ac_field_info
+},
+{"TRQ_SEL_FMAP_AD", 0x6d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ad_field_info),
+	trq_sel_fmap_ad_field_info
+},
+{"TRQ_SEL_FMAP_AE", 0x6d4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ae_field_info),
+	trq_sel_fmap_ae_field_info
+},
+{"TRQ_SEL_FMAP_AF", 0x6d8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_af_field_info),
+	trq_sel_fmap_af_field_info
+},
+{"TRQ_SEL_FMAP_B0", 0x6dc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b0_field_info),
+	trq_sel_fmap_b0_field_info
+},
+{"TRQ_SEL_FMAP_B1", 0x6e0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b1_field_info),
+	trq_sel_fmap_b1_field_info
+},
+{"TRQ_SEL_FMAP_B2", 0x6e4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b2_field_info),
+	trq_sel_fmap_b2_field_info
+},
+{"TRQ_SEL_FMAP_B3", 0x6e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b3_field_info),
+	trq_sel_fmap_b3_field_info
+},
+{"TRQ_SEL_FMAP_B4", 0x6ec,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b4_field_info),
+	trq_sel_fmap_b4_field_info
+},
+{"TRQ_SEL_FMAP_B5", 0x6f0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b5_field_info),
+	trq_sel_fmap_b5_field_info
+},
+{"TRQ_SEL_FMAP_B6", 0x6f4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b6_field_info),
+	trq_sel_fmap_b6_field_info
+},
+{"TRQ_SEL_FMAP_B7", 0x6f8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b7_field_info),
+	trq_sel_fmap_b7_field_info
+},
+{"TRQ_SEL_FMAP_B8", 0x6fc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b8_field_info),
+	trq_sel_fmap_b8_field_info
+},
+{"TRQ_SEL_FMAP_B9", 0x700,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_b9_field_info),
+	trq_sel_fmap_b9_field_info
+},
+{"TRQ_SEL_FMAP_BA", 0x704,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ba_field_info),
+	trq_sel_fmap_ba_field_info
+},
+{"TRQ_SEL_FMAP_BB", 0x708,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_bb_field_info),
+	trq_sel_fmap_bb_field_info
+},
+{"TRQ_SEL_FMAP_BC", 0x70c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_bc_field_info),
+	trq_sel_fmap_bc_field_info
+},
+{"TRQ_SEL_FMAP_BD", 0x710,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_bd_field_info),
+	trq_sel_fmap_bd_field_info
+},
+{"TRQ_SEL_FMAP_BE", 0x714,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_be_field_info),
+	trq_sel_fmap_be_field_info
+},
+{"TRQ_SEL_FMAP_BF", 0x718,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_bf_field_info),
+	trq_sel_fmap_bf_field_info
+},
+{"TRQ_SEL_FMAP_C0", 0x71c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c0_field_info),
+	trq_sel_fmap_c0_field_info
+},
+{"TRQ_SEL_FMAP_C1", 0x720,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c1_field_info),
+	trq_sel_fmap_c1_field_info
+},
+{"TRQ_SEL_FMAP_C2", 0x734,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c2_field_info),
+	trq_sel_fmap_c2_field_info
+},
+{"TRQ_SEL_FMAP_C3", 0x748,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c3_field_info),
+	trq_sel_fmap_c3_field_info
+},
+{"TRQ_SEL_FMAP_C4", 0x74c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c4_field_info),
+	trq_sel_fmap_c4_field_info
+},
+{"TRQ_SEL_FMAP_C5", 0x750,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c5_field_info),
+	trq_sel_fmap_c5_field_info
+},
+{"TRQ_SEL_FMAP_C6", 0x754,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c6_field_info),
+	trq_sel_fmap_c6_field_info
+},
+{"TRQ_SEL_FMAP_C7", 0x758,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c7_field_info),
+	trq_sel_fmap_c7_field_info
+},
+{"TRQ_SEL_FMAP_C8", 0x75c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c8_field_info),
+	trq_sel_fmap_c8_field_info
+},
+{"TRQ_SEL_FMAP_C9", 0x760,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_c9_field_info),
+	trq_sel_fmap_c9_field_info
+},
+{"TRQ_SEL_FMAP_CA", 0x764,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ca_field_info),
+	trq_sel_fmap_ca_field_info
+},
+{"TRQ_SEL_FMAP_CB", 0x768,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_cb_field_info),
+	trq_sel_fmap_cb_field_info
+},
+{"TRQ_SEL_FMAP_CC", 0x76c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_cc_field_info),
+	trq_sel_fmap_cc_field_info
+},
+{"TRQ_SEL_FMAP_CD", 0x770,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_cd_field_info),
+	trq_sel_fmap_cd_field_info
+},
+{"TRQ_SEL_FMAP_CE", 0x774,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ce_field_info),
+	trq_sel_fmap_ce_field_info
+},
+{"TRQ_SEL_FMAP_CF", 0x778,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_cf_field_info),
+	trq_sel_fmap_cf_field_info
+},
+{"TRQ_SEL_FMAP_D0", 0x77c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d0_field_info),
+	trq_sel_fmap_d0_field_info
+},
+{"TRQ_SEL_FMAP_D1", 0x780,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d1_field_info),
+	trq_sel_fmap_d1_field_info
+},
+{"TRQ_SEL_FMAP_D2", 0x784,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d2_field_info),
+	trq_sel_fmap_d2_field_info
+},
+{"TRQ_SEL_FMAP_D3", 0x788,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d3_field_info),
+	trq_sel_fmap_d3_field_info
+},
+{"TRQ_SEL_FMAP_D4", 0x78c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d4_field_info),
+	trq_sel_fmap_d4_field_info
+},
+{"TRQ_SEL_FMAP_D5", 0x790,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d5_field_info),
+	trq_sel_fmap_d5_field_info
+},
+{"TRQ_SEL_FMAP_D6", 0x794,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d6_field_info),
+	trq_sel_fmap_d6_field_info
+},
+{"TRQ_SEL_FMAP_D7", 0x798,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d7_field_info),
+	trq_sel_fmap_d7_field_info
+},
+{"TRQ_SEL_FMAP_D8", 0x79c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d8_field_info),
+	trq_sel_fmap_d8_field_info
+},
+{"TRQ_SEL_FMAP_D9", 0x7a0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_d9_field_info),
+	trq_sel_fmap_d9_field_info
+},
+{"TRQ_SEL_FMAP_DA", 0x7a4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_da_field_info),
+	trq_sel_fmap_da_field_info
+},
+{"TRQ_SEL_FMAP_DB", 0x7a8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_db_field_info),
+	trq_sel_fmap_db_field_info
+},
+{"TRQ_SEL_FMAP_DC", 0x7ac,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_dc_field_info),
+	trq_sel_fmap_dc_field_info
+},
+{"TRQ_SEL_FMAP_DD", 0x7b0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_dd_field_info),
+	trq_sel_fmap_dd_field_info
+},
+{"TRQ_SEL_FMAP_DE", 0x7b4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_de_field_info),
+	trq_sel_fmap_de_field_info
+},
+{"TRQ_SEL_FMAP_DF", 0x7b8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_df_field_info),
+	trq_sel_fmap_df_field_info
+},
+{"TRQ_SEL_FMAP_E0", 0x7bc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e0_field_info),
+	trq_sel_fmap_e0_field_info
+},
+{"TRQ_SEL_FMAP_E1", 0x7c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e1_field_info),
+	trq_sel_fmap_e1_field_info
+},
+{"TRQ_SEL_FMAP_E2", 0x7c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e2_field_info),
+	trq_sel_fmap_e2_field_info
+},
+{"TRQ_SEL_FMAP_E3", 0x7c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e3_field_info),
+	trq_sel_fmap_e3_field_info
+},
+{"TRQ_SEL_FMAP_E4", 0x7cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e4_field_info),
+	trq_sel_fmap_e4_field_info
+},
+{"TRQ_SEL_FMAP_E5", 0x7d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e5_field_info),
+	trq_sel_fmap_e5_field_info
+},
+{"TRQ_SEL_FMAP_E6", 0x7d4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e6_field_info),
+	trq_sel_fmap_e6_field_info
+},
+{"TRQ_SEL_FMAP_E7", 0x7d8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e7_field_info),
+	trq_sel_fmap_e7_field_info
+},
+{"TRQ_SEL_FMAP_E8", 0x7dc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e8_field_info),
+	trq_sel_fmap_e8_field_info
+},
+{"TRQ_SEL_FMAP_E9", 0x7e0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_e9_field_info),
+	trq_sel_fmap_e9_field_info
+},
+{"TRQ_SEL_FMAP_EA", 0x7e4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ea_field_info),
+	trq_sel_fmap_ea_field_info
+},
+{"TRQ_SEL_FMAP_EB", 0x7e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_eb_field_info),
+	trq_sel_fmap_eb_field_info
+},
+{"TRQ_SEL_FMAP_EC", 0x7ec,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ec_field_info),
+	trq_sel_fmap_ec_field_info
+},
+{"TRQ_SEL_FMAP_ED", 0x7f0,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ed_field_info),
+	trq_sel_fmap_ed_field_info
+},
+{"TRQ_SEL_FMAP_EE", 0x7f4,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ee_field_info),
+	trq_sel_fmap_ee_field_info
+},
+{"TRQ_SEL_FMAP_EF", 0x7f8,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_ef_field_info),
+	trq_sel_fmap_ef_field_info
+},
+{"TRQ_SEL_FMAP_F0", 0x7fc,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(trq_sel_fmap_f0_field_info),
+	trq_sel_fmap_f0_field_info
+},
+{"IND_CTXT_DATA_3", 0x804,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_data_3_field_info),
+	ind_ctxt_data_3_field_info
+},
+{"IND_CTXT_DATA_2", 0x808,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_data_2_field_info),
+	ind_ctxt_data_2_field_info
+},
+{"IND_CTXT_DATA_1", 0x80c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_data_1_field_info),
+	ind_ctxt_data_1_field_info
+},
+{"IND_CTXT_DATA_0", 0x810,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_data_0_field_info),
+	ind_ctxt_data_0_field_info
+},
+{"IND_CTXT3", 0x814,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt3_field_info),
+	ind_ctxt3_field_info
+},
+{"IND_CTXT2", 0x818,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt2_field_info),
+	ind_ctxt2_field_info
+},
+{"IND_CTXT1", 0x81c,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt1_field_info),
+	ind_ctxt1_field_info
+},
+{"IND_CTXT0", 0x820,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt0_field_info),
+	ind_ctxt0_field_info
+},
+{"IND_CTXT_CMD", 0x824,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(ind_ctxt_cmd_field_info),
+	ind_ctxt_cmd_field_info
+},
+{"C2H_TIMER_CNT_1", 0xa00,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_1_field_info),
+	c2h_timer_cnt_1_field_info
+},
+{"C2H_TIMER_CNT_2", 0xa04,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_2_field_info),
+	c2h_timer_cnt_2_field_info
+},
+{"C2H_TIMER_CNT_3", 0xa08,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_3_field_info),
+	c2h_timer_cnt_3_field_info
+},
+{"C2H_TIMER_CNT_4", 0xa0c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_4_field_info),
+	c2h_timer_cnt_4_field_info
+},
+{"C2H_TIMER_CNT_5", 0xa10,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_5_field_info),
+	c2h_timer_cnt_5_field_info
+},
+{"C2H_TIMER_CNT_6", 0xa14,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_6_field_info),
+	c2h_timer_cnt_6_field_info
+},
+{"C2H_TIMER_CNT_7", 0xa18,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_7_field_info),
+	c2h_timer_cnt_7_field_info
+},
+{"C2H_TIMER_CNT_8", 0xa1c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_8_field_info),
+	c2h_timer_cnt_8_field_info
+},
+{"C2H_TIMER_CNT_9", 0xa20,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_9_field_info),
+	c2h_timer_cnt_9_field_info
+},
+{"C2H_TIMER_CNT_A", 0xa24,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_a_field_info),
+	c2h_timer_cnt_a_field_info
+},
+{"C2H_TIMER_CNT_B", 0xa28,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_b_field_info),
+	c2h_timer_cnt_b_field_info
+},
+{"C2H_TIMER_CNT_C", 0xa2c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_c_field_info),
+	c2h_timer_cnt_c_field_info
+},
+{"C2H_TIMER_CNT_D", 0xa30,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_d_field_info),
+	c2h_timer_cnt_d_field_info
+},
+{"C2H_TIMER_CNT_E", 0xa34,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_e_field_info),
+	c2h_timer_cnt_e_field_info
+},
+{"C2H_TIMER_CNT_F", 0xa38,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_f_field_info),
+	c2h_timer_cnt_f_field_info
+},
+{"C2H_TIMER_CNT_10", 0xa3c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_timer_cnt_10_field_info),
+	c2h_timer_cnt_10_field_info
+},
+{"C2H_CNT_TH_1", 0xa40,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_1_field_info),
+	c2h_cnt_th_1_field_info
+},
+{"C2H_CNT_TH_2", 0xa44,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_2_field_info),
+	c2h_cnt_th_2_field_info
+},
+{"C2H_CNT_TH_3", 0xa48,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_3_field_info),
+	c2h_cnt_th_3_field_info
+},
+{"C2H_CNT_TH_4", 0xa4c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_4_field_info),
+	c2h_cnt_th_4_field_info
+},
+{"C2H_CNT_TH_5", 0xa50,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_5_field_info),
+	c2h_cnt_th_5_field_info
+},
+{"C2H_CNT_TH_6", 0xa54,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_6_field_info),
+	c2h_cnt_th_6_field_info
+},
+{"C2H_CNT_TH_7", 0xa58,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_7_field_info),
+	c2h_cnt_th_7_field_info
+},
+{"C2H_CNT_TH_8", 0xa5c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_8_field_info),
+	c2h_cnt_th_8_field_info
+},
+{"C2H_CNT_TH_9", 0xa60,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_9_field_info),
+	c2h_cnt_th_9_field_info
+},
+{"C2H_CNT_TH_A", 0xa64,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_a_field_info),
+	c2h_cnt_th_a_field_info
+},
+{"C2H_CNT_TH_B", 0xa68,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_b_field_info),
+	c2h_cnt_th_b_field_info
+},
+{"C2H_CNT_TH_C", 0xa6c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_c_field_info),
+	c2h_cnt_th_c_field_info
+},
+{"C2H_CNT_TH_D", 0xa70,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_d_field_info),
+	c2h_cnt_th_d_field_info
+},
+{"C2H_CNT_TH_E", 0xa74,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_e_field_info),
+	c2h_cnt_th_e_field_info
+},
+{"C2H_CNT_TH_F", 0xa78,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_f_field_info),
+	c2h_cnt_th_f_field_info
+},
+{"C2H_CNT_TH_10", 0xa7c,
+	1, 0, 0, 0,
+	0, QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_cnt_th_10_field_info),
+	c2h_cnt_th_10_field_info
+},
+{"C2H_QID2VEC_MAP_QID", 0xa80,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_qid2vec_map_qid_field_info),
+	c2h_qid2vec_map_qid_field_info
+},
+{"C2H_QID2VEC_MAP", 0xa84,
+	1, 0, 0, 0,
+	0, QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_qid2vec_map_field_info),
+	c2h_qid2vec_map_field_info
+},
+{"C2H_STAT_S_AXIS_C2H_ACCEPTED", 0xa88,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_s_axis_c2h_accepted_field_info),
+	c2h_stat_s_axis_c2h_accepted_field_info
+},
+{"C2H_STAT_S_AXIS_WRB_ACCEPTED", 0xa8c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_s_axis_wrb_accepted_field_info),
+	c2h_stat_s_axis_wrb_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_PKT_ACCEPTED", 0xa90,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_pkt_accepted_field_info),
+	c2h_stat_desc_rsp_pkt_accepted_field_info
+},
+{"C2H_STAT_AXIS_PKG_CMP", 0xa94,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_axis_pkg_cmp_field_info),
+	c2h_stat_axis_pkg_cmp_field_info
+},
+{"C2H_STAT_DESC_RSP_ACCEPTED", 0xa98,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_rsp_accepted_field_info),
+	c2h_stat_desc_rsp_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_CMP", 0xa9c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_rsp_cmp_field_info),
+	c2h_stat_desc_rsp_cmp_field_info
+},
+{"C2H_STAT_WRQ_OUT", 0xaa0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wrq_out_field_info),
+	c2h_stat_wrq_out_field_info
+},
+{"C2H_STAT_WPL_REN_ACCEPTED", 0xaa4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wpl_ren_accepted_field_info),
+	c2h_stat_wpl_ren_accepted_field_info
+},
+{"C2H_STAT_TOTAL_WRQ_LEN", 0xaa8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_total_wrq_len_field_info),
+	c2h_stat_total_wrq_len_field_info
+},
+{"C2H_STAT_TOTAL_WPL_LEN", 0xaac,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_total_wpl_len_field_info),
+	c2h_stat_total_wpl_len_field_info
+},
+{"C2H_BUF_SZ_0", 0xab0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_0_field_info),
+	c2h_buf_sz_0_field_info
+},
+{"C2H_BUF_SZ_1", 0xab4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_1_field_info),
+	c2h_buf_sz_1_field_info
+},
+{"C2H_BUF_SZ_2", 0xab8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_2_field_info),
+	c2h_buf_sz_2_field_info
+},
+{"C2H_BUF_SZ_3", 0xabc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_3_field_info),
+	c2h_buf_sz_3_field_info
+},
+{"C2H_BUF_SZ_4", 0xac0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_4_field_info),
+	c2h_buf_sz_4_field_info
+},
+{"C2H_BUF_SZ_5", 0xac4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_5_field_info),
+	c2h_buf_sz_5_field_info
+},
+{"C2H_BUF_SZ_7", 0xac8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_7_field_info),
+	c2h_buf_sz_7_field_info
+},
+{"C2H_BUF_SZ_8", 0xacc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_8_field_info),
+	c2h_buf_sz_8_field_info
+},
+{"C2H_BUF_SZ_9", 0xad0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_9_field_info),
+	c2h_buf_sz_9_field_info
+},
+{"C2H_BUF_SZ_10", 0xad4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_10_field_info),
+	c2h_buf_sz_10_field_info
+},
+{"C2H_BUF_SZ_11", 0xad8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_11_field_info),
+	c2h_buf_sz_11_field_info
+},
+{"C2H_BUF_SZ_12", 0xae0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_12_field_info),
+	c2h_buf_sz_12_field_info
+},
+{"C2H_BUF_SZ_13", 0xae4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_13_field_info),
+	c2h_buf_sz_13_field_info
+},
+{"C2H_BUF_SZ_14", 0xae8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_14_field_info),
+	c2h_buf_sz_14_field_info
+},
+{"C2H_BUF_SZ_15", 0xaec,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_buf_sz_15_field_info),
+	c2h_buf_sz_15_field_info
+},
+{"C2H_ERR_STAT", 0xaf0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_err_stat_field_info),
+	c2h_err_stat_field_info
+},
+{"C2H_ERR_MASK", 0xaf4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_err_mask_field_info),
+	c2h_err_mask_field_info
+},
+{"C2H_FATAL_ERR_STAT", 0xaf8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_stat_field_info),
+	c2h_fatal_err_stat_field_info
+},
+{"C2H_FATAL_ERR_MASK", 0xafc,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_mask_field_info),
+	c2h_fatal_err_mask_field_info
+},
+{"C2H_FATAL_ERR_ENABLE", 0xb00,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_fatal_err_enable_field_info),
+	c2h_fatal_err_enable_field_info
+},
+{"GLBL_ERR_INT", 0xb04,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(glbl_err_int_field_info),
+	glbl_err_int_field_info
+},
+{"C2H_PFCH_CFG", 0xb08,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pfch_cfg_field_info),
+	c2h_pfch_cfg_field_info
+},
+{"C2H_INT_TIMER_TICK", 0xb0c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_int_timer_tick_field_info),
+	c2h_int_timer_tick_field_info
+},
+{"C2H_STAT_DESC_RSP_DROP_ACCEPTED", 0xb10,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_drop_accepted_field_info),
+	c2h_stat_desc_rsp_drop_accepted_field_info
+},
+{"C2H_STAT_DESC_RSP_ERR_ACCEPTED", 0xb14,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_stat_desc_rsp_err_accepted_field_info),
+	c2h_stat_desc_rsp_err_accepted_field_info
+},
+{"C2H_STAT_DESC_REQ", 0xb18,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_desc_req_field_info),
+	c2h_stat_desc_req_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_0", 0xb1c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_0_field_info),
+	c2h_stat_dbg_dma_eng_0_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_1", 0xb20,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_1_field_info),
+	c2h_stat_dbg_dma_eng_1_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_2", 0xb24,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_2_field_info),
+	c2h_stat_dbg_dma_eng_2_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_3", 0xb28,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_3_field_info),
+	c2h_stat_dbg_dma_eng_3_field_info
+},
+{"C2H_DBG_PFCH_ERR_CTXT", 0xb2c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_err_ctxt_field_info),
+	c2h_dbg_pfch_err_ctxt_field_info
+},
+{"C2H_FIRST_ERR_QID", 0xb30,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_first_err_qid_field_info),
+	c2h_first_err_qid_field_info
+},
+{"STAT_NUM_WRB_IN", 0xb34,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_in_field_info),
+	stat_num_wrb_in_field_info
+},
+{"STAT_NUM_WRB_OUT", 0xb38,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_out_field_info),
+	stat_num_wrb_out_field_info
+},
+{"STAT_NUM_WRB_DRP", 0xb3c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_wrb_drp_field_info),
+	stat_num_wrb_drp_field_info
+},
+{"STAT_NUM_STAT_DESC_OUT", 0xb40,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_stat_desc_out_field_info),
+	stat_num_stat_desc_out_field_info
+},
+{"STAT_NUM_DSC_CRDT_SENT", 0xb44,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_dsc_crdt_sent_field_info),
+	stat_num_dsc_crdt_sent_field_info
+},
+{"STAT_NUM_FCH_DSC_RCVD", 0xb48,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_fch_dsc_rcvd_field_info),
+	stat_num_fch_dsc_rcvd_field_info
+},
+{"STAT_NUM_BYP_DSC_RCVD", 0xb4c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(stat_num_byp_dsc_rcvd_field_info),
+	stat_num_byp_dsc_rcvd_field_info
+},
+{"C2H_WRB_COAL_CFG", 0xb50,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_wrb_coal_cfg_field_info),
+	c2h_wrb_coal_cfg_field_info
+},
+{"C2H_INTR_H2C_REQ", 0xb54,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_h2c_req_field_info),
+	c2h_intr_h2c_req_field_info
+},
+{"C2H_INTR_C2H_MM_REQ", 0xb58,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_mm_req_field_info),
+	c2h_intr_c2h_mm_req_field_info
+},
+{"C2H_INTR_ERR_INT_REQ", 0xb5c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_err_int_req_field_info),
+	c2h_intr_err_int_req_field_info
+},
+{"C2H_INTR_C2H_ST_REQ", 0xb60,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_req_field_info),
+	c2h_intr_c2h_st_req_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK", 0xb64,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_ack_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_ack_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL", 0xb68,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_fail_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_fail_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX", 0xb6c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info),
+	c2h_intr_h2c_err_c2h_mm_msix_no_msix_field_info
+},
+{"C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL", 0xb70,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info),
+	c2h_intr_h2c_err_c2h_mm_ctxt_inval_field_info
+},
+{"C2H_INTR_C2H_ST_MSIX_ACK", 0xb74,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_msix_ack_field_info),
+	c2h_intr_c2h_st_msix_ack_field_info
+},
+{"C2H_INTR_C2H_ST_MSIX_FAIL", 0xb78,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(c2h_intr_c2h_st_msix_fail_field_info),
+	c2h_intr_c2h_st_msix_fail_field_info
+},
+{"C2H_INTR_C2H_ST_NO_MSIX", 0xb7c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_c2h_st_no_msix_field_info),
+	c2h_intr_c2h_st_no_msix_field_info
+},
+{"C2H_INTR_C2H_ST_CTXT_INVAL", 0xb80,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_intr_c2h_st_ctxt_inval_field_info),
+	c2h_intr_c2h_st_ctxt_inval_field_info
+},
+{"C2H_STAT_WR_CMP", 0xb84,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_wr_cmp_field_info),
+	c2h_stat_wr_cmp_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_4", 0xb88,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_4_field_info),
+	c2h_stat_dbg_dma_eng_4_field_info
+},
+{"C2H_STAT_DBG_DMA_ENG_5", 0xb8c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_dbg_dma_eng_5_field_info),
+	c2h_stat_dbg_dma_eng_5_field_info
+},
+{"C2H_DBG_PFCH_QID", 0xb90,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_qid_field_info),
+	c2h_dbg_pfch_qid_field_info
+},
+{"C2H_DBG_PFCH", 0xb94,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_dbg_pfch_field_info),
+	c2h_dbg_pfch_field_info
+},
+{"C2H_INT_DBG", 0xb98,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_int_dbg_field_info),
+	c2h_int_dbg_field_info
+},
+{"C2H_STAT_IMM_ACCEPTED", 0xb9c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_imm_accepted_field_info),
+	c2h_stat_imm_accepted_field_info
+},
+{"C2H_STAT_MARKER_ACCEPTED", 0xba0,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_marker_accepted_field_info),
+	c2h_stat_marker_accepted_field_info
+},
+{"C2H_STAT_DISABLE_CMP_ACCEPTED", 0xba4,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_stat_disable_cmp_accepted_field_info),
+	c2h_stat_disable_cmp_accepted_field_info
+},
+{"C2H_PLD_FIFO_CRDT_CNT", 0xba8,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_pld_fifo_crdt_cnt_field_info),
+	c2h_pld_fifo_crdt_cnt_field_info
+},
+{"H2C_ERR_STAT", 0xe00,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_err_stat_field_info),
+	h2c_err_stat_field_info
+},
+{"H2C_ERR_MASK", 0xe04,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_err_mask_field_info),
+	h2c_err_mask_field_info
+},
+{"H2C_FIRST_ERR_QID", 0xe08,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_first_err_qid_field_info),
+	h2c_first_err_qid_field_info
+},
+{"H2C_DBG_REG0", 0xe0c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg0_field_info),
+	h2c_dbg_reg0_field_info
+},
+{"H2C_DBG_REG1", 0xe10,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg1_field_info),
+	h2c_dbg_reg1_field_info
+},
+{"H2C_DBG_REG2", 0xe14,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg2_field_info),
+	h2c_dbg_reg2_field_info
+},
+{"H2C_DBG_REG3", 0xe18,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg3_field_info),
+	h2c_dbg_reg3_field_info
+},
+{"H2C_DBG_REG4", 0xe1c,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_dbg_reg4_field_info),
+	h2c_dbg_reg4_field_info
+},
+{"H2C_FATAL_ERR_EN", 0xe20,
+	1, 0, 0, 0,
+	0, QDMA_ST_MODE, QDMA_REG_READ_PF_VF,
+	ARRAY_SIZE(h2c_fatal_err_en_field_info),
+	h2c_fatal_err_en_field_info
+},
+{"C2H_CHANNEL_CTL", 0x1004,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_channel_ctl_field_info),
+	c2h_channel_ctl_field_info
+},
+{"C2H_CHANNEL_CTL_1", 0x1008,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_channel_ctl_1_field_info),
+	c2h_channel_ctl_1_field_info
+},
+{"C2H_MM_STATUS", 0x1040,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_status_field_info),
+	c2h_mm_status_field_info
+},
+{"C2H_CHANNEL_CMPL_DESC_CNT", 0x1048,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_channel_cmpl_desc_cnt_field_info),
+	c2h_channel_cmpl_desc_cnt_field_info
+},
+{"C2H_MM_ERR_CODE_ENABLE_MASK", 0x1054,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_code_enable_mask_field_info),
+	c2h_mm_err_code_enable_mask_field_info
+},
+{"C2H_MM_ERR_CODE", 0x1058,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_code_field_info),
+	c2h_mm_err_code_field_info
+},
+{"C2H_MM_ERR_INFO", 0x105c,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_err_info_field_info),
+	c2h_mm_err_info_field_info
+},
+{"C2H_MM_PERF_MON_CTL", 0x10c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_ctl_field_info),
+	c2h_mm_perf_mon_ctl_field_info
+},
+{"C2H_MM_PERF_MON_CYCLE_CNT0", 0x10c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_cycle_cnt0_field_info),
+	c2h_mm_perf_mon_cycle_cnt0_field_info
+},
+{"C2H_MM_PERF_MON_CYCLE_CNT1", 0x10c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_cycle_cnt1_field_info),
+	c2h_mm_perf_mon_cycle_cnt1_field_info
+},
+{"C2H_MM_PERF_MON_DATA_CNT0", 0x10cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_data_cnt0_field_info),
+	c2h_mm_perf_mon_data_cnt0_field_info
+},
+{"C2H_MM_PERF_MON_DATA_CNT1", 0x10d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_perf_mon_data_cnt1_field_info),
+	c2h_mm_perf_mon_data_cnt1_field_info
+},
+{"C2H_MM_DBG", 0x10e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(c2h_mm_dbg_field_info),
+	c2h_mm_dbg_field_info
+},
+{"H2C_CHANNEL_CTL", 0x1204,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_channel_ctl_field_info),
+	h2c_channel_ctl_field_info
+},
+{"H2C_CHANNEL_CTL_1", 0x1208,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_channel_ctl_1_field_info),
+	h2c_channel_ctl_1_field_info
+},
+{"H2C_CHANNEL_CTL_2", 0x120c,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_channel_ctl_2_field_info),
+	h2c_channel_ctl_2_field_info
+},
+{"H2C_MM_STATUS", 0x1240,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_status_field_info),
+	h2c_mm_status_field_info
+},
+{"H2C_CHANNEL_CMPL_DESC_CNT", 0x1248,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_channel_cmpl_desc_cnt_field_info),
+	h2c_channel_cmpl_desc_cnt_field_info
+},
+{"H2C_MM_ERR_CODE_ENABLE_MASK", 0x1254,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_code_enable_mask_field_info),
+	h2c_mm_err_code_enable_mask_field_info
+},
+{"H2C_MM_ERR_CODE", 0x1258,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_code_field_info),
+	h2c_mm_err_code_field_info
+},
+{"H2C_MM_ERR_INFO", 0x125c,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_err_info_field_info),
+	h2c_mm_err_info_field_info
+},
+{"H2C_MM_PERF_MON_CTL", 0x12c0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_ctl_field_info),
+	h2c_mm_perf_mon_ctl_field_info
+},
+{"H2C_MM_PERF_MON_CYCLE_CNT0", 0x12c4,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_cycle_cnt0_field_info),
+	h2c_mm_perf_mon_cycle_cnt0_field_info
+},
+{"H2C_MM_PERF_MON_CYCLE_CNT1", 0x12c8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_cycle_cnt1_field_info),
+	h2c_mm_perf_mon_cycle_cnt1_field_info
+},
+{"H2C_MM_PERF_MON_DATA_CNT0", 0x12cc,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_data_cnt0_field_info),
+	h2c_mm_perf_mon_data_cnt0_field_info
+},
+{"H2C_MM_PERF_MON_DATA_CNT1", 0x12d0,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_perf_mon_data_cnt1_field_info),
+	h2c_mm_perf_mon_data_cnt1_field_info
+},
+{"H2C_MM_DBG", 0x12e8,
+	1, 0, 0, 0,
+	0, QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(h2c_mm_dbg_field_info),
+	h2c_mm_dbg_field_info
+},
+{"FUNC_STATUS_REG", 0x2400,
+	1, 0, 0, 0,
+	0, QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(func_status_reg_field_info),
+	func_status_reg_field_info
+},
+{"FUNC_CMD_REG", 0x2404,
+	1, 0, 0, 0,
+	0, QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(func_cmd_reg_field_info),
+	func_cmd_reg_field_info
+},
+{"FUNC_INTERRUPT_VECTOR_REG", 0x2408,
+	1, 0, 0, 0,
+	0, QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(func_interrupt_vector_reg_field_info),
+	func_interrupt_vector_reg_field_info
+},
+{"TARGET_FUNC_REG", 0x240c,
+	1, 0, 0, 0,
+	0, QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(target_func_reg_field_info),
+	target_func_reg_field_info
+},
+{"FUNC_INTERRUPT_CTL_REG", 0x2410,
+	1, 0, 0, 0,
+	0, QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY,
+	ARRAY_SIZE(func_interrupt_ctl_reg_field_info),
+	func_interrupt_ctl_reg_field_info
+},
+};
+
+uint32_t qdma_s80_hard_config_num_regs_get(void)
+{
+	return (sizeof(qdma_s80_hard_config_regs) /
+		sizeof(qdma_s80_hard_config_regs[0]));
+}
+
+struct xreg_info *qdma_s80_hard_config_regs_get(void)
+{
+	return qdma_s80_hard_config_regs;
+}
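
[Editor's note] The two accessor functions above are the only public entry
points into this generated table: a caller fetches the array and its element
count and walks the entries. A minimal dump loop might look like the sketch
below; the `name`, `addr` and `repeat` member names and the reg_read()
callback are illustrative assumptions, since the actual struct xreg_info
layout is declared in qdma_reg_dump.h, outside this hunk.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: walk the s80 register table and print each register.
     * Entries with repeat > 1 describe a run of 32-bit registers at a
     * stride of 4 bytes, as the offsets in the table above suggest.
     */
    static void dump_s80_config_regs(void *dev_hndl,
            uint32_t (*reg_read)(void *hndl, uint32_t off))
    {
        struct xreg_info *regs = qdma_s80_hard_config_regs_get();
        uint32_t nregs = qdma_s80_hard_config_num_regs_get();
        uint32_t i, j;

        for (i = 0; i < nregs; i++) {
            for (j = 0; j < regs[i].repeat; j++)
                printf("%-40s [0x%04x] = 0x%08x\n", regs[i].name,
                       regs[i].addr + j * 4,
                       reg_read(dev_hndl, regs[i].addr + j * 4));
        }
    }
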
diff --git a/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
new file mode 100644
index 0000000000..8103702ac9
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
@@ -0,0 +1,6106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include "qdma_soft_access.h"
+#include "qdma_soft_reg.h"
+#include "qdma_reg_dump.h"
+
+#ifdef ENABLE_WPP_TRACING
+#include "qdma_soft_access.tmh"
+#endif
+
+/** QDMA Context array size */
+#define QDMA_FMAP_NUM_WORDS			2
+#define QDMA_SW_CONTEXT_NUM_WORDS		5
+#define QDMA_PFETCH_CONTEXT_NUM_WORDS		2
+#define QDMA_CMPT_CONTEXT_NUM_WORDS		5
+#define QDMA_HW_CONTEXT_NUM_WORDS		2
+#define QDMA_CR_CONTEXT_NUM_WORDS		1
+#define QDMA_IND_INTR_CONTEXT_NUM_WORDS		3
+#define QDMA_REG_IND_CTXT_REG_COUNT		8
+
+#define QDMA_REG_GROUP_1_START_ADDR	0x000
+#define QDMA_REG_GROUP_2_START_ADDR	0x400
+#define QDMA_REG_GROUP_3_START_ADDR	0xB00
+#define QDMA_REG_GROUP_4_START_ADDR	0x1014
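+
+/*
+ * Usage sketch (illustrative, not part of the original submission): the
+ * word counts above size the staging buffers used for indirect context
+ * programming. A software context, for example, is marshalled into
+ * QDMA_SW_CONTEXT_NUM_WORDS 32-bit words and written through the
+ * IND_CTXT_DATA window (offset 0x804 in the register table below), which
+ * spans at most QDMA_REG_IND_CTXT_REG_COUNT registers:
+ *
+ *	uint32_t sw_ctxt[QDMA_SW_CONTEXT_NUM_WORDS] = {0};
+ *	uint32_t i;
+ *
+ *	for (i = 0; i < QDMA_SW_CONTEXT_NUM_WORDS; i++)
+ *		reg_write(dev_hndl, 0x804 + i * 4, sw_ctxt[i]);
+ *
+ * reg_write() here stands in for the access library's own register write
+ * helper; the real code uses its offset macros from qdma_soft_reg.h.
+ */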
+
+static void qdma_hw_st_h2c_err_process(void *dev_hndl);
+static void qdma_hw_st_c2h_err_process(void *dev_hndl);
+static void qdma_hw_desc_err_process(void *dev_hndl);
+static void qdma_hw_trq_err_process(void *dev_hndl);
+static void qdma_hw_ram_sbe_err_process(void *dev_hndl);
+static void qdma_hw_ram_dbe_err_process(void *dev_hndl);
+
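+/*
+ * The static handlers declared above are wired into the qdma_err_info[]
+ * table further below, one entry per hardware error, alongside the
+ * matching mask/status register offsets. A sketch of how an interrupt
+ * path might dispatch through that table (the qdma_hw_err_process member
+ * name is an assumption; see struct qdma_hw_err_info):
+ *
+ *	static void qdma_err_dispatch(void *dev_hndl, uint32_t err_idx)
+ *	{
+ *		if (err_idx < QDMA_ERRS_ALL &&
+ *		    qdma_err_info[err_idx].qdma_hw_err_process)
+ *			qdma_err_info[err_idx].qdma_hw_err_process(dev_hndl);
+ *	}
+ */
+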
+static struct xreg_info qdma_config_regs[] = {
+	/* QDMA_TRQ_SEL_GLBL1 (0x00000) */
+	{"CFG_BLOCK_ID",
+		0x00, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_BUSDEV",
+		0x04, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_PCIE_MAX_PL_SZ",
+		0x08, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_PCIE_MAX_RDRQ_SZ",
+		0x0C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_SYS_ID",
+		0x10, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_MSI_EN",
+		0x14, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_PCIE_DATA_WIDTH",
+		0x18, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_PCIE_CTRL",
+		0x1C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_AXI_USR_MAX_PL_SZ",
+		0x40, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_AXI_USR_MAX_RDRQ_SZ",
+		0x44, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_MISC_CTRL",
+		0x4C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"CFG_SCRATCH_REG",
+		0x80, 8, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"QDMA_RAM_SBE_MSK_A",
+		0xF0, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"QDMA_RAM_SBE_STS_A",
+		0xF4, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"QDMA_RAM_DBE_MSK_A",
+		0xF8, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"QDMA_RAM_DBE_STS_A",
+		0xFC, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_GLBL2 (0x00100) */
+	{"GLBL2_ID",
+		0x100, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_PF_BL_INT",
+		0x104, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_PF_VF_BL_INT",
+		0x108, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_PF_BL_EXT",
+		0x10C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_PF_VF_BL_EXT",
+		0x110, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_CHNL_INST",
+		0x114, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_CHNL_QDMA",
+		0x118, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_CHNL_STRM",
+		0x11C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_QDMA_CAP",
+		0x120, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_PASID_CAP",
+		0x128, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_FUNC_RET",
+		0x12C, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_SYS_ID",
+		0x130, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_MISC_CAP",
+		0x134, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_DBG_PCIE_RQ",
+		0x1B8, 2, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_DBG_AXIMM_WR",
+		0x1C0, 2, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL2_DBG_AXIMM_RD",
+		0x1C8, 2, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_GLBL (0x00200) */
+	{"GLBL_RNGSZ",
+		0x204, 16, 1, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"GLBL_ERR_STAT",
+		0x248, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_ERR_MASK",
+		0x24C, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_CFG",
+		0x250, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_ERR_STS",
+		0x254, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_ERR_MSK",
+		0x258, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_ERR_LOG",
+		0x25C, 2,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_TRQ_ERR_STS",
+		0x264, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_TRQ_ERR_MSK",
+		0x268, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_TRQ_ERR_LOG",
+		0x26C, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_DBG_DAT",
+		0x270, 2,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_DSC_ERR_LOG2",
+		0x27C, 1,  0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_INTERRUPT_CFG",
+		0x288, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+
+	/* QDMA_TRQ_SEL_FMAP (0x00400 - 0x7FC) */
+	/* TODO: max 256, display 4 for now */
+	{"TRQ_SEL_FMAP",
+		0x400, 4, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_IND (0x00800) */
+	{"IND_CTXT_DATA",
+		0x804, 8, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"IND_CTXT_MASK",
+		0x824, 8, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"IND_CTXT_CMD",
+		0x844, 1, 0, 0, 0, 0,
+		QDMA_MM_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_C2H (0x00A00) */
+	{"C2H_TIMER_CNT",
+	0xA00, 16, 0, 0, 0, 0,
+		QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_CNT_THRESH",
+	0xA40, 16, 0, 0, 0, 0,
+		QDMA_COMPLETION_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_S_AXIS_C2H_ACCEPTED",
+		0xA88, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_S_AXIS_CMPT_ACCEPTED",
+		0xA8C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_DESC_RSP_PKT_ACCEPTED",
+		0xA90, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_AXIS_PKG_CMP",
+		0xA94, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_DESC_RSP_ACCEPTED",
+		0xA98, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_DESC_RSP_CMP",
+		0xA9C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_WRQ_OUT",
+		0xAA0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_WPL_REN_ACCEPTED",
+		0xAA4, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_TOTAL_WRQ_LEN",
+		0xAA8, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_TOTAL_WPL_LEN",
+		0xAAC, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_BUF_SZ",
+		0xAB0, 16, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_ERR_STAT",
+		0xAF0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_ERR_MASK",
+		0xAF4, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_FATAL_ERR_STAT",
+		0xAF8, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_FATAL_ERR_MASK",
+		0xAFC, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_FATAL_ERR_ENABLE",
+		0xB00, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"GLBL_ERR_INT",
+		0xB04, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_PFCH_CFG",
+		0xB08, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INT_TIMER_TICK",
+		0xB0C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_DESC_RSP_DROP_ACCEPTED",
+		0xB10, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_DESC_RSP_ERR_ACCEPTED",
+		0xB14, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_STAT_DESC_REQ",
+		0xB18, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_DEBUG_DMA_ENG",
+		0xB1C, 4, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DBG_PFCH_ERR_CTXT",
+		0xB2C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_FIRST_ERR_QID",
+		0xB30, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"STAT_NUM_CMPT_IN",
+		0xB34, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_CMPT_OUT",
+		0xB38, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_CMPT_DRP",
+		0xB3C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_STAT_DESC_OUT",
+		0xB40, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_DSC_CRDT_SENT",
+		0xB44, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_FCH_DSC_RCVD",
+		0xB48, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"STAT_NUM_BYP_DSC_RCVD",
+		0xB4C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_CMPT_COAL_CFG",
+		0xB50, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_H2C_REQ",
+		0xB54, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_C2H_MM_REQ",
+		0xB58, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_ERR_INT_REQ",
+		0xB5C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_C2H_ST_REQ",
+		0xB60, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_H2C_ERR_MM_MSIX_ACK",
+		0xB64, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_H2C_ERR_MM_MSIX_FAIL",
+		0xB68, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_H2C_ERR_MM_NO_MSIX",
+		0xB6C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_H2C_ERR_MM_CTXT_INVAL",
+		0xB70, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_C2H_ST_MSIX_ACK",
+		0xB74, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_C2H_ST_MSIX_FAIL",
+		0xB78, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"C2H_INTR_C2H_ST_NO_MSIX",
+		0xB7C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_C2H_ST_CTXT_INVAL",
+		0xB80, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_WR_CMP",
+		0xB84, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_DEBUG_DMA_ENG_4",
+		0xB88, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_DEBUG_DMA_ENG_5",
+		0xB8C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DBG_PFCH_QID",
+		0xB90, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DBG_PFCH",
+		0xB94, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INT_DEBUG",
+		0xB98, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_IMM_ACCEPTED",
+		0xB9C, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_MARKER_ACCEPTED",
+		0xBA0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_STAT_DISABLE_CMP_ACCEPTED",
+		0xBA4, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_C2H_PAYLOAD_FIFO_CRDT_CNT",
+		0xBA8, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_DYN_REQ",
+		0xBAC, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_INTR_DYN_MSIX",
+		0xBB0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DROP_LEN_MISMATCH",
+		0xBB4, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DROP_DESC_RSP_LEN",
+		0xBB8, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DROP_QID_FIFO_LEN",
+		0xBBC, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_DROP_PAYLOAD_CNT",
+		0xBC0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"QDMA_C2H_CMPT_FORMAT",
+		0xBC4, 7, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_PFCH_CACHE_DEPTH",
+		0xBE0, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_CMPT_COAL_BUF_DEPTH",
+		0xBE4, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_PFCH_CRDT",
+		0xBE8, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_H2C (0x00E00) Register Space */
+	{"H2C_ERR_STAT",
+		0xE00, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"H2C_ERR_MASK",
+		0xE04, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"H2C_FIRST_ERR_QID",
+		0xE08, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_VF, 0, NULL},
+	{"H2C_DBG_REG",
+		0xE0C, 5, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_FATAL_ERR_EN",
+		0xE20, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_REQ_THROT",
+		0xE24, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_ALN_DBG_REG0",
+		0xE28, 1, 0, 0, 0, 0,
+		QDMA_ST_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_C2H_MM (0x1000) */
+	{"C2H_MM_CONTROL",
+		0x1004, 3, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_STATUS",
+		0x1040, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_CMPL_DSC_CNT",
+		0x1048, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_ERR_CODE_EN_MASK",
+		0x1054, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_ERR_CODE",
+		0x1058, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_ERR_INFO",
+		0x105C, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_PERF_MON_CTRL",
+		0x10C0, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_PERF_MON_CY_CNT",
+		0x10C4, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_PERF_MON_DATA_CNT",
+		0x10CC, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"C2H_MM_DBG_INFO",
+		0x10E8, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_TRQ_SEL_H2C_MM (0x1200) */
+	{"H2C_MM_CONTROL",
+		0x1204, 3, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_STATUS",
+		0x1240, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_CMPL_DSC_CNT",
+		0x1248, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_ERR_CODE_EN_MASK",
+		0x1254, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_ERR_CODE",
+		0x1258, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_ERR_INFO",
+		0x125C, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_PERF_MON_CTRL",
+		0x12C0, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_PERF_MON_CY_CNT",
+		0x12C4, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_PERF_MON_DATA_CNT",
+		0x12CC, 2, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_DBG_INFO",
+		0x12E8, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"H2C_MM_REQ_THROT",
+		0x12EC, 1, 0, 0, 0, 0,
+		QDMA_MM_MODE, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	/* QDMA_PF_MAILBOX (0x2400) */
+	{"FUNC_STATUS",
+		0x2400, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"FUNC_CMD",
+		 0x2404, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"FUNC_INTR_VEC",
+		 0x2408, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"TARGET_FUNC",
+		 0x240C, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"INTR_CTRL",
+		 0x2410, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"PF_ACK",
+		 0x2420, 8, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"FLR_CTRL_STATUS",
+		 0x2500, 1, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"MSG_IN",
+		 0x2800, 32, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+	{"MSG_OUT",
+		0x2C00, 32, 0, 0, 0, 0,
+		QDMA_MAILBOX, QDMA_REG_READ_PF_ONLY, 0, NULL},
+
+	{"", 0, 0, 0, 0, 0, 0, 0, 0, 0, NULL }
+};
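+
+/*
+ * Note: unlike the s80 table in the previous file, which is paired with a
+ * count accessor, qdma_config_regs[] is terminated by the empty-name
+ * sentinel entry above, so walkers stop on the first entry whose name is
+ * empty. An illustrative walk (the 'name' member and dump_one_reg() are
+ * assumptions for the sketch):
+ *
+ *	const struct xreg_info *reg;
+ *
+ *	for (reg = qdma_config_regs; reg->name[0] != '\0'; reg++)
+ *		dump_one_reg(dev_hndl, reg);
+ */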
+
+static struct qdma_hw_err_info qdma_err_info[QDMA_ERRS_ALL] = {
+	/* Descriptor errors */
+	{
+		QDMA_DSC_ERR_POISON,
+		"Poison error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_POISON_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_UR_CA,
+		"Unsupported request or completer aborted error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_UR_CA_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_PARAM,
+		"Parameter mismatch error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_PARAM_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_ADDR,
+		"Address mismatch error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_ADDR_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_TAG,
+		"Unexpected tag error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_TAG_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_FLR,
+		"FLR error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_FLR_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_TIMEOUT,
+		"Timed out error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_TIMEOUT_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_DAT_POISON,
+		"Poison data error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_DAT_POISON_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_FLR_CANCEL,
+		"Descriptor fetch cancelled due to FLR error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_FLR_CANCEL_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_DMA,
+		"DMA engine error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_DMA_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_DSC,
+		"Invalid PIDX update error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_DSC_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_RQ_CANCEL,
+		"Descriptor fetch cancelled due to disable register status error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_RQ_CANCEL_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_DBE,
+		"UNC_ERR_RAM_DBE error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_DBE_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_SBE,
+		"UNC_ERR_RAM_SBE error",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_SBE_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+	{
+		QDMA_DSC_ERR_ALL,
+		"All Descriptor errors",
+		QDMA_OFFSET_GLBL_DSC_ERR_MASK,
+		QDMA_OFFSET_GLBL_DSC_ERR_STAT,
+		QDMA_GLBL_DSC_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_DSC_MASK,
+		&qdma_hw_desc_err_process
+	},
+
+	/* TRQ errors */
+	{
+		QDMA_TRQ_ERR_UNMAPPED,
+		"Access targeted unmapped register space error",
+		QDMA_OFFSET_GLBL_TRQ_ERR_MASK,
+		QDMA_OFFSET_GLBL_TRQ_ERR_STAT,
+		QDMA_GLBL_TRQ_ERR_UNMAPPED_MASK,
+		QDMA_GLBL_ERR_TRQ_MASK,
+		&qdma_hw_trq_err_process
+	},
+	{
+		QDMA_TRQ_ERR_QID_RANGE,
+		"Qid range error",
+		QDMA_OFFSET_GLBL_TRQ_ERR_MASK,
+		QDMA_OFFSET_GLBL_TRQ_ERR_STAT,
+		QDMA_GLBL_TRQ_ERR_QID_RANGE_MASK,
+		QDMA_GLBL_ERR_TRQ_MASK,
+		&qdma_hw_trq_err_process
+	},
+	{
+		QDMA_TRQ_ERR_VF_ACCESS,
+		"Invalid VF access error",
+		QDMA_OFFSET_GLBL_TRQ_ERR_MASK,
+		QDMA_OFFSET_GLBL_TRQ_ERR_STAT,
+		QDMA_GLBL_TRQ_ERR_VF_ACCESS_MASK,
+		QDMA_GLBL_ERR_TRQ_MASK,
+		&qdma_hw_trq_err_process
+	},
+	{
+		QDMA_TRQ_ERR_TCP_TIMEOUT,
+		"Timeout on request error",
+		QDMA_OFFSET_GLBL_TRQ_ERR_MASK,
+		QDMA_OFFSET_GLBL_TRQ_ERR_STAT,
+		QDMA_GLBL_TRQ_ERR_TCP_TIMEOUT_MASK,
+		QDMA_GLBL_ERR_TRQ_MASK,
+		&qdma_hw_trq_err_process
+	},
+	{
+		QDMA_TRQ_ERR_ALL,
+		"All TRQ errors",
+		QDMA_OFFSET_GLBL_TRQ_ERR_MASK,
+		QDMA_OFFSET_GLBL_TRQ_ERR_STAT,
+		QDMA_GLBL_TRQ_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_TRQ_MASK,
+		&qdma_hw_trq_err_process
+	},
+
+	/* C2H Errors */
+	{
+		QDMA_ST_C2H_ERR_MTY_MISMATCH,
+		"MTY mismatch error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_MTY_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_LEN_MISMATCH,
+		"Packet length mismatch error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_LEN_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_QID_MISMATCH,
+		"Qid mismatch error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_QID_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_DESC_RSP_ERR,
+		"Descriptor error bit set",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_DESC_RSP_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+		"Data parity error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_ENG_WPL_DATA_PAR_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_MSI_INT_FAIL,
+		"MSI got a fail response error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_MSI_INT_FAIL_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_ERR_DESC_CNT,
+		"Descriptor count error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_ERR_DESC_CNT_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+		"Port id in packet and pfetch ctxt mismatch error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_PORTID_CTXT_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_PORTID_BYP_IN_MISMATCH,
+		"Port id in packet and bypass interface mismatch error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_PORTID_BYP_IN_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_CMPT_INV_Q_ERR,
+		"Writeback on invalid queue error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_CMPT_INV_Q_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_CMPT_QFULL_ERR,
+		"Completion queue gets full error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_CMPT_QFULL_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_CMPT_CIDX_ERR,
+		"Bad CIDX update by the software error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_CMPT_CIDX_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_CMPT_PRTY_ERR,
+		"C2H completion Parity error",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_CMPT_PRTY_ERR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_C2H_ERR_ALL,
+		"All C2h errors",
+		QDMA_OFFSET_C2H_ERR_MASK,
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_C2H_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+
+	/* C2H fatal errors */
+	{
+		QDMA_ST_FATAL_ERR_MTY_MISMATCH,
+		"Fatal MTY mismatch error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_MTY_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_LEN_MISMATCH,
+		"Fatal Len mismatch error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_LEN_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_QID_MISMATCH,
+		"Fatal Qid mismatch error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_QID_MISMATCH_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_TIMER_FIFO_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_PFCH_II_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_CMPT_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_CMPT_CTXT_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_PFCH_CTXT_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_INT_CTXT_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_TUSER_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_TUSER_FIFO_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_QID_FIFO_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_WPL_DATA_PAR,
+		"RAM double bit fatal error",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_WPL_DATA_PAR_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+	{
+		QDMA_ST_FATAL_ERR_ALL,
+		"All fatal errors",
+		QDMA_OFFSET_C2H_FATAL_ERR_MASK,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_C2H_FATAL_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_ST_C2H_MASK,
+		&qdma_hw_st_c2h_err_process
+	},
+
+	/* H2C St errors */
+	{
+		QDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+		"Zero length descriptor error",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_ZERO_LEN_DESC_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+	{
+		QDMA_ST_H2C_ERR_CSI_MOP,
+		"Non EOP descriptor received error",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_CSI_MOP_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+	{
+		QDMA_ST_H2C_ERR_NO_DMA_DSC,
+		"No DMA descriptor received error",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_NO_DMA_DSC_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+	{
+		QDMA_ST_H2C_ERR_SBE,
+		"Single bit error detected on H2C-ST data error",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_SBE_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+	{
+		QDMA_ST_H2C_ERR_DBE,
+		"Double bit error detected on H2C-ST data error",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_DBE_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+	{
+		QDMA_ST_H2C_ERR_ALL,
+		"All H2C errors",
+		QDMA_OFFSET_H2C_ERR_MASK,
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_H2C_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_ST_H2C_MASK,
+		&qdma_hw_st_h2c_err_process
+	},
+
+	/* SBE errors */
+	{
+		QDMA_SBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_MI_H2C0_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_MI_C2H0_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_H2C_RD_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_H2C_WR_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_C2H_RD_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_C2H_WR_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_FUNC_MAP,
+		"Function map RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_FUNC_MAP_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DSC_HW_CTXT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DSC_CRD_RCV_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DSC_SW_CTXT_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DSC_CPLI,
+		"Descriptor engine fetch completion information RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DSC_CPLI_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DSC_CPLD_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_PASID_CTXT_RAM,
+		"PASID configuration RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_PASID_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_TIMER_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_PAYLOAD_FIFO_RAM,
+		"C2H ST payload RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_PAYLOAD_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_QID_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_TUSER_FIFO_RAM,
+		"C2H ST TUSER RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_TUSER_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_WRB_COAL_DATA_RAM,
+		"Completion Coalescing RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_WRB_COAL_DATA_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_INT_QID2VEC_RAM,
+		"Interrupt QID2VEC RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_INT_QID2VEC_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_INT_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_DESC_REQ_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_PFCH_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_WRB_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_PFCH_LL_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_H2C_PEND_FIFO,
+		"H2C ST pending fifo RAM single bit ECC error",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_H2C_PEND_FIFO_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+	{
+		QDMA_SBE_ERR_ALL,
+		"All SBE errors",
+		QDMA_OFFSET_RAM_SBE_MASK,
+		QDMA_OFFSET_RAM_SBE_STAT,
+		QDMA_SBE_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_RAM_SBE_MASK,
+		&qdma_hw_ram_sbe_err_process
+	},
+
+
+	/* DBE Errors */
+	{
+		QDMA_DBE_ERR_MI_H2C0_DAT,
+		"H2C MM data buffer double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_MI_H2C0_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_MI_C2H0_DAT,
+		"C2H MM data buffer double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_MI_C2H0_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_H2C_RD_BRG_DAT,
+		"Bridge master read double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_H2C_RD_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_H2C_WR_BRG_DAT,
+		"Bridge master write double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_H2C_WR_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_C2H_RD_BRG_DAT,
+		"Bridge slave read data buffer double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_C2H_RD_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_C2H_WR_BRG_DAT,
+		"Bridge slave write data buffer double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_C2H_WR_BRG_DAT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_FUNC_MAP,
+		"Function map RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_FUNC_MAP_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DSC_HW_CTXT,
+		"Descriptor engine hardware context RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DSC_HW_CTXT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DSC_CRD_RCV,
+		"Descriptor engine receive credit context RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DSC_CRD_RCV_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DSC_SW_CTXT,
+		"Descriptor engine software context RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DSC_SW_CTXT_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DSC_CPLI,
+	"Descriptor engine fetch completion information RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DSC_CPLI_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DSC_CPLD,
+		"Descriptor engine fetch completion data RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DSC_CPLD_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_PASID_CTXT_RAM,
+		"PASID configuration RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_PASID_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_TIMER_FIFO_RAM,
+		"Timer fifo RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_TIMER_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_PAYLOAD_FIFO_RAM,
+		"C2H ST payload RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_PAYLOAD_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_QID_FIFO_RAM,
+		"C2H ST QID FIFO RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_QID_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_TUSER_FIFO_RAM,
+		"C2H ST TUSER RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_TUSER_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_WRB_COAL_DATA_RAM,
+		"Completion Coalescing RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_WRB_COAL_DATA_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_INT_QID2VEC_RAM,
+		"Interrupt QID2VEC RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_INT_QID2VEC_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_INT_CTXT_RAM,
+		"Interrupt context RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_INT_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_DESC_REQ_FIFO_RAM,
+		"C2H ST descriptor request RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_DESC_REQ_FIFO_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_PFCH_CTXT_RAM,
+		"C2H ST prefetch RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_PFCH_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_WRB_CTXT_RAM,
+		"C2H ST completion context RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_WRB_CTXT_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_PFCH_LL_RAM,
+		"C2H ST prefetch list RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_PFCH_LL_RAM_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_H2C_PEND_FIFO,
+		"H2C pending fifo RAM double bit ECC error",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_H2C_PEND_FIFO_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	},
+	{
+		QDMA_DBE_ERR_ALL,
+		"All DBE errors",
+		QDMA_OFFSET_RAM_DBE_MASK,
+		QDMA_OFFSET_RAM_DBE_STAT,
+		QDMA_DBE_ERR_ALL_MASK,
+		QDMA_GLBL_ERR_RAM_DBE_MASK,
+		&qdma_hw_ram_dbe_err_process
+	}
+};
+
+static int32_t all_hw_errs[TOTAL_LEAF_ERROR_AGGREGATORS] = {
+	QDMA_DSC_ERR_ALL,
+	QDMA_TRQ_ERR_ALL,
+	QDMA_ST_C2H_ERR_ALL,
+	QDMA_ST_FATAL_ERR_ALL,
+	QDMA_ST_H2C_ERR_ALL,
+	QDMA_SBE_ERR_ALL,
+	QDMA_DBE_ERR_ALL
+};
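+
+/*
+ * Illustrative sketch only: the *_ERR_ALL aggregator entries collected
+ * above let a caller arm every leaf error in one pass. Assuming a
+ * hypothetical enable helper qdma_error_enable(dev_hndl, err_idx), the
+ * loop would look like:
+ *
+ *	int i, rv = 0;
+ *
+ *	for (i = 0; i < TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+ *		rv = qdma_error_enable(dev_hndl, all_hw_errs[i]);
+ *		if (rv < 0)
+ *			break;
+ *	}
+ */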
+
+static int qdma_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int qdma_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid);
+static int qdma_indirect_reg_read(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data);
+static int qdma_indirect_reg_write(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt);
+
+
+static struct qctx_entry sw_ctxt_entries[] = {
+	{"PIDX", 0},
+	{"IRQ Arm", 0},
+	{"Function Id", 0},
+	{"Queue Enable", 0},
+	{"Fetch Credit Enable", 0},
+	{"Write back/Intr Check", 0},
+	{"Write back/Intr Interval", 0},
+	{"Address Translation", 0},
+	{"Fetch Max", 0},
+	{"Ring Size", 0},
+	{"Descriptor Size", 0},
+	{"Bypass Enable", 0},
+	{"MM Channel", 0},
+	{"Writeback Enable", 0},
+	{"Interrupt Enable", 0},
+	{"Port Id", 0},
+	{"Interrupt No Last", 0},
+	{"Error", 0},
+	{"Writeback Error Sent", 0},
+	{"IRQ Request", 0},
+	{"Marker Disable", 0},
+	{"Is Memory Mapped", 0},
+	{"Descriptor Ring Base Addr (Low)", 0},
+	{"Descriptor Ring Base Addr (High)", 0},
+	{"Interrupt Vector/Ring Index", 0},
+	{"Interrupt Aggregation", 0},
+};
+
+static struct qctx_entry hw_ctxt_entries[] = {
+	{"CIDX", 0},
+	{"Credits Consumed", 0},
+	{"Descriptors Pending", 0},
+	{"Queue Invalid No Desc Pending", 0},
+	{"Eviction Pending", 0},
+	{"Fetch Pending", 0},
+};
+
+static struct qctx_entry credit_ctxt_entries[] = {
+	{"Credit", 0},
+};
+
+static struct qctx_entry cmpt_ctxt_entries[] = {
+	{"Enable Status Desc Update", 0},
+	{"Enable Interrupt", 0},
+	{"Trigger Mode", 0},
+	{"Function Id", 0},
+	{"Counter Index", 0},
+	{"Timer Index", 0},
+	{"Interrupt State", 0},
+	{"Color", 0},
+	{"Ring Size", 0},
+	{"Base Address (Low)", 0},
+	{"Base Address (High)", 0},
+	{"Descriptor Size", 0},
+	{"PIDX", 0},
+	{"CIDX", 0},
+	{"Valid", 0},
+	{"Error", 0},
+	{"Trigger Pending", 0},
+	{"Timer Running", 0},
+	{"Full Update", 0},
+	{"Over Flow Check Disable", 0},
+	{"Address Translation", 0},
+	{"Interrupt Vector/Ring Index", 0},
+	{"Interrupt Aggregation", 0},
+};
+
+static struct qctx_entry c2h_pftch_ctxt_entries[] = {
+	{"Bypass", 0},
+	{"Buffer Size Index", 0},
+	{"Port Id", 0},
+	{"Error", 0},
+	{"Prefetch Enable", 0},
+	{"In Prefetch", 0},
+	{"Software Credit", 0},
+	{"Valid", 0},
+};
+
+static struct qctx_entry ind_intr_ctxt_entries[] = {
+	{"valid", 0},
+	{"vec", 0},
+	{"int_st", 0},
+	{"color", 0},
+	{"baddr_4k (Low)", 0},
+	{"baddr_4k (High)", 0},
+	{"page_size", 0},
+	{"pidx", 0},
+	{"at", 0},
+};
+
+uint32_t qdma_soft_reg_dump_buf_len(void)
+{
+	uint32_t length = ((sizeof(qdma_config_regs) /
+			sizeof(qdma_config_regs[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE;
+	return length;
+}
+
+uint32_t qdma_get_config_num_regs(void)
+{
+	return (sizeof(qdma_config_regs) /
+		sizeof(qdma_config_regs[0]));
+}
+
+struct xreg_info *qdma_get_config_regs(void)
+{
+	return qdma_config_regs;
+}
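+
+/*
+ * Usage sketch (illustrative, not part of the driver): a debug caller
+ * would typically size and allocate the register dump buffer from the
+ * helpers above, e.g. with DPDK's rte_zmalloc():
+ *
+ *	uint32_t len = qdma_soft_reg_dump_buf_len();
+ *	char *buf = rte_zmalloc(NULL, len, 0);
+ *
+ *	if (buf != NULL) {
+ *		... dump qdma_get_config_num_regs() registers from
+ *		    qdma_get_config_regs() into buf ...
+ *		rte_free(buf);
+ *	}
+ */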
+
+int qdma_soft_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen)
+{
+	uint32_t len = 0;
+	int rv = 0;
+
+	*buflen = 0;
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT) {
+		len += (((sizeof(cmpt_ctxt_entries) /
+			sizeof(cmpt_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	} else {
+		len += (((sizeof(sw_ctxt_entries) /
+				sizeof(sw_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(hw_ctxt_entries) /
+			sizeof(hw_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		len += (((sizeof(credit_ctxt_entries) /
+			sizeof(credit_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			len += (((sizeof(cmpt_ctxt_entries) /
+				sizeof(cmpt_ctxt_entries[0])) + 1) *
+				REG_DUMP_SIZE_PER_LINE);
+
+			len += (((sizeof(c2h_pftch_ctxt_entries) /
+				sizeof(c2h_pftch_ctxt_entries[0]))
+				+ 1) * REG_DUMP_SIZE_PER_LINE);
+		}
+	}
+
+	*buflen = len;
+	return rv;
+}
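+
+/*
+ * Worked example: for a streaming (st = 1) C2H queue the budget above
+ * adds up to (26 + 1) + (6 + 1) + (1 + 1) + (23 + 1) + (8 + 1) = 69
+ * lines, i.e. *buflen = 69 * REG_DUMP_SIZE_PER_LINE bytes; the extra
+ * line per table is presumably headroom for its header in
+ * dump_soft_context() below.
+ */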
+
+/*
+ * qdma_fill_sw_ctxt() - Helper function to fill the SW context into
+ *			 the sw_ctxt_entries dump table
+ */
+static void qdma_fill_sw_ctxt(struct qdma_descq_sw_ctxt *sw_ctxt)
+{
+	sw_ctxt_entries[0].value = sw_ctxt->pidx;
+	sw_ctxt_entries[1].value = sw_ctxt->irq_arm;
+	sw_ctxt_entries[2].value = sw_ctxt->fnc_id;
+	sw_ctxt_entries[3].value = sw_ctxt->qen;
+	sw_ctxt_entries[4].value = sw_ctxt->frcd_en;
+	sw_ctxt_entries[5].value = sw_ctxt->wbi_chk;
+	sw_ctxt_entries[6].value = sw_ctxt->wbi_intvl_en;
+	sw_ctxt_entries[7].value = sw_ctxt->at;
+	sw_ctxt_entries[8].value = sw_ctxt->fetch_max;
+	sw_ctxt_entries[9].value = sw_ctxt->rngsz_idx;
+	sw_ctxt_entries[10].value = sw_ctxt->desc_sz;
+	sw_ctxt_entries[11].value = sw_ctxt->bypass;
+	sw_ctxt_entries[12].value = sw_ctxt->mm_chn;
+	sw_ctxt_entries[13].value = sw_ctxt->wbk_en;
+	sw_ctxt_entries[14].value = sw_ctxt->irq_en;
+	sw_ctxt_entries[15].value = sw_ctxt->port_id;
+	sw_ctxt_entries[16].value = sw_ctxt->irq_no_last;
+	sw_ctxt_entries[17].value = sw_ctxt->err;
+	sw_ctxt_entries[18].value = sw_ctxt->err_wb_sent;
+	sw_ctxt_entries[19].value = sw_ctxt->irq_req;
+	sw_ctxt_entries[20].value = sw_ctxt->mrkr_dis;
+	sw_ctxt_entries[21].value = sw_ctxt->is_mm;
+	sw_ctxt_entries[22].value = sw_ctxt->ring_bs_addr & 0xFFFFFFFF;
+	sw_ctxt_entries[23].value =
+		(sw_ctxt->ring_bs_addr >> 32) & 0xFFFFFFFF;
+	sw_ctxt_entries[24].value = sw_ctxt->vec;
+	sw_ctxt_entries[25].value = sw_ctxt->intr_aggr;
+}
+
+/*
+ * qdma_fill_cmpt_ctxt() - Helper function to fill the completion context
+ *                         into the cmpt_ctxt_entries dump table
+ *
+ */
+static void qdma_fill_cmpt_ctxt(struct qdma_descq_cmpt_ctxt *cmpt_ctxt)
+{
+	cmpt_ctxt_entries[0].value = cmpt_ctxt->en_stat_desc;
+	cmpt_ctxt_entries[1].value = cmpt_ctxt->en_int;
+	cmpt_ctxt_entries[2].value = cmpt_ctxt->trig_mode;
+	cmpt_ctxt_entries[3].value = cmpt_ctxt->fnc_id;
+	cmpt_ctxt_entries[4].value = cmpt_ctxt->counter_idx;
+	cmpt_ctxt_entries[5].value = cmpt_ctxt->timer_idx;
+	cmpt_ctxt_entries[6].value = cmpt_ctxt->in_st;
+	cmpt_ctxt_entries[7].value = cmpt_ctxt->color;
+	cmpt_ctxt_entries[8].value = cmpt_ctxt->ringsz_idx;
+	cmpt_ctxt_entries[9].value = cmpt_ctxt->bs_addr & 0xFFFFFFFF;
+	cmpt_ctxt_entries[10].value =
+		(cmpt_ctxt->bs_addr >> 32) & 0xFFFFFFFF;
+	cmpt_ctxt_entries[11].value = cmpt_ctxt->desc_sz;
+	cmpt_ctxt_entries[12].value = cmpt_ctxt->pidx;
+	cmpt_ctxt_entries[13].value = cmpt_ctxt->cidx;
+	cmpt_ctxt_entries[14].value = cmpt_ctxt->valid;
+	cmpt_ctxt_entries[15].value = cmpt_ctxt->err;
+	cmpt_ctxt_entries[16].value = cmpt_ctxt->user_trig_pend;
+	cmpt_ctxt_entries[17].value = cmpt_ctxt->timer_running;
+	cmpt_ctxt_entries[18].value = cmpt_ctxt->full_upd;
+	cmpt_ctxt_entries[19].value = cmpt_ctxt->ovf_chk_dis;
+	cmpt_ctxt_entries[20].value = cmpt_ctxt->at;
+	cmpt_ctxt_entries[21].value = cmpt_ctxt->vec;
+	cmpt_ctxt_entries[22].value = cmpt_ctxt->int_aggr;
+}
+
+/*
+ * qdma_fill_hw_ctxt() - Helper function to fill the HW context into
+ *			 the hw_ctxt_entries dump table
+ */
+static void qdma_fill_hw_ctxt(struct qdma_descq_hw_ctxt *hw_ctxt)
+{
+	hw_ctxt_entries[0].value = hw_ctxt->cidx;
+	hw_ctxt_entries[1].value = hw_ctxt->crd_use;
+	hw_ctxt_entries[2].value = hw_ctxt->dsc_pend;
+	hw_ctxt_entries[3].value = hw_ctxt->idl_stp_b;
+	hw_ctxt_entries[4].value = hw_ctxt->evt_pnd;
+	hw_ctxt_entries[5].value = hw_ctxt->fetch_pnd;
+}
+
+/*
+ * qdma_fill_credit_ctxt() - Helper function to fill the credit context
+ *                           into the credit_ctxt_entries dump table
+ *
+ */
+static void qdma_fill_credit_ctxt(struct qdma_descq_credit_ctxt *cr_ctxt)
+{
+	credit_ctxt_entries[0].value = cr_ctxt->credit;
+}
+
+/*
+ * qdma_fill_pfetch_ctxt() - Helper function to fill the prefetch context
+ *                           into the c2h_pftch_ctxt_entries dump table
+ *
+ */
+static void qdma_fill_pfetch_ctxt(struct qdma_descq_prefetch_ctxt *pfetch_ctxt)
+{
+	c2h_pftch_ctxt_entries[0].value = pfetch_ctxt->bypass;
+	c2h_pftch_ctxt_entries[1].value = pfetch_ctxt->bufsz_idx;
+	c2h_pftch_ctxt_entries[2].value = pfetch_ctxt->port_id;
+	c2h_pftch_ctxt_entries[3].value = pfetch_ctxt->err;
+	c2h_pftch_ctxt_entries[4].value = pfetch_ctxt->pfch_en;
+	c2h_pftch_ctxt_entries[5].value = pfetch_ctxt->pfch;
+	c2h_pftch_ctxt_entries[6].value = pfetch_ctxt->sw_crdt;
+	c2h_pftch_ctxt_entries[7].value = pfetch_ctxt->valid;
+}
+
+/*
+ * dump_soft_context() - Helper function to dump the queue context into a
+ * string buffer
+ *
+ * return len - length of the string copied into the buffer
+ */
+static int dump_soft_context(struct qdma_descq_context *queue_context,
+		uint8_t st,	enum qdma_dev_q_type q_type,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	if (queue_context == NULL) {
+		qdma_log_error("%s: queue_context is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT) {
+		qdma_fill_cmpt_ctxt(&queue_context->cmpt_ctxt);
+	} else if (q_type == QDMA_DEV_Q_TYPE_H2C) {
+		qdma_fill_sw_ctxt(&queue_context->sw_ctxt);
+		qdma_fill_hw_ctxt(&queue_context->hw_ctxt);
+		qdma_fill_credit_ctxt(&queue_context->cr_ctxt);
+	} else if (q_type == QDMA_DEV_Q_TYPE_C2H) {
+		qdma_fill_sw_ctxt(&queue_context->sw_ctxt);
+		qdma_fill_hw_ctxt(&queue_context->hw_ctxt);
+		qdma_fill_credit_ctxt(&queue_context->cr_ctxt);
+		if (st) {
+			qdma_fill_pfetch_ctxt(&queue_context->pfetch_ctxt);
+			qdma_fill_cmpt_ctxt(&queue_context->cmpt_ctxt);
+		}
+	}
+
+	for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+		rv = QDMA_SNPRINTF_S(banner + i,
+			(DEBGFS_LINE_SZ - i),
+			sizeof("-"), "-");
+		if (rv < 0 || rv > (int)sizeof("-")) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+	}
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		/* SW context dump */
+		n = sizeof(sw_ctxt_entries) / sizeof((sw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "SW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len,
+				(buf_sz - len), DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				sw_ctxt_entries[i].name,
+				sw_ctxt_entries[i].value,
+				sw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* HW context dump */
+		n = sizeof(hw_ctxt_entries) / sizeof((hw_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s", "HW Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len,
+				(buf_sz - len), DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				hw_ctxt_entries[i].name,
+				hw_ctxt_entries[i].value,
+				hw_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		/* Credit context dump */
+		n = sizeof(credit_ctxt_entries) /
+			sizeof((credit_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Credit Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len,
+				(buf_sz - len), DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				credit_ctxt_entries[i].name,
+				credit_ctxt_entries[i].value,
+				credit_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	if (q_type == QDMA_DEV_Q_TYPE_CMPT ||
+			(st && q_type == QDMA_DEV_Q_TYPE_C2H)) {
+		/* Completion context dump */
+		n = sizeof(cmpt_ctxt_entries) / sizeof((cmpt_ctxt_entries)[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Completion Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len,
+				(buf_sz - len), DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				cmpt_ctxt_entries[i].name,
+				cmpt_ctxt_entries[i].value,
+				cmpt_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+		/* Prefetch context dump */
+		n = sizeof(c2h_pftch_ctxt_entries) /
+			sizeof(c2h_pftch_ctxt_entries[0]);
+		for (i = 0; i < n; i++) {
+			if (len >= buf_sz ||
+				((len + DEBGFS_LINE_SZ) >= buf_sz))
+				goto INSUF_BUF_EXIT;
+
+			if (i == 0) {
+				if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+					goto INSUF_BUF_EXIT;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%40s",
+					"Prefetch Context");
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+
+				rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+					DEBGFS_LINE_SZ, "\n%s\n", banner);
+				if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+					qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+						__LINE__, __func__,
+						rv);
+					goto INSUF_BUF_EXIT;
+				}
+				len += rv;
+			}
+
+			rv = QDMA_SNPRINTF_S(buf + len,
+				(buf_sz - len), DEBGFS_LINE_SZ,
+				"%-47s %#-10x %u\n",
+				c2h_pftch_ctxt_entries[i].name,
+				c2h_pftch_ctxt_entries[i].value,
+				c2h_pftch_ctxt_entries[i].value);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
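+
+/*
+ * Usage sketch (illustrative only): callers are expected to size the
+ * buffer with qdma_soft_context_buf_len() before dumping. Assuming a
+ * populated struct qdma_descq_context queue_context:
+ *
+ *	uint32_t buflen = 0;
+ *	char *buf;
+ *	int len;
+ *
+ *	if (qdma_soft_context_buf_len(1, QDMA_DEV_Q_TYPE_C2H, &buflen) < 0)
+ *		return;
+ *	buf = rte_zmalloc(NULL, buflen, 0);
+ *	if (buf == NULL)
+ *		return;
+ *	len = dump_soft_context(&queue_context, 1, QDMA_DEV_Q_TYPE_C2H,
+ *			buf, buflen);
+ *	...
+ *	rte_free(buf);
+ */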
+
+/*
+ * qdma_indirect_reg_invalidate() - helper function to invalidate indirect
+ *					context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_indirect_reg_invalidate(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_INV;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD,
+			QDMA_IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_indirect_reg_clear() - helper function to clear indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_indirect_reg_clear(void *dev_hndl,
+		enum ind_ctxt_cmd_sel sel, uint16_t hw_qid)
+{
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_CLR;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD,
+			QDMA_IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_indirect_reg_read() - helper function to read indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_indirect_reg_read(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t cnt, uint32_t *data)
+{
+	uint32_t index = 0, reg_addr = QDMA_OFFSET_IND_CTXT_DATA;
+	union qdma_ind_ctxt_cmd cmd;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* set command register */
+	cmd.word = 0;
+	cmd.bits.qid = hw_qid;
+	cmd.bits.op = QDMA_CTXT_CMD_RD;
+	cmd.bits.sel = sel;
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD, cmd.word);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD,
+			QDMA_IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	for (index = 0; index < cnt; index++, reg_addr += sizeof(uint32_t))
+		data[index] = qdma_reg_read(dev_hndl, reg_addr);
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*
+ * qdma_indirect_reg_write() - helper function to write indirect
+ *				context registers.
+ *
+ * return -QDMA_ERR_HWACC_BUSY_TIMEOUT if the busy bit does not clear
+ *	within the polling timeout, QDMA_SUCCESS otherwise
+ */
+static int qdma_indirect_reg_write(void *dev_hndl, enum ind_ctxt_cmd_sel sel,
+		uint16_t hw_qid, uint32_t *data, uint16_t cnt)
+{
+	uint32_t index, reg_addr;
+	struct qdma_indirect_ctxt_regs regs;
+	uint32_t *wr_data = (uint32_t *)&regs;
+
+	qdma_reg_access_lock(dev_hndl);
+
+	/* write the context data */
+	for (index = 0; index < QDMA_IND_CTXT_DATA_NUM_REGS; index++) {
+		if (index < cnt)
+			regs.qdma_ind_ctxt_data[index] = data[index];
+		else
+			regs.qdma_ind_ctxt_data[index] = 0;
+		regs.qdma_ind_ctxt_mask[index] = 0xFFFFFFFF;
+	}
+
+	regs.cmd.word = 0;
+	regs.cmd.bits.qid = hw_qid;
+	regs.cmd.bits.op = QDMA_CTXT_CMD_WR;
+	regs.cmd.bits.sel = sel;
+	reg_addr = QDMA_OFFSET_IND_CTXT_DATA;
+
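+	/*
+	 * The data words, mask words and the command word are laid out
+	 * contiguously in struct qdma_indirect_ctxt_regs (and, by
+	 * assumption here, in the register map starting at
+	 * QDMA_OFFSET_IND_CTXT_DATA), so a single loop of
+	 * (2 * QDMA_IND_CTXT_DATA_NUM_REGS) + 1 writes programs the data,
+	 * the mask and finally the command that kicks off the update.
+	 */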
+	for (index = 0; index < ((2 * QDMA_IND_CTXT_DATA_NUM_REGS) + 1);
+			index++, reg_addr += sizeof(uint32_t))
+		qdma_reg_write(dev_hndl, reg_addr, wr_data[index]);
+
+	/* check if the operation went through well */
+	if (hw_monitor_reg(dev_hndl, QDMA_OFFSET_IND_CTXT_CMD,
+			QDMA_IND_CTXT_CMD_BUSY_MASK, 0,
+			QDMA_REG_POLL_DFLT_INTERVAL_US,
+			QDMA_REG_POLL_DFLT_TIMEOUT_US)) {
+		qdma_reg_access_release(dev_hndl);
+		qdma_log_error("%s: hw_monitor_reg failed with err:%d\n",
+						__func__,
+					   -QDMA_ERR_HWACC_BUSY_TIMEOUT);
+		return -QDMA_ERR_HWACC_BUSY_TIMEOUT;
+	}
+
+	qdma_reg_access_release(dev_hndl);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_version() - Function to get the qdma version
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	whether the device is a VF (1) or a PF (0)
+ * @version_info:	pointer to hold the version info
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_get_version(void *dev_hndl, uint8_t is_vf,
+		struct qdma_hw_version_info *version_info)
+{
+	uint32_t reg_val = 0;
+	uint32_t reg_addr = (is_vf) ? QDMA_OFFSET_VF_VERSION :
+			QDMA_OFFSET_GLBL2_MISC_CAP;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, reg_addr);
+
+	qdma_fetch_version_details(is_vf, reg_val, version_info);
+
+	return QDMA_SUCCESS;
+}
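+
+/*
+ * Usage sketch (illustrative only), assuming a valid dev_hndl:
+ *
+ *	struct qdma_hw_version_info info;
+ *
+ *	if (qdma_get_version(dev_hndl, 0, &info) == QDMA_SUCCESS)
+ *		... consume the version fields of info ...
+ */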
+
+/*****************************************************************************/
+/**
+ * qdma_fmap_write() - create fmap context and program it
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ * @config:	pointer to the fmap data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_fmap_write(void *dev_hndl, uint16_t func_id,
+		   const struct qdma_fmap_cfg *config)
+{
+	uint32_t fmap[QDMA_FMAP_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl || !config) {
+		qdma_log_error("%s: dev_handle=%p fmap=%p NULL, err:%d\n",
+				__func__, dev_hndl, config,
+				-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_log_debug("%s: func_id=%hu, qbase=%hu, qmax=%hu\n", __func__,
+				   func_id, config->qbase, config->qmax);
+	fmap[num_words_count++] =
+		FIELD_SET(QDMA_FMAP_CTXT_W0_QID_MASK, config->qbase);
+	fmap[num_words_count++] =
+		FIELD_SET(QDMA_FMAP_CTXT_W1_QID_MAX_MASK, config->qmax);
+
+	return qdma_indirect_reg_write(dev_hndl, sel, func_id,
+			fmap, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_fmap_read() - read fmap context
+ *
+ * @dev_hndl:   device handle
+ * @func_id:    function id of the device
+ * @config:	pointer to the output fmap data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_fmap_read(void *dev_hndl, uint16_t func_id,
+			 struct qdma_fmap_cfg *config)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t fmap[QDMA_FMAP_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl || !config) {
+		qdma_log_error("%s: dev_handle=%p fmap=%p NULL, err:%d\n",
+						__func__, dev_hndl, config,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, func_id,
+			QDMA_FMAP_NUM_WORDS, fmap);
+	if (rv < 0)
+		return rv;
+
+	config->qbase = FIELD_GET(QDMA_FMAP_CTXT_W0_QID_MASK, fmap[0]);
+	config->qmax = FIELD_GET(QDMA_FMAP_CTXT_W1_QID_MAX_MASK, fmap[1]);
+
+	qdma_log_debug("%s: func_id=%u, qbase=%u, qmax=%u\n", __func__,
+				   func_id, config->qbase, config->qmax);
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_fmap_clear() - clear fmap context
+ *
+ * @dev_hndl:   device handle
+ * @func_id:    function id of the device
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_fmap_clear(void *dev_hndl, uint16_t func_id)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_FMAP;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_log_debug("%s: func_id=%hu\n", __func__, func_id);
+	return qdma_indirect_reg_clear(dev_hndl, sel, func_id);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_fmap_conf() - configure fmap context
+ *
+ * @dev_hndl:	device handle
+ * @func_id:	function id of the device
+ * @config:	pointer to the fmap data
+ * @access_type:	HW access type (enum qdma_hw_access_type) value;
+ *		QDMA_HW_ACCESS_INVALIDATE is not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_fmap_conf(void *dev_hndl, uint16_t func_id,
+				struct qdma_fmap_cfg *config,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_fmap_read(dev_hndl, func_id, config);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_fmap_write(dev_hndl, func_id, config);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_fmap_clear(dev_hndl, func_id);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
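+
+/*
+ * Usage sketch (illustrative only): programming the function map so
+ * that a function owns queues [qbase, qbase + qmax):
+ *
+ *	struct qdma_fmap_cfg config;
+ *
+ *	config.qbase = 0;
+ *	config.qmax = 32;
+ *	if (qdma_fmap_conf(dev_hndl, func_id, &config,
+ *			QDMA_HW_ACCESS_WRITE) < 0)
+ *		... handle the failure ...
+ */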
+
+/*****************************************************************************/
+/**
+ * qdma_sw_context_write() - create sw context and program it
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	1 for a C2H queue, 0 for an H2C queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the SW context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_sw_context_write(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 const struct qdma_descq_sw_ctxt *ctxt)
+{
+	uint32_t sw_ctxt[QDMA_SW_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle=%p sw_ctxt=%p NULL, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_SW_CTXT_W0_PIDX, ctxt->pidx) |
+		FIELD_SET(QDMA_SW_CTXT_W0_IRQ_ARM_MASK, ctxt->irq_arm) |
+		FIELD_SET(QDMA_SW_CTXT_W0_FUNC_ID_MASK, ctxt->fnc_id);
+
+	qdma_log_debug("%s: pidx=%x, irq_arm=%x, fnc_id=%x\n",
+			 __func__, ctxt->pidx, ctxt->irq_arm, ctxt->fnc_id);
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_SW_CTXT_W1_QEN_MASK, ctxt->qen) |
+		FIELD_SET(QDMA_SW_CTXT_W1_FCRD_EN_MASK, ctxt->frcd_en) |
+		FIELD_SET(QDMA_SW_CTXT_W1_WBI_CHK_MASK, ctxt->wbi_chk) |
+		FIELD_SET(QDMA_SW_CTXT_W1_WB_INT_EN_MASK, ctxt->wbi_intvl_en) |
+		FIELD_SET(QDMA_SW_CTXT_W1_AT_MASK, ctxt->at) |
+		FIELD_SET(QDMA_SW_CTXT_W1_FETCH_MAX_MASK, ctxt->fetch_max) |
+		FIELD_SET(QDMA_SW_CTXT_W1_RNG_SZ_MASK, ctxt->rngsz_idx) |
+		FIELD_SET(QDMA_SW_CTXT_W1_DSC_SZ_MASK, ctxt->desc_sz) |
+		FIELD_SET(QDMA_SW_CTXT_W1_BYP_MASK, ctxt->bypass) |
+		FIELD_SET(QDMA_SW_CTXT_W1_MM_CHN_MASK, ctxt->mm_chn) |
+		FIELD_SET(QDMA_SW_CTXT_W1_WBK_EN_MASK, ctxt->wbk_en) |
+		FIELD_SET(QDMA_SW_CTXT_W1_IRQ_EN_MASK, ctxt->irq_en) |
+		FIELD_SET(QDMA_SW_CTXT_W1_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(QDMA_SW_CTXT_W1_IRQ_NO_LAST_MASK, ctxt->irq_no_last) |
+		FIELD_SET(QDMA_SW_CTXT_W1_ERR_MASK, ctxt->err) |
+		FIELD_SET(QDMA_SW_CTXT_W1_ERR_WB_SENT_MASK, ctxt->err_wb_sent) |
+		FIELD_SET(QDMA_SW_CTXT_W1_IRQ_REQ_MASK, ctxt->irq_req) |
+		FIELD_SET(QDMA_SW_CTXT_W1_MRKR_DIS_MASK, ctxt->mrkr_dis) |
+		FIELD_SET(QDMA_SW_CTXT_W1_IS_MM_MASK, ctxt->is_mm);
+
+	qdma_log_debug("%s: qen=%x, frcd_en=%x, wbi_chk=%x, wbi_intvl_en=%x\n",
+			 __func__, ctxt->qen, ctxt->frcd_en, ctxt->wbi_chk,
+			ctxt->wbi_intvl_en);
+
+	qdma_log_debug("%s: at=%x, fetch_max=%x, rngsz_idx=%x, desc_sz=%x\n",
+			__func__, ctxt->at, ctxt->fetch_max, ctxt->rngsz_idx,
+			ctxt->desc_sz);
+
+	qdma_log_debug("%s: bypass=%x, mm_chn=%x, wbk_en=%x, irq_en=%x\n",
+			__func__, ctxt->bypass, ctxt->mm_chn, ctxt->wbk_en,
+			ctxt->irq_en);
+
+	qdma_log_debug("%s: port_id=%x, irq_no_last=%x,err=%x",
+			__func__, ctxt->port_id, ctxt->irq_no_last, ctxt->err);
+	qdma_log_debug(", err_wb_sent=%x\n", ctxt->err_wb_sent);
+
+	qdma_log_debug("%s: irq_req=%x, mrkr_dis=%x, is_mm=%x\n",
+			__func__, ctxt->irq_req, ctxt->mrkr_dis, ctxt->is_mm);
+
+	sw_ctxt[num_words_count++] = ctxt->ring_bs_addr & 0xffffffff;
+	sw_ctxt[num_words_count++] = (ctxt->ring_bs_addr >> 32) & 0xffffffff;
+
+	sw_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_SW_CTXT_W4_VEC_MASK, ctxt->vec) |
+		FIELD_SET(QDMA_SW_CTXT_W4_INTR_AGGR_MASK, ctxt->intr_aggr);
+
+	qdma_log_debug("%s: vec=%x, intr_aggr=%x\n",
+			__func__, ctxt->vec, ctxt->intr_aggr);
+
+	return qdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			sw_ctxt, num_words_count);
+}
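+
+/*
+ * For reference, the five SW context words programmed above are:
+ *	W0: pidx, irq_arm, fnc_id
+ *	W1: queue control/status flags (qen ... is_mm)
+ *	W2: ring base address, lower 32 bits
+ *	W3: ring base address, upper 32 bits
+ *	W4: vec, intr_aggr
+ */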
+
+/*****************************************************************************/
+/**
+ * qdma_sw_context_read() - read sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	1 for a C2H queue, 0 for an H2C queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_sw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_sw_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t sw_ctxt[QDMA_SW_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle=%p sw_ctxt=%p NULL, err:%d\n",
+					   __func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_SW_CONTEXT_NUM_WORDS, sw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->pidx = FIELD_GET(QDMA_SW_CTXT_W0_PIDX, sw_ctxt[0]);
+	ctxt->irq_arm =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W0_IRQ_ARM_MASK, sw_ctxt[0]));
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W0_FUNC_ID_MASK, sw_ctxt[0]));
+
+	qdma_log_debug("%s: pidx=%x, irq_arm=%x, fnc_id=%x",
+			 __func__, ctxt->pidx, ctxt->irq_arm, ctxt->fnc_id);
+
+	ctxt->qen = FIELD_GET(QDMA_SW_CTXT_W1_QEN_MASK, sw_ctxt[1]);
+	ctxt->frcd_en = FIELD_GET(QDMA_SW_CTXT_W1_FCRD_EN_MASK, sw_ctxt[1]);
+	ctxt->wbi_chk = FIELD_GET(QDMA_SW_CTXT_W1_WBI_CHK_MASK, sw_ctxt[1]);
+	ctxt->wbi_intvl_en =
+		FIELD_GET(QDMA_SW_CTXT_W1_WB_INT_EN_MASK, sw_ctxt[1]);
+	ctxt->at = FIELD_GET(QDMA_SW_CTXT_W1_AT_MASK, sw_ctxt[1]);
+	ctxt->fetch_max =
+		FIELD_GET(QDMA_SW_CTXT_W1_FETCH_MAX_MASK, sw_ctxt[1]);
+	ctxt->rngsz_idx =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_RNG_SZ_MASK, sw_ctxt[1]));
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_DSC_SZ_MASK, sw_ctxt[1]));
+	ctxt->bypass =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_BYP_MASK, sw_ctxt[1]));
+	ctxt->mm_chn =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_MM_CHN_MASK, sw_ctxt[1]));
+	ctxt->wbk_en =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_WBK_EN_MASK, sw_ctxt[1]));
+	ctxt->irq_en =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_IRQ_EN_MASK, sw_ctxt[1]));
+	ctxt->port_id =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_PORT_ID_MASK, sw_ctxt[1]));
+	ctxt->irq_no_last =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_IRQ_NO_LAST_MASK,
+			sw_ctxt[1]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_ERR_MASK, sw_ctxt[1]));
+	ctxt->err_wb_sent =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_ERR_WB_SENT_MASK,
+			sw_ctxt[1]));
+	ctxt->irq_req =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_IRQ_REQ_MASK, sw_ctxt[1]));
+	ctxt->mrkr_dis =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_MRKR_DIS_MASK, sw_ctxt[1]));
+	ctxt->is_mm =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W1_IS_MM_MASK, sw_ctxt[1]));
+
+	qdma_log_debug("%s: qen=%x, frcd_en=%x, wbi_chk=%x, wbi_intvl_en=%x\n",
+			 __func__, ctxt->qen, ctxt->frcd_en, ctxt->wbi_chk,
+			ctxt->wbi_intvl_en);
+	qdma_log_debug("%s: at=%x, fetch_max=%x, rngsz_idx=%x, desc_sz=%x\n",
+			__func__, ctxt->at, ctxt->fetch_max, ctxt->rngsz_idx,
+			ctxt->desc_sz);
+	qdma_log_debug("%s: bypass=%x, mm_chn=%x, wbk_en=%x, irq_en=%x\n",
+			__func__, ctxt->bypass, ctxt->mm_chn, ctxt->wbk_en,
+			ctxt->irq_en);
+	qdma_log_debug("%s: port_id=%x, irq_no_last=%x,",
+			__func__, ctxt->port_id, ctxt->irq_no_last);
+	qdma_log_debug(" err=%x, err_wb_sent=%x\n",
+			ctxt->err, ctxt->err_wb_sent);
+	qdma_log_debug("%s: irq_req=%x, mrkr_dis=%x, is_mm=%x\n",
+			__func__, ctxt->irq_req, ctxt->mrkr_dis, ctxt->is_mm);
+
+	ctxt->ring_bs_addr = ((uint64_t)sw_ctxt[3] << 32) | (sw_ctxt[2]);
+
+	ctxt->vec = FIELD_GET(QDMA_SW_CTXT_W4_VEC_MASK, sw_ctxt[4]);
+	ctxt->intr_aggr =
+		(uint8_t)(FIELD_GET(QDMA_SW_CTXT_W4_INTR_AGGR_MASK,
+			sw_ctxt[4]));
+
+	qdma_log_debug("%s: vec=%x, intr_aggr=%x\n",
+			__func__, ctxt->vec, ctxt->intr_aggr);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_sw_context_clear() - clear sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	1 for a C2H queue, 0 for an H2C queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_sw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_sw_context_invalidate() - invalidate sw context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	1 for a C2H queue, 0 for an H2C queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_sw_context_invalidate(void *dev_hndl, uint8_t c2h,
+		uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ?
+			QDMA_CTXT_SEL_SW_C2H : QDMA_CTXT_SEL_SW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_sw_ctx_conf() - configure SW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	1 for a C2H queue, 0 for an H2C queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type:	HW access type (enum qdma_hw_access_type) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_sw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	/** c2h must be 0 (H2C) or 1 (C2H);
+	 *  return an error for any other value
+	 */
+	if (c2h > 1) {
+		qdma_log_error("%s: c2h(%d) invalid, err:%d\n",
+						__func__,
+						c2h,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_sw_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_sw_context_write(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_sw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_sw_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
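+
+/*
+ * Usage sketch (illustrative only): reading back the SW context of a
+ * C2H queue:
+ *
+ *	struct qdma_descq_sw_ctxt sw_ctxt;
+ *
+ *	if (qdma_sw_ctx_conf(dev_hndl, 1, hw_qid, &sw_ctxt,
+ *			QDMA_HW_ACCESS_READ) == QDMA_SUCCESS)
+ *		... sw_ctxt.ring_bs_addr, sw_ctxt.pidx, etc. are valid ...
+ */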
+
+/*****************************************************************************/
+/**
+ * qdma_pfetch_context_write() - create prefetch context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the prefetch context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_pfetch_context_write(void *dev_hndl, uint16_t hw_qid,
+		const struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	uint32_t pfetch_ctxt[QDMA_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+	uint16_t num_words_count = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or pfetch ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	sw_crdt_l =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK, ctxt->sw_crdt);
+	sw_crdt_h =
+		FIELD_GET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, ctxt->sw_crdt);
+
+	qdma_log_debug("%s: sw_crdt_l=%u, sw_crdt_h=%u, hw_qid=%u\n",
+			 __func__, sw_crdt_l, sw_crdt_h, hw_qid);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_BYPASS_MASK, ctxt->bypass) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_BUF_SIZE_IDX_MASK,
+				ctxt->bufsz_idx) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_PORT_ID_MASK, ctxt->port_id) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_ERR_MASK, ctxt->err) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_PFETCH_EN_MASK, ctxt->pfch_en) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_Q_IN_PFETCH_MASK, ctxt->pfch) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W0_SW_CRDT_L_MASK, sw_crdt_l);
+
+	qdma_log_debug("%s: bypass=%x, bufsz_idx=%x, port_id=%x\n",
+			__func__, ctxt->bypass, ctxt->bufsz_idx, ctxt->port_id);
+	qdma_log_debug("%s: err=%x, pfch_en=%x, pfch=%x, ctxt->valid=%x\n",
+			__func__, ctxt->err, ctxt->pfch_en, ctxt->pfch,
+			ctxt->valid);
+
+	pfetch_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_PFTCH_CTXT_W1_SW_CRDT_H_MASK, sw_crdt_h) |
+		FIELD_SET(QDMA_PFTCH_CTXT_W1_VALID_MASK, ctxt->valid);
+
+	return qdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			pfetch_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_pfetch_context_read() - read prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_pfetch_context_read(void *dev_hndl, uint16_t hw_qid,
+		struct qdma_descq_prefetch_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t pfetch_ctxt[QDMA_PFETCH_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+	uint32_t sw_crdt_l, sw_crdt_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or pfetch ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_PFETCH_CONTEXT_NUM_WORDS, pfetch_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->bypass =
+		FIELD_GET(QDMA_PFTCH_CTXT_W0_BYPASS_MASK, pfetch_ctxt[0]);
+	ctxt->bufsz_idx =
+		FIELD_GET(QDMA_PFTCH_CTXT_W0_BUF_SIZE_IDX_MASK, pfetch_ctxt[0]);
+	ctxt->port_id =
+		FIELD_GET(QDMA_PFTCH_CTXT_W0_PORT_ID_MASK, pfetch_ctxt[0]);
+	ctxt->err =
+		(uint8_t)(FIELD_GET(QDMA_PFTCH_CTXT_W0_ERR_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch_en =
+		(uint8_t)(FIELD_GET(QDMA_PFTCH_CTXT_W0_PFETCH_EN_MASK,
+			pfetch_ctxt[0]));
+	ctxt->pfch =
+		(uint8_t)(FIELD_GET(QDMA_PFTCH_CTXT_W0_Q_IN_PFETCH_MASK,
+				pfetch_ctxt[0]));
+	sw_crdt_l =
+		FIELD_GET(QDMA_PFTCH_CTXT_W0_SW_CRDT_L_MASK, pfetch_ctxt[0]);
+
+	sw_crdt_h =
+		FIELD_GET(QDMA_PFTCH_CTXT_W1_SW_CRDT_H_MASK, pfetch_ctxt[1]);
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(QDMA_PFTCH_CTXT_W1_VALID_MASK,
+			pfetch_ctxt[1]));
+
+	ctxt->sw_crdt =
+		FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_L_MASK, sw_crdt_l) |
+		FIELD_SET(QDMA_PFTCH_CTXT_SW_CRDT_GET_H_MASK, sw_crdt_h);
+
+	qdma_log_debug("%s: sw_crdt_l=%u, sw_crdt_h=%u, hw_qid=%u\n",
+			 __func__, sw_crdt_l, sw_crdt_h, hw_qid);
+	qdma_log_debug("%s: bypass=%x, bufsz_idx=%x, port_id=%x\n",
+			__func__, ctxt->bypass, ctxt->bufsz_idx, ctxt->port_id);
+	qdma_log_debug("%s: err=%x, pfch_en=%x, pfch=%x, ctxt->valid=%x\n",
+			__func__, ctxt->err, ctxt->pfch_en, ctxt->pfch,
+			ctxt->valid);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_pfetch_context_clear() - clear prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_pfetch_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_pfetch_context_invalidate() - invalidate prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_pfetch_context_invalidate(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_PFTCH;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_pfetch_ctx_conf() - configure prefetch context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+				struct qdma_descq_prefetch_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_pfetch_context_read(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_pfetch_context_write(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_pfetch_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_pfetch_context_invalidate(dev_hndl, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
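+
+/*
+ * Usage sketch (illustrative): program a minimal prefetch context for a C2H
+ * streaming queue. The field values are placeholders; real values depend on
+ * the queue configuration.
+ *
+ *	struct qdma_descq_prefetch_ctxt pfch_ctxt = {0};
+ *	int rv;
+ *
+ *	pfch_ctxt.bufsz_idx = 0;	// global buffer size index
+ *	pfch_ctxt.pfch_en = 1;		// enable descriptor prefetch
+ *	pfch_ctxt.valid = 1;
+ *	rv = qdma_pfetch_ctx_conf(dev_hndl, qid, &pfch_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */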
+
+/*****************************************************************************/
+/**
+ * qdma_cmpt_context_write() - create completion context and program it
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the cmpt context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_cmpt_context_write(void *dev_hndl, uint16_t hw_qid,
+			   const struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	uint32_t cmpt_ctxt[QDMA_CMPT_CONTEXT_NUM_WORDS] = {0};
+	uint16_t num_words_count = 0;
+	uint32_t baddr_l, baddr_h, pidx_l, pidx_h;
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	/* Input args check */
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or cmpt ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (ctxt->trig_mode > QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR) {
+		qdma_log_error("%s: trig_mode(%d) > (%d) is invalid, err:%d\n",
+					__func__,
+					ctxt->trig_mode,
+					QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR,
+					-QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr_l = (uint32_t)FIELD_GET(QDMA_COMPL_CTXT_BADDR_GET_L_MASK,
+			ctxt->bs_addr);
+	baddr_h = (uint32_t)FIELD_GET(QDMA_COMPL_CTXT_BADDR_GET_H_MASK,
+			ctxt->bs_addr);
+	pidx_l = FIELD_GET(QDMA_COMPL_CTXT_PIDX_GET_L_MASK, ctxt->pidx);
+	pidx_h = FIELD_GET(QDMA_COMPL_CTXT_PIDX_GET_H_MASK, ctxt->pidx);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_COMPL_CTXT_W0_EN_STAT_DESC_MASK,
+				ctxt->en_stat_desc) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_EN_INT_MASK, ctxt->en_int) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_TRIG_MODE_MASK, ctxt->trig_mode) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_FNC_ID_MASK, ctxt->fnc_id) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_COUNTER_IDX_MASK,
+				ctxt->counter_idx) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_TIMER_IDX_MASK, ctxt->timer_idx) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_INT_ST_MASK, ctxt->in_st) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_COLOR_MASK, ctxt->color) |
+		FIELD_SET(QDMA_COMPL_CTXT_W0_RING_SZ_MASK, ctxt->ringsz_idx);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_COMPL_CTXT_W1_BADDR_64_L_MASK, baddr_l);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_COMPL_CTXT_W2_BADDR_64_H_MASK, baddr_h) |
+		FIELD_SET(QDMA_COMPL_CTXT_W2_DESC_SIZE_MASK, ctxt->desc_sz) |
+		FIELD_SET(QDMA_COMPL_CTXT_W2_PIDX_L_MASK, pidx_l);
+
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_COMPL_CTXT_W3_PIDX_H_MASK, pidx_h) |
+		FIELD_SET(QDMA_COMPL_CTXT_W3_CIDX_MASK, ctxt->cidx) |
+		FIELD_SET(QDMA_COMPL_CTXT_W3_VALID_MASK, ctxt->valid) |
+		FIELD_SET(QDMA_COMPL_CTXT_W3_ERR_MASK, ctxt->err) |
+		FIELD_SET(QDMA_COMPL_CTXT_W3_USR_TRG_PND_MASK,
+				ctxt->user_trig_pend);
+
+	cmpt_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_COMPL_CTXT_W4_TMR_RUN_MASK,
+				ctxt->timer_running) |
+		FIELD_SET(QDMA_COMPL_CTXT_W4_FULL_UPDT_MASK, ctxt->full_upd) |
+		FIELD_SET(QDMA_COMPL_CTXT_W4_OVF_CHK_DIS_MASK,
+				ctxt->ovf_chk_dis) |
+		FIELD_SET(QDMA_COMPL_CTXT_W4_AT_MASK, ctxt->at) |
+		FIELD_SET(QDMA_COMPL_CTXT_W4_INTR_VEC_MASK, ctxt->vec) |
+		FIELD_SET(QDMA_COMPL_CTXT_W4_INTR_AGGR_MASK, ctxt->int_aggr);
+
+	return qdma_indirect_reg_write(dev_hndl, sel, hw_qid,
+			cmpt_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_cmpt_context_read() - read completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_cmpt_context_read(void *dev_hndl, uint16_t hw_qid,
+			   struct qdma_descq_cmpt_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t cmpt_ctxt[QDMA_CMPT_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+	uint32_t baddr_l, baddr_h, pidx_l, pidx_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or cmpt ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_CMPT_CONTEXT_NUM_WORDS, cmpt_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->en_stat_desc =
+		FIELD_GET(QDMA_COMPL_CTXT_W0_EN_STAT_DESC_MASK, cmpt_ctxt[0]);
+	ctxt->en_int = FIELD_GET(QDMA_COMPL_CTXT_W0_EN_INT_MASK, cmpt_ctxt[0]);
+	ctxt->trig_mode =
+		FIELD_GET(QDMA_COMPL_CTXT_W0_TRIG_MODE_MASK, cmpt_ctxt[0]);
+	ctxt->fnc_id =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_FNC_ID_MASK,
+			cmpt_ctxt[0]));
+	ctxt->counter_idx =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_COUNTER_IDX_MASK,
+			cmpt_ctxt[0]));
+	ctxt->timer_idx =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_TIMER_IDX_MASK,
+			cmpt_ctxt[0]));
+	ctxt->in_st =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_INT_ST_MASK,
+			cmpt_ctxt[0]));
+	ctxt->color =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_COLOR_MASK,
+			cmpt_ctxt[0]));
+	ctxt->ringsz_idx =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W0_RING_SZ_MASK,
+			cmpt_ctxt[0]));
+
+	baddr_l = FIELD_GET(QDMA_COMPL_CTXT_W1_BADDR_64_L_MASK, cmpt_ctxt[1]);
+
+	baddr_h = FIELD_GET(QDMA_COMPL_CTXT_W2_BADDR_64_H_MASK, cmpt_ctxt[2]);
+	ctxt->desc_sz =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W2_DESC_SIZE_MASK,
+			cmpt_ctxt[2]));
+	pidx_l = FIELD_GET(QDMA_COMPL_CTXT_W2_PIDX_L_MASK, cmpt_ctxt[2]);
+
+	pidx_h = FIELD_GET(QDMA_COMPL_CTXT_W3_PIDX_H_MASK, cmpt_ctxt[3]);
+	ctxt->cidx =
+		(uint16_t)(FIELD_GET(QDMA_COMPL_CTXT_W3_CIDX_MASK,
+			cmpt_ctxt[3]));
+	ctxt->valid =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W3_VALID_MASK,
+			cmpt_ctxt[3]));
+	ctxt->err =
+		(uint8_t)(FIELD_GET(QDMA_COMPL_CTXT_W3_ERR_MASK, cmpt_ctxt[3]));
+	ctxt->user_trig_pend = (uint8_t)
+		(FIELD_GET(QDMA_COMPL_CTXT_W3_USR_TRG_PND_MASK, cmpt_ctxt[3]));
+
+	ctxt->timer_running =
+		FIELD_GET(QDMA_COMPL_CTXT_W4_TMR_RUN_MASK, cmpt_ctxt[4]);
+	ctxt->full_upd =
+		FIELD_GET(QDMA_COMPL_CTXT_W4_FULL_UPDT_MASK, cmpt_ctxt[4]);
+	ctxt->ovf_chk_dis =
+		FIELD_GET(QDMA_COMPL_CTXT_W4_OVF_CHK_DIS_MASK, cmpt_ctxt[4]);
+	ctxt->at = FIELD_GET(QDMA_COMPL_CTXT_W4_AT_MASK, cmpt_ctxt[4]);
+	ctxt->vec = FIELD_GET(QDMA_COMPL_CTXT_W4_INTR_VEC_MASK, cmpt_ctxt[4]);
+	ctxt->int_aggr = (uint8_t)
+		(FIELD_GET(QDMA_COMPL_CTXT_W4_INTR_AGGR_MASK, cmpt_ctxt[4]));
+
+	ctxt->bs_addr =
+		FIELD_SET(QDMA_COMPL_CTXT_BADDR_GET_L_MASK, (uint64_t)baddr_l) |
+		FIELD_SET(QDMA_COMPL_CTXT_BADDR_GET_H_MASK, (uint64_t)baddr_h);
+
+	ctxt->pidx =
+		FIELD_SET(QDMA_COMPL_CTXT_PIDX_GET_L_MASK, pidx_l) |
+		FIELD_SET(QDMA_COMPL_CTXT_PIDX_GET_H_MASK, pidx_h);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_cmpt_context_clear() - clear completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_cmpt_context_clear(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_cmpt_context_invalidate() - invalidate completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_cmpt_context_invalidate(void *dev_hndl, uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_CMPT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_cmpt_ctx_conf() - configure completion context
+ *
+ * @dev_hndl:	device handle
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_cmpt_context_read(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_cmpt_context_write(dev_hndl, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_cmpt_context_clear(dev_hndl, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_cmpt_context_invalidate(dev_hndl, hw_qid);
+		break;
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
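+
+/*
+ * Usage sketch (illustrative): program a completion ring context. The base
+ * address, ring size index and function id are placeholders for values the
+ * caller allocated or derived.
+ *
+ *	struct qdma_descq_cmpt_ctxt cmpt_ctxt = {0};
+ *	int rv;
+ *
+ *	cmpt_ctxt.bs_addr = cmpt_ring_phys_addr;
+ *	cmpt_ctxt.ringsz_idx = ring_idx;
+ *	cmpt_ctxt.fnc_id = func_id;
+ *	cmpt_ctxt.trig_mode = QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR;
+ *	cmpt_ctxt.valid = 1;
+ *	rv = qdma_cmpt_ctx_conf(dev_hndl, qid, &cmpt_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */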
+
+/*****************************************************************************/
+/**
+ * qdma_hw_context_read() - read hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_hw_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid, struct qdma_descq_hw_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t hw_ctxt[QDMA_HW_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_handle or hw_ctxt NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_HW_CONTEXT_NUM_WORDS, hw_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->cidx = FIELD_GET(QDMA_HW_CTXT_W0_CIDX_MASK, hw_ctxt[0]);
+	ctxt->crd_use =
+		(uint16_t)(FIELD_GET(QDMA_HW_CTXT_W0_CRD_USE_MASK, hw_ctxt[0]));
+
+	ctxt->dsc_pend =
+		(uint8_t)(FIELD_GET(QDMA_HW_CTXT_W1_DSC_PND_MASK, hw_ctxt[1]));
+	ctxt->idl_stp_b =
+		(uint8_t)(FIELD_GET(QDMA_HW_CTXT_W1_IDL_STP_B_MASK,
+			hw_ctxt[1]));
+	ctxt->evt_pnd =
+		(uint8_t)(FIELD_GET(QDMA_HW_CTXT_W1_EVENT_PEND_MASK,
+			hw_ctxt[1]));
+	ctxt->fetch_pnd = (uint8_t)
+		(FIELD_GET(QDMA_HW_CTXT_W1_FETCH_PEND_MASK, hw_ctxt[1]));
+
+	qdma_log_debug("%s: cidx=%u, crd_use=%u, dsc_pend=%x\n",
+			__func__, ctxt->cidx, ctxt->crd_use, ctxt->dsc_pend);
+	qdma_log_debug("%s: idl_stp_b=%x, evt_pnd=%x, fetch_pnd=%x\n",
+			__func__, ctxt->idl_stp_b, ctxt->evt_pnd,
+			ctxt->fetch_pnd);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_context_clear() - clear hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_hw_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_context_invalidate() - invalidate hardware context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_hw_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_HW_C2H :
+			QDMA_CTXT_SEL_HW_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_ctx_conf() - configure HW context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	/** c2h must be 0 (H2C) or 1 (C2H);
+	 *  return an error for any other value
+	 */
+	if (c2h > 1) {
+		qdma_log_error("%s: c2h(%d) invalid, err:%d\n",
+						__func__,
+						c2h,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_hw_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_hw_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_hw_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
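+
+/*
+ * Note that the hardware context is owned by the device, so only READ,
+ * CLEAR and INVALIDATE are accepted here. A teardown sketch (illustrative):
+ *
+ *	rv = qdma_hw_ctx_conf(dev_hndl, 1, qid, NULL,
+ *			QDMA_HW_ACCESS_INVALIDATE);
+ */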
+
+/*****************************************************************************/
+/**
+ * qdma_credit_context_read() - read credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_credit_context_read(void *dev_hndl, uint8_t c2h,
+			 uint16_t hw_qid,
+			 struct qdma_descq_credit_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t cr_ctxt[QDMA_CR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p credit_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, hw_qid,
+			QDMA_CR_CONTEXT_NUM_WORDS, cr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->credit = FIELD_GET(QDMA_CR_CTXT_W0_CREDT_MASK, cr_ctxt[0]);
+
+	qdma_log_debug("%s: credit=%u\n", __func__, ctxt->credit);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_credit_context_clear() - clear credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_credit_context_clear(void *dev_hndl, uint8_t c2h,
+			  uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_credit_context_invalidate() - invalidate credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_credit_context_invalidate(void *dev_hndl, uint8_t c2h,
+				   uint16_t hw_qid)
+{
+	enum ind_ctxt_cmd_sel sel = c2h ? QDMA_CTXT_SEL_CR_C2H :
+			QDMA_CTXT_SEL_CR_H2C;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, hw_qid);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_credit_ctx_conf() - configure credit context
+ *
+ * @dev_hndl:	device handle
+ * @c2h:	is c2h queue
+ * @hw_qid:	hardware qid of the queue
+ * @ctxt:	pointer to the context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_WRITE Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_credit_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			struct qdma_descq_credit_ctxt *ctxt,
+			enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	/** c2h must be 0 (H2C) or 1 (C2H);
+	 *  return an error for any other value
+	 */
+	if (c2h > 1) {
+		qdma_log_error("%s: c2h(%d) invalid, err:%d\n",
+						__func__,
+						c2h,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_credit_context_read(dev_hndl, c2h, hw_qid, ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_credit_context_clear(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_credit_context_invalidate(dev_hndl, c2h, hw_qid);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+	default:
+		qdma_log_error("%s: Invalid access type=%d, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
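+
+/*
+ * Usage sketch (illustrative): read the credits consumed by an H2C queue.
+ *
+ *	struct qdma_descq_credit_ctxt cr_ctxt;
+ *
+ *	rv = qdma_credit_ctx_conf(dev_hndl, 0, qid, &cr_ctxt,
+ *			QDMA_HW_ACCESS_READ);
+ *	if (rv == QDMA_SUCCESS)
+ *		qdma_log_debug("credits: %u\n", cr_ctxt.credit);
+ */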
+
+/*****************************************************************************/
+/**
+ * qdma_indirect_intr_context_write() - create indirect interrupt context
+ *					and program it
+ *
+ * @dev_hndl:   device handle
+ * @ring_index: indirect interrupt ring index
+ * @ctxt:	pointer to the interrupt context data structure
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_indirect_intr_context_write(void *dev_hndl, uint16_t ring_index,
+		const struct qdma_indirect_intr_ctxt *ctxt)
+{
+	uint32_t intr_ctxt[QDMA_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint32_t baddr_l, baddr_m, baddr_h;
+	uint16_t num_words_count = 0;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	baddr_l = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_L_MASK,
+			ctxt->baddr_4k);
+	baddr_m = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_M_MASK,
+			ctxt->baddr_4k);
+	baddr_h = (uint32_t)FIELD_GET(QDMA_INTR_CTXT_BADDR_GET_H_MASK,
+			ctxt->baddr_4k);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_INTR_CTXT_W0_VALID_MASK, ctxt->valid) |
+		FIELD_SET(QDMA_INTR_CTXT_W0_VEC_ID_MASK, ctxt->vec) |
+		FIELD_SET(QDMA_INTR_CTXT_W0_INT_ST_MASK, ctxt->int_st) |
+		FIELD_SET(QDMA_INTR_CTXT_W0_COLOR_MASK, ctxt->color) |
+		FIELD_SET(QDMA_INTR_CTXT_W0_BADDR_64_MASK, baddr_l);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_INTR_CTXT_W1_BADDR_64_MASK, baddr_m);
+
+	intr_ctxt[num_words_count++] =
+		FIELD_SET(QDMA_INTR_CTXT_W2_BADDR_64_MASK, baddr_h) |
+		FIELD_SET(QDMA_INTR_CTXT_W2_PAGE_SIZE_MASK, ctxt->page_size) |
+		FIELD_SET(QDMA_INTR_CTXT_W2_PIDX_MASK, ctxt->pidx) |
+		FIELD_SET(QDMA_INTR_CTXT_W2_AT_MASK, ctxt->at);
+
+	return qdma_indirect_reg_write(dev_hndl, sel, ring_index,
+			intr_ctxt, num_words_count);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_indirect_intr_context_read() - read indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to the output context data
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_indirect_intr_context_read(void *dev_hndl, uint16_t ring_index,
+				   struct qdma_indirect_intr_ctxt *ctxt)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t intr_ctxt[QDMA_IND_INTR_CONTEXT_NUM_WORDS] = {0};
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+	uint64_t baddr_l, baddr_m, baddr_h;
+
+	if (!dev_hndl || !ctxt) {
+		qdma_log_error("%s: dev_hndl=%p intr_ctxt=%p, err:%d\n",
+						__func__, dev_hndl, ctxt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_indirect_reg_read(dev_hndl, sel, ring_index,
+			QDMA_IND_INTR_CONTEXT_NUM_WORDS, intr_ctxt);
+	if (rv < 0)
+		return rv;
+
+	ctxt->valid = FIELD_GET(QDMA_INTR_CTXT_W0_VALID_MASK, intr_ctxt[0]);
+	ctxt->vec = FIELD_GET(QDMA_INTR_CTXT_W0_VEC_ID_MASK, intr_ctxt[0]);
+	ctxt->int_st =
+		(uint8_t)(FIELD_GET(QDMA_INTR_CTXT_W0_INT_ST_MASK,
+			intr_ctxt[0]));
+	ctxt->color =
+		(uint8_t)(FIELD_GET(QDMA_INTR_CTXT_W0_COLOR_MASK,
+			intr_ctxt[0]));
+
+	baddr_l = FIELD_GET(QDMA_INTR_CTXT_W0_BADDR_64_MASK, intr_ctxt[0]);
+
+	baddr_m = FIELD_GET(QDMA_INTR_CTXT_W1_BADDR_64_MASK, intr_ctxt[1]);
+
+	baddr_h = FIELD_GET(QDMA_INTR_CTXT_W2_BADDR_64_MASK, intr_ctxt[2]);
+	ctxt->page_size =
+		FIELD_GET(QDMA_INTR_CTXT_W2_PAGE_SIZE_MASK, intr_ctxt[2]);
+	ctxt->pidx =
+		(uint16_t)(FIELD_GET(QDMA_INTR_CTXT_W2_PIDX_MASK,
+			intr_ctxt[2]));
+	ctxt->at =
+		(uint8_t)(FIELD_GET(QDMA_INTR_CTXT_W2_AT_MASK, intr_ctxt[2]));
+
+	ctxt->baddr_4k =
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_L_MASK, baddr_l) |
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_M_MASK, baddr_m) |
+		FIELD_SET(QDMA_INTR_CTXT_BADDR_GET_H_MASK, baddr_h);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_indirect_intr_context_clear() - clear indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_indirect_intr_context_clear(void *dev_hndl, uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_clear(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_indirect_intr_context_invalidate() - invalidate indirect interrupt
+ * context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_indirect_intr_context_invalidate(void *dev_hndl,
+					  uint16_t ring_index)
+{
+	enum ind_ctxt_cmd_sel sel = QDMA_CTXT_SEL_INT_COAL;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return qdma_indirect_reg_invalidate(dev_hndl, sel, ring_index);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_indirect_intr_ctx_conf() - configure indirect interrupt context
+ *
+ * @dev_hndl:	device handle
+ * @ring_index:	indirect interrupt ring index
+ * @ctxt:	pointer to context data
+ * @access_type:	HW access type (qdma_hw_access_type enum) value
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+				struct qdma_indirect_intr_ctxt *ctxt,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_indirect_intr_context_read(dev_hndl, ring_index,
+							ctxt);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_indirect_intr_context_write(dev_hndl, ring_index,
+							ctxt);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+		rv = qdma_indirect_intr_context_clear(dev_hndl,
+							ring_index);
+		break;
+	case QDMA_HW_ACCESS_INVALIDATE:
+		rv = qdma_indirect_intr_context_invalidate(dev_hndl,
+								ring_index);
+		break;
+	default:
+		qdma_log_error("%s: access_type=%d is invalid, err:%d\n",
+					   __func__, access_type,
+					   -QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
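+
+/*
+ * Usage sketch (illustrative): set up one interrupt aggregation ring. The
+ * ring base address and MSI-X vector are placeholders for caller-allocated
+ * values.
+ *
+ *	struct qdma_indirect_intr_ctxt intr_ctxt = {0};
+ *
+ *	intr_ctxt.valid = 1;
+ *	intr_ctxt.vec = msix_vec;
+ *	intr_ctxt.baddr_4k = intr_ring_phys_addr;
+ *	intr_ctxt.page_size = 0;	// ring page size index
+ *	rv = qdma_indirect_intr_ctx_conf(dev_hndl, ring_index, &intr_ctxt,
+ *			QDMA_HW_ACCESS_WRITE);
+ */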
+
+/*****************************************************************************/
+/**
+ * qdma_set_default_global_csr() - function to set the global CSR registers to
+ * default values. The values can be modified later using the set/get CSR
+ * functions
+ *
+ * @dev_hndl:	device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_set_default_global_csr(void *dev_hndl)
+{
+	/* Default values */
+	uint32_t cfg_val = 0, reg_val = 0;
+	uint32_t rng_sz[QDMA_NUM_RING_SIZES] = {2049, 65, 129, 193, 257, 385,
+		513, 769, 1025, 1537, 3073, 4097, 6145, 8193, 12289, 16385};
+	uint32_t tmr_cnt[QDMA_NUM_C2H_TIMERS] = {1, 2, 4, 5, 8, 10, 15, 20, 25,
+		30, 50, 75, 100, 125, 150, 200};
+	uint32_t cnt_th[QDMA_NUM_C2H_COUNTERS] = {2, 4, 8, 16, 24, 32, 48, 64,
+		80, 96, 112, 128, 144, 160, 176, 192};
+	uint32_t buf_sz[QDMA_NUM_C2H_BUFFER_SIZES] = {4096, 256, 512, 1024,
+		2048, 3968, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 8192,
+		9018, 16384};
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	/* Configuring CSR registers */
+	/* Global ring sizes */
+	qdma_write_csr_values(dev_hndl, QDMA_OFFSET_GLBL_RNG_SZ, 0,
+			QDMA_NUM_RING_SIZES, rng_sz);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		/* Counter thresholds */
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_CNT_TH, 0,
+				QDMA_NUM_C2H_COUNTERS, cnt_th);
+
+		/* Timer Counters */
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_TIMER_CNT, 0,
+				QDMA_NUM_C2H_TIMERS, tmr_cnt);
+
+
+		/* Writeback Interval */
+		reg_val =
+			FIELD_SET(QDMA_GLBL_DSC_CFG_MAX_DSC_FETCH_MASK,
+					DEFAULT_MAX_DSC_FETCH) |
+			FIELD_SET(QDMA_GLBL_DSC_CFG_WB_ACC_INT_MASK,
+					DEFAULT_WRB_INT);
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_DSC_CFG, reg_val);
+	}
+
+	if (dev_cap.st_en) {
+		/* Buffer Sizes */
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_BUF_SZ, 0,
+				QDMA_NUM_C2H_BUFFER_SIZES, buf_sz);
+
+		/* Prefetch Configuration */
+		cfg_val = qdma_reg_read(dev_hndl,
+				QDMA_OFFSET_C2H_PFETCH_CACHE_DEPTH);
+		reg_val =
+			FIELD_SET(QDMA_C2H_PFCH_FL_TH_MASK,
+					DEFAULT_PFCH_STOP_THRESH) |
+			FIELD_SET(QDMA_C2H_NUM_PFCH_MASK,
+					DEFAULT_PFCH_NUM_ENTRIES_PER_Q) |
+			FIELD_SET(QDMA_C2H_PFCH_QCNT_MASK, (cfg_val >> 1)) |
+			FIELD_SET(QDMA_C2H_EVT_QCNT_TH_MASK,
+					((cfg_val >> 1) - 2));
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_C2H_PFETCH_CFG, reg_val);
+
+		/* C2H interrupt timer tick */
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_C2H_INT_TIMER_TICK,
+				DEFAULT_C2H_INTR_TIMER_TICK);
+
+		/* C2h Completion Coalesce Configuration */
+		cfg_val = qdma_reg_read(dev_hndl,
+				QDMA_OFFSET_C2H_CMPT_COAL_BUF_DEPTH);
+		reg_val =
+			FIELD_SET(QDMA_C2H_TICK_CNT_MASK,
+					DEFAULT_CMPT_COAL_TIMER_CNT) |
+			FIELD_SET(QDMA_C2H_TICK_VAL_MASK,
+					DEFAULT_CMPT_COAL_TIMER_TICK) |
+			FIELD_SET(QDMA_C2H_MAX_BUF_SZ_MASK, cfg_val);
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_C2H_WRB_COAL_CFG, reg_val);
+
+		/* H2C throttle Configuration */
+		reg_val =
+			FIELD_SET(QDMA_H2C_DATA_THRESH_MASK,
+					QDMA_H2C_THROT_DATA_THRESH) |
+			FIELD_SET(QDMA_H2C_REQ_THROT_EN_DATA_MASK,
+					QDMA_THROT_EN_DATA) |
+			FIELD_SET(QDMA_H2C_REQ_THRESH_MASK,
+					QDMA_H2C_THROT_REQ_THRESH) |
+			FIELD_SET(QDMA_H2C_REQ_THROT_EN_REQ_MASK,
+					QDMA_THROT_EN_REQ);
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_H2C_REQ_THROT, reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
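+
+/*
+ * Typically called once during device bring-up, before any queue context is
+ * programmed, e.g. (illustrative):
+ *
+ *	rv = qdma_set_default_global_csr(dev_hndl);
+ *	if (rv < 0)
+ *		return rv;
+ */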
+
+/*****************************************************************************/
+/**
+ * qdma_queue_pidx_update() - function to update the desc PIDX
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @is_c2h:	Queue direction. Set 1 for C2H and 0 for H2C
+ * @reg_info:	data needed for the PIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_queue_pidx_update(void *dev_hndl, uint8_t is_vf, uint16_t qid,
+		uint8_t is_c2h, const struct qdma_q_pidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = 0;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!is_vf) {
+		reg_addr = (is_c2h) ?  QDMA_OFFSET_DMAP_SEL_C2H_DSC_PIDX :
+			QDMA_OFFSET_DMAP_SEL_H2C_DSC_PIDX;
+	} else {
+		reg_addr = (is_c2h) ?  QDMA_OFFSET_VF_DMAP_SEL_C2H_DSC_PIDX :
+			QDMA_OFFSET_VF_DMAP_SEL_H2C_DSC_PIDX;
+	}
+
+	reg_addr += (qid * QDMA_PIDX_STEP);
+
+	reg_val = FIELD_SET(QDMA_DMA_SEL_DESC_PIDX_MASK, reg_info->pidx) |
+			  FIELD_SET(QDMA_DMA_SEL_IRQ_EN_MASK,
+			  reg_info->irq_en);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
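+
+/*
+ * Usage sketch (illustrative): ring the C2H descriptor PIDX doorbell of a PF
+ * queue after refilling the Rx ring. "new_pidx" is the caller's producer
+ * index.
+ *
+ *	struct qdma_q_pidx_reg_info pidx_info = {0};
+ *
+ *	pidx_info.pidx = new_pidx;
+ *	pidx_info.irq_en = 0;
+ *	rv = qdma_queue_pidx_update(dev_hndl, 0, qid, 1, &pidx_info);
+ */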
+
+/*****************************************************************************/
+/**
+ * qdma_queue_cmpt_cidx_update() - function to update the CMPT CIDX
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @reg_info:	data needed for the CIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_queue_cmpt_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_q_cmpt_cidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = (is_vf) ? QDMA_OFFSET_VF_DMAP_SEL_CMPT_CIDX :
+		QDMA_OFFSET_DMAP_SEL_CMPT_CIDX;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+						__func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_addr += (qid * QDMA_CMPT_CIDX_STEP);
+
+	reg_val =
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_WRB_CIDX_MASK,
+				reg_info->wrb_cidx) |
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_CNT_THRESH_MASK,
+				reg_info->counter_idx) |
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_TMR_CNT_MASK,
+				reg_info->timer_idx) |
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_TRG_MODE_MASK,
+				reg_info->trig_mode) |
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_STS_DESC_EN_MASK,
+				reg_info->wrb_en) |
+		FIELD_SET(QDMA_DMAP_SEL_CMPT_IRQ_EN_MASK, reg_info->irq_en);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
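+
+/*
+ * Usage sketch (illustrative): acknowledge processed completion entries by
+ * advancing the CMPT CIDX. "new_cidx" is the caller's consumer index.
+ *
+ *	struct qdma_q_cmpt_cidx_reg_info cidx_info = {0};
+ *
+ *	cidx_info.wrb_cidx = new_cidx;
+ *	cidx_info.wrb_en = 1;
+ *	rv = qdma_queue_cmpt_cidx_update(dev_hndl, 0, qid, &cidx_info);
+ */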
+
+/*****************************************************************************/
+/**
+ * qdma_queue_intr_cidx_update() - function to update the interrupt ring CIDX
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @qid:	Queue id relative to the PF/VF calling this API
+ * @reg_info:	data needed for the CIDX register update
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_intr_cidx_reg_info *reg_info)
+{
+	uint32_t reg_addr = (is_vf) ? QDMA_OFFSET_VF_DMAP_SEL_INT_CIDX :
+		QDMA_OFFSET_DMAP_SEL_INT_CIDX;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_info) {
+		qdma_log_error("%s: reg_info is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_addr += qid * QDMA_INT_CIDX_STEP;
+
+	reg_val =
+		FIELD_SET(QDMA_DMA_SEL_INT_SW_CIDX_MASK, reg_info->sw_cidx) |
+		FIELD_SET(QDMA_DMA_SEL_INT_RING_IDX_MASK, reg_info->rng_idx);
+
+	qdma_reg_write(dev_hndl, reg_addr, reg_val);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_get_user_bar() - Function to get the AXI Master Lite (user bar) number
+ *
+ * @dev_hndl:	device handle
+ * @is_vf:	Whether PF or VF
+ * @func_id:	function id of the PF
+ * @user_bar:	pointer to hold the AXI Master Lite number
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar)
+{
+	uint8_t bar_found = 0;
+	uint8_t bar_idx = 0;
+	uint32_t user_bar_id = 0;
+	uint32_t reg_addr = (is_vf) ?  QDMA_OFFSET_VF_USER_BAR_ID :
+			QDMA_OFFSET_GLBL2_PF_BARLITE_EXT;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!user_bar) {
+		qdma_log_error("%s: user_bar is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	user_bar_id = qdma_reg_read(dev_hndl, reg_addr);
+
+	if (!is_vf)
+		user_bar_id = (user_bar_id >> (6 * func_id)) & 0x3F;
+	else
+		user_bar_id = user_bar_id & 0x3F;
+
+	for (bar_idx = 0; bar_idx < QDMA_BAR_NUM; bar_idx++) {
+		if (user_bar_id & (1 << bar_idx)) {
+			*user_bar = bar_idx;
+			bar_found = 1;
+			break;
+		}
+	}
+	if (bar_found == 0) {
+		*user_bar = 0;
+		qdma_log_error("%s: Bar not found, vf:%d, usrbar:%d, err:%d\n",
+					   __func__,
+					   is_vf,
+					   *user_bar,
+					   -QDMA_ERR_HWACC_BAR_NOT_FOUND);
+		return -QDMA_ERR_HWACC_BAR_NOT_FOUND;
+	}
+
+	return QDMA_SUCCESS;
+}
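+
+/*
+ * Usage sketch (illustrative): discover the AXI Master Lite BAR of PF 0.
+ *
+ *	uint8_t user_bar = 0;
+ *
+ *	rv = qdma_get_user_bar(dev_hndl, 0, 0, &user_bar);
+ *	if (rv == QDMA_SUCCESS)
+ *		qdma_log_debug("user bar: %u\n", user_bar);
+ */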
+
+/*****************************************************************************/
+/**
+ * qdma_get_device_attributes() - Function to get the qdma device attributes
+ *
+ * @dev_hndl:	device handle
+ * @dev_info:	pointer to hold the device info
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info)
+{
+	uint8_t count = 0;
+	uint32_t reg_val = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!dev_info) {
+		qdma_log_error("%s: dev_info is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	/* number of PFs */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL2_PF_BARLITE_INT);
+	if (FIELD_GET(QDMA_GLBL2_PF0_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF1_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF2_BAR_MAP_MASK, reg_val))
+		count++;
+	if (FIELD_GET(QDMA_GLBL2_PF3_BAR_MAP_MASK, reg_val))
+		count++;
+	dev_info->num_pfs = count;
+
+	/* Number of Qs */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL2_CHANNEL_QDMA_CAP);
+	dev_info->num_qs = FIELD_GET(QDMA_GLBL2_MULTQ_MAX_MASK, reg_val);
+
+	/* FLR present */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL2_MISC_CAP);
+	dev_info->mailbox_en  = FIELD_GET(QDMA_GLBL2_MAILBOX_EN_MASK, reg_val);
+	dev_info->flr_present = FIELD_GET(QDMA_GLBL2_FLR_PRESENT_MASK, reg_val);
+	dev_info->mm_cmpt_en  = FIELD_GET(QDMA_GLBL2_MM_CMPT_EN_MASK, reg_val);
+
+	/* ST/MM enabled? */
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL2_CHANNEL_MDMA);
+	dev_info->mm_en = (FIELD_GET(QDMA_GLBL2_MM_C2H_MASK, reg_val) &&
+		FIELD_GET(QDMA_GLBL2_MM_H2C_MASK, reg_val)) ? 1 : 0;
+	dev_info->st_en = (FIELD_GET(QDMA_GLBL2_ST_C2H_MASK, reg_val) &&
+		FIELD_GET(QDMA_GLBL2_ST_H2C_MASK, reg_val)) ? 1 : 0;
+
+	/* num of mm channels */
+	/* TODO : Register not yet defined for this. Hard coding it to 1.*/
+	dev_info->mm_channel_max = 1;
+
+	dev_info->debug_mode = 0;
+	dev_info->desc_eng_mode = 0;
+	dev_info->qid2vec_ctx = 0;
+	dev_info->cmpt_ovf_chk_dis = 1;
+	dev_info->mailbox_intr = 1;
+	dev_info->sw_desc_64b = 1;
+	dev_info->cmpt_desc_64b = 1;
+	dev_info->dynamic_bar = 1;
+	dev_info->legacy_intr = 1;
+	dev_info->cmpt_trig_count_timer = 1;
+
+	return QDMA_SUCCESS;
+}
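+
+/*
+ * Callers typically use the returned attributes to gate feature-specific
+ * setup, e.g. (illustrative; setup_st_queues() is a hypothetical helper):
+ *
+ *	struct qdma_dev_attributes dev_cap;
+ *
+ *	qdma_get_device_attributes(dev_hndl, &dev_cap);
+ *	if (dev_cap.st_en)
+ *		setup_st_queues(dev_hndl);
+ */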
+
+/*****************************************************************************/
+/**
+ * qdma_hw_ram_sbe_err_process() - Function to dump SBE error debug information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_ram_sbe_err_process(void *dev_hndl)
+{
+	qdma_dump_reg_info(dev_hndl, QDMA_OFFSET_RAM_SBE_STAT,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_ram_dbe_err_process() - Function to dump DBE error debug information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_ram_dbe_err_process(void *dev_hndl)
+{
+	qdma_dump_reg_info(dev_hndl, QDMA_OFFSET_RAM_DBE_STAT,
+						1, NULL, 0);
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_desc_err_process() - Function to dump Descriptor Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_desc_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t desc_err_reg_list[] = {
+		QDMA_OFFSET_GLBL_DSC_ERR_STS,
+		QDMA_OFFSET_GLBL_DSC_ERR_LOG0,
+		QDMA_OFFSET_GLBL_DSC_ERR_LOG1,
+		QDMA_OFFSET_GLBL_DSC_DBG_DAT0,
+		QDMA_OFFSET_GLBL_DSC_DBG_DAT1,
+		QDMA_OFFSET_GLBL_DSC_ERR_LOG2
+	};
+	int desc_err_num_regs = sizeof(desc_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < desc_err_num_regs; i++) {
+		qdma_dump_reg_info(dev_hndl, desc_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_trq_err_process() - Function to dump Target Access Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_trq_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t trq_err_reg_list[] = {
+		QDMA_OFFSET_GLBL_TRQ_ERR_STS,
+		QDMA_OFFSET_GLBL_TRQ_ERR_LOG
+	};
+	int trq_err_reg_num_regs = sizeof(trq_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < trq_err_reg_num_regs; i++) {
+		qdma_dump_reg_info(dev_hndl, trq_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_st_h2c_err_process() - Function to dump ST H2C Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_st_h2c_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_h2c_err_reg_list[] = {
+		QDMA_OFFSET_H2C_ERR_STAT,
+		QDMA_OFFSET_H2C_FIRST_ERR_QID,
+		QDMA_OFFSET_H2C_DBG_REG0,
+		QDMA_OFFSET_H2C_DBG_REG1,
+		QDMA_OFFSET_H2C_DBG_REG2,
+		QDMA_OFFSET_H2C_DBG_REG3,
+		QDMA_OFFSET_H2C_DBG_REG4
+	};
+	int st_h2c_err_num_regs = sizeof(st_h2c_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_h2c_err_num_regs; i++) {
+		qdma_dump_reg_info(dev_hndl, st_h2c_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_hw_st_c2h_err_process() - Function to dump ST C2H Error information
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: void
+ *****************************************************************************/
+static void qdma_hw_st_c2h_err_process(void *dev_hndl)
+{
+	int i = 0;
+	uint32_t st_c2h_err_reg_list[] = {
+		QDMA_OFFSET_C2H_ERR_STAT,
+		QDMA_OFFSET_C2H_FATAL_ERR_STAT,
+		QDMA_OFFSET_C2H_FIRST_ERR_QID,
+		QDMA_OFFSET_C2H_STAT_S_AXIS_C2H_ACCEPTED,
+		QDMA_OFFSET_C2H_STAT_S_AXIS_CMPT_ACCEPTED,
+		QDMA_OFFSET_C2H_STAT_DESC_RSP_PKT_ACCEPTED,
+		QDMA_OFFSET_C2H_STAT_AXIS_PKG_CMP,
+		QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_0,
+		QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_1,
+		QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_2,
+		QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_3,
+		QDMA_OFFSET_C2H_STAT_DESC_RSP_DROP_ACCEPTED,
+		QDMA_OFFSET_C2H_STAT_DESC_RSP_ERR_ACCEPTED
+	};
+	int st_c2h_err_num_regs = sizeof(st_c2h_err_reg_list) / sizeof(uint32_t);
+
+	for (i = 0; i < st_c2h_err_num_regs; i++) {
+		qdma_dump_reg_info(dev_hndl, st_c2h_err_reg_list[i],
+					1, NULL, 0);
+	}
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_hw_get_error_name() - Function to get the error name in string format
+ *
+ * @err_idx: error index
+ *
+ * Return: string - success and NULL on failure
+ *****************************************************************************/
+const char *qdma_hw_get_error_name(uint32_t err_idx)
+{
+	if (err_idx >= QDMA_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, returning NULL\n",
+				__func__, (enum qdma_error_idx)err_idx);
+		return NULL;
+	}
+
+	return qdma_err_info[(enum qdma_error_idx)err_idx].err_name;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_hw_error_process() - Function to find the error that got
+ * triggered and call the handler qdma_hw_error_handler of that
+ * particular error.
+ *
+ * @dev_hndl: device handle
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_hw_error_process(void *dev_hndl)
+{
+	uint32_t glbl_err_stat = 0, err_stat = 0;
+	uint32_t bit = 0, i = 0;
+	int32_t idx = 0;
+	struct qdma_dev_attributes dev_cap;
+	uint32_t hw_err_position[TOTAL_LEAF_ERROR_AGGREGATORS] = {
+		QDMA_DSC_ERR_POISON,
+		QDMA_TRQ_ERR_UNMAPPED,
+		QDMA_ST_C2H_ERR_MTY_MISMATCH,
+		QDMA_ST_FATAL_ERR_MTY_MISMATCH,
+		QDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+		QDMA_SBE_ERR_MI_H2C0_DAT,
+		QDMA_DBE_ERR_MI_H2C0_DAT
+	};
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	glbl_err_stat = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_ERR_STAT);
+	if (!glbl_err_stat)
+		return QDMA_HW_ERR_NOT_DETECTED;
+
+	qdma_log_info("addr = 0x%08x val = 0x%08x",
+			QDMA_OFFSET_GLBL_ERR_STAT,
+			glbl_err_stat);
+	for (i = 0; i < TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+		bit = hw_err_position[i];
+
+		if (!dev_cap.st_en && (bit == QDMA_ST_C2H_ERR_MTY_MISMATCH ||
+				bit == QDMA_ST_FATAL_ERR_MTY_MISMATCH ||
+				bit == QDMA_ST_H2C_ERR_ZERO_LEN_DESC))
+			continue;
+
+		err_stat = qdma_reg_read(dev_hndl,
+				qdma_err_info[bit].stat_reg_addr);
+
+		if (err_stat) {
+			qdma_log_info("addr = 0x%08x val = 0x%08x",
+					qdma_err_info[bit].stat_reg_addr,
+					err_stat);
+
+			qdma_err_info[bit].qdma_hw_err_process(dev_hndl);
+
+			for (idx = bit; idx < all_hw_errs[i]; idx++) {
+				/* call the platform specific handler */
+				if (err_stat & qdma_err_info[idx].leaf_err_mask)
+					qdma_log_error("%s detected %s",
+						__func__,
+						qdma_hw_get_error_name(idx));
+			}
+
+			qdma_reg_write(dev_hndl,
+				qdma_err_info[bit].stat_reg_addr,
+				err_stat);
+		}
+	}
+
+	/* Write 1 to the global status register to clear the bits */
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_ERR_STAT, glbl_err_stat);
+
+	return QDMA_SUCCESS;
+}
+
+
+/*****************************************************************************/
+/**
+ * qdma_hw_error_enable() - Function to enable all or a specific error
+ *
+ * @dev_hndl: device handle
+ * @err_idx: error index
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_hw_error_enable(void *dev_hndl, uint32_t err_idx)
+{
+	uint32_t idx = 0, i = 0;
+	uint32_t reg_val = 0;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (err_idx > QDMA_ERRS_ALL) {
+		qdma_log_error("%s: err_idx=%d is invalid, err:%d\n",
+					   __func__, err_idx,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (err_idx == QDMA_ERRS_ALL) {
+		for (i = 0; i < TOTAL_LEAF_ERROR_AGGREGATORS; i++) {
+			idx = all_hw_errs[i];
+
+			/* Don't access streaming registers in
+			 * MM only bitstreams
+			 */
+			if (!dev_cap.st_en) {
+				if (idx == QDMA_ST_C2H_ERR_ALL ||
+					idx == QDMA_ST_FATAL_ERR_ALL ||
+					idx == QDMA_ST_H2C_ERR_ALL)
+					continue;
+			}
+
+			reg_val = qdma_err_info[idx].leaf_err_mask;
+			qdma_reg_write(dev_hndl,
+				qdma_err_info[idx].mask_reg_addr, reg_val);
+
+			reg_val = qdma_reg_read(dev_hndl,
+					QDMA_OFFSET_GLBL_ERR_MASK);
+			reg_val |= FIELD_SET
+				(qdma_err_info[idx].global_err_mask, 1);
+			qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_ERR_MASK,
+					reg_val);
+		}
+
+	} else {
+		/* Don't access streaming registers in MM only bitstreams
+		 *  QDMA_C2H_ERR_MTY_MISMATCH to QDMA_H2C_ERR_ALL are all
+		 *  ST errors
+		 */
+		if (!dev_cap.st_en) {
+			if (err_idx >= QDMA_ST_C2H_ERR_MTY_MISMATCH &&
+					err_idx <= QDMA_ST_H2C_ERR_ALL)
+				return QDMA_SUCCESS;
+		}
+
+		reg_val = qdma_reg_read(dev_hndl,
+				qdma_err_info[err_idx].mask_reg_addr);
+		reg_val |= FIELD_SET(qdma_err_info[err_idx].leaf_err_mask, 1);
+		qdma_reg_write(dev_hndl,
+				qdma_err_info[err_idx].mask_reg_addr, reg_val);
+
+		reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_ERR_MASK);
+		reg_val |= FIELD_SET(qdma_err_info[err_idx].global_err_mask, 1);
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_ERR_MASK, reg_val);
+	}
+
+	return QDMA_SUCCESS;
+}
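+
+/*
+ * Usage sketch (illustrative): enable reporting of all errors during init,
+ *
+ *	rv = qdma_hw_error_enable(dev_hndl, QDMA_ERRS_ALL);
+ *
+ * and later, from the error interrupt handler or a service loop, identify
+ * and clear whatever fired:
+ *
+ *	rv = qdma_hw_error_process(dev_hndl);
+ *	if (rv == QDMA_HW_ERR_NOT_DETECTED)
+ *		return;
+ */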
+
+
+/*****************************************************************************/
+/**
+ * qdma_soft_dump_config_regs() - Function to get qdma config register dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @is_vf:      Whether PF or VF
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	Length up to which the buffer is filled - success and < 0 - failure
+ *****************************************************************************/
+int qdma_soft_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen)
+{
+	uint32_t i = 0, j = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs =
+		sizeof(qdma_config_regs) / sizeof((qdma_config_regs)[0]);
+	uint32_t len = 0, val = 0;
+	int rv = QDMA_SUCCESS;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (buflen < qdma_soft_reg_dump_buf_len()) {
+		qdma_log_error("%s: Buffer too small, err:%d\n",
+					__func__, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	if (is_vf) {
+		qdma_log_error("%s: Wrong API used for VF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	reg_info = qdma_config_regs;
+	for (i = 0; i < num_regs; i++) {
+		int mask = get_capability_mask(dev_cap.mm_en, dev_cap.st_en,
+				dev_cap.mm_cmpt_en, dev_cap.mailbox_en);
+
+		if ((mask & reg_info[i].mode) == 0)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s_%d", reg_info[i].name, j);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+			val = qdma_reg_read(dev_hndl,
+					(reg_info[i].addr + (j * 4)));
+			rv = dump_reg(buf + len, buflen - len,
+					(reg_info[i].addr + (j * 4)),
+						name, val);
+			if (rv < 0) {
+				qdma_log_error
+				("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+	}
+
+	return len;
+}
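+
+/*
+ * Usage sketch (illustrative): the caller sizes the buffer with
+ * qdma_soft_reg_dump_buf_len() and provides the storage itself.
+ *
+ *	uint32_t buflen = qdma_soft_reg_dump_buf_len();
+ *	char *buf = caller_alloc(buflen);	// hypothetical allocator
+ *	int len;
+ *
+ *	len = qdma_soft_dump_config_regs(dev_hndl, 0, buf, buflen);
+ */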
+
+/*
+ * qdma_fill_intr_ctxt() - Helper function to fill the interrupt context
+ *                         into the context entries table
+ *
+ */
+static void qdma_fill_intr_ctxt(struct qdma_indirect_intr_ctxt *intr_ctxt)
+{
+	ind_intr_ctxt_entries[0].value = intr_ctxt->valid;
+	ind_intr_ctxt_entries[1].value = intr_ctxt->vec;
+	ind_intr_ctxt_entries[2].value = intr_ctxt->int_st;
+	ind_intr_ctxt_entries[3].value = intr_ctxt->color;
+	ind_intr_ctxt_entries[4].value =
+			intr_ctxt->baddr_4k & 0xFFFFFFFF;
+	ind_intr_ctxt_entries[5].value =
+			(intr_ctxt->baddr_4k >> 32) & 0xFFFFFFFF;
+	ind_intr_ctxt_entries[6].value = intr_ctxt->page_size;
+	ind_intr_ctxt_entries[7].value = intr_ctxt->pidx;
+	ind_intr_ctxt_entries[8].value = intr_ctxt->at;
+}
+
+
+static uint32_t qdma_intr_context_buf_len(void)
+{
+	uint32_t len = 0;
+
+	len += (((sizeof(ind_intr_ctxt_entries) /
+			sizeof(ind_intr_ctxt_entries[0])) + 1) *
+			REG_DUMP_SIZE_PER_LINE);
+	return len;
+}
+
+/*
+ * dump_intr_context() - Helper function to dump interrupt context into string
+ *
+ * return len - length of the string copied into buffer
+ */
+static int dump_intr_context(struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, int buf_sz)
+{
+	int i = 0;
+	int n;
+	int len = 0;
+	int rv;
+	char banner[DEBGFS_LINE_SZ];
+
+	qdma_fill_intr_ctxt(intr_ctx);
+
+	for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
+		rv = QDMA_SNPRINTF_S(banner + i,
+			(DEBGFS_LINE_SZ - i),
+			sizeof("-"), "-");
+		if (rv < 0 || rv > (int)sizeof("-")) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+	}
+
+	/* Interrupt context dump */
+	n = sizeof(ind_intr_ctxt_entries) /
+			sizeof((ind_intr_ctxt_entries)[0]);
+	for (i = 0; i < n; i++) {
+		if (len >= buf_sz || ((len + DEBGFS_LINE_SZ) >= buf_sz))
+			goto INSUF_BUF_EXIT;
+
+		if (i == 0) {
+			if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
+				goto INSUF_BUF_EXIT;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%50s %d",
+				"Interrupt Context for ring#", ring_index);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+
+			rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
+				DEBGFS_LINE_SZ, "\n%s\n", banner);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error
+					("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				goto INSUF_BUF_EXIT;
+			}
+			len += rv;
+		}
+
+		rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len), DEBGFS_LINE_SZ,
+			"%-47s %#-10x %u\n",
+			ind_intr_ctxt_entries[i].name,
+			ind_intr_ctxt_entries[i].value,
+			ind_intr_ctxt_entries[i].value);
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+			goto INSUF_BUF_EXIT;
+		}
+		len += rv;
+	}
+
+	return len;
+
+INSUF_BUF_EXIT:
+	if (buf_sz > DEBGFS_LINE_SZ) {
+		rv = QDMA_SNPRINTF_S((buf + buf_sz - DEBGFS_LINE_SZ),
+			buf_sz, DEBGFS_LINE_SZ,
+			"\n\nInsufficient buffer size, partial context dump\n");
+		if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+			qdma_log_error
+				("%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
+				__LINE__, __func__,
+				rv);
+		}
+	}
+
+	qdma_log_error("%s: Insufficient buffer size, err:%d\n",
+		__func__, -QDMA_ERR_NO_MEM);
+
+	return -QDMA_ERR_NO_MEM;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_dump_intr_context() - Function to get qdma interrupt context dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @intr_ctx:   interrupt context data to be dumped
+ * @ring_index: ring index
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written into the buffer on success, < 0 on failure
+ *****************************************************************************/
+int qdma_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!intr_ctx) {
+		qdma_log_error("%s: intr_ctx is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	req_buflen = qdma_intr_context_buf_len();
+	if (buflen < req_buflen) {
+		qdma_log_error("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+			__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_intr_context(intr_ctx, ring_index, buf, buflen);
+
+	return rv;
+}
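+
+/*
+ * Illustrative usage sketch (not part of this patch): read the context
+ * through qdma_indirect_intr_ctx_conf() and then dump it. The 4 KB
+ * buffer is assumed to satisfy the qdma_intr_context_buf_len() check.
+ *
+ *	struct qdma_indirect_intr_ctxt ctx;
+ *	char buf[4096];
+ *
+ *	if (qdma_indirect_intr_ctx_conf(dev_hndl, ring_index, &ctx,
+ *			QDMA_HW_ACCESS_READ) == QDMA_SUCCESS &&
+ *	    qdma_dump_intr_context(dev_hndl, &ctx, ring_index,
+ *			buf, sizeof(buf)) > 0)
+ *		qdma_log_info("%s", buf);
+ */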
+
+/*****************************************************************************/
+/**
+ * qdma_soft_dump_queue_context() - Function to get qdma queue context dump in a
+ * buffer
+ *
+ * @dev_hndl:   device handle
+ * @st:			Queue Mode (ST or MM)
+ * @q_type:		Queue Type
+ * @ctxt_data:  Context Data
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written into the buffer on success, < 0 on failure
+ *****************************************************************************/
+int qdma_soft_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen)
+{
+	int rv = 0;
+	uint32_t req_buflen = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!ctxt_data) {
+		qdma_log_error("%s: ctxt_data is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+	if (q_type >= QDMA_DEV_Q_TYPE_MAX) {
+		qdma_log_error("%s: invalid q_type, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_soft_context_buf_len(st, q_type, &req_buflen);
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error
+		("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+		__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	rv = dump_soft_context(ctxt_data, st, q_type, buf, buflen);
+
+	return rv;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_soft_read_dump_queue_context() - Function to read and dump the queue
+ * context in the user-provided buffer. This API is valid only for PF and
+ * should not be used for VFs. For VFs, use the qdma_dump_queue_context() API
+ * after reading the context through the mailbox.
+ *
+ * @dev_hndl:   device handle
+ * @qid_hw:     queue id
+ * @st:		Queue Mode (ST or MM)
+ * @q_type:	Queue type (H2C/C2H/CMPT)
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written into the buffer on success, < 0 on failure
+ *****************************************************************************/
+int qdma_soft_read_dump_queue_context(void *dev_hndl,
+				uint16_t qid_hw,
+				uint8_t st,
+				enum qdma_dev_q_type q_type,
+				char *buf, uint32_t buflen)
+{
+	int rv = QDMA_SUCCESS;
+	uint32_t req_buflen = 0;
+	struct qdma_descq_context context;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (q_type >= QDMA_DEV_Q_TYPE_MAX) {
+		qdma_log_error("%s: Not supported for q_type, err = %d\n",
+			__func__, -QDMA_ERR_INV_PARAM);
+
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = qdma_soft_context_buf_len(st, q_type, &req_buflen);
+
+	if (rv != QDMA_SUCCESS)
+		return rv;
+
+	if (buflen < req_buflen) {
+		qdma_log_error
+		("%s: Too small buffer(%d), reqd(%d), err:%d\n",
+		__func__, buflen, req_buflen, -QDMA_ERR_NO_MEM);
+		return -QDMA_ERR_NO_MEM;
+	}
+
+	qdma_memset(&context, 0, sizeof(struct qdma_descq_context));
+
+	if (q_type != QDMA_DEV_Q_TYPE_CMPT) {
+		rv = qdma_sw_ctx_conf(dev_hndl, (uint8_t)q_type, qid_hw,
+				&context.sw_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s:sw ctxt read fail, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = qdma_hw_ctx_conf(dev_hndl, (uint8_t)q_type, qid_hw,
+				&context.hw_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s:hw ctxt read fail, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		rv = qdma_credit_ctx_conf(dev_hndl, (uint8_t)q_type,
+				qid_hw,
+				&context.cr_ctxt,
+				QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s:cr ctxt read fail, err = %d",
+					__func__, rv);
+			return rv;
+		}
+
+		if (st && q_type == QDMA_DEV_Q_TYPE_C2H) {
+			rv = qdma_pfetch_ctx_conf(dev_hndl,
+				qid_hw, &context.pfetch_ctxt,
+				QDMA_HW_ACCESS_READ);
+			if (rv < 0) {
+				qdma_log_error
+				("%s:pftch ctxt read fail, err = %d", __func__, rv);
+				return rv;
+			}
+		}
+	}
+
+	if ((st && q_type == QDMA_DEV_Q_TYPE_C2H) ||
+		(!st && q_type == QDMA_DEV_Q_TYPE_CMPT)) {
+		rv = qdma_cmpt_ctx_conf(dev_hndl, qid_hw,
+					 &context.cmpt_ctxt,
+					 QDMA_HW_ACCESS_READ);
+		if (rv < 0) {
+			qdma_log_error("%s:cmpt ctxt read fail, err = %d",
+					__func__, rv);
+			return rv;
+		}
+	}
+
+	rv = dump_soft_context(&context, st, q_type, buf, buflen);
+
+	return rv;
+}
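+
+/*
+ * Illustrative usage sketch (not part of this patch): dump the contexts
+ * of an ST C2H queue on the PF. The 4 KB buffer is assumed to satisfy
+ * the qdma_soft_context_buf_len() check.
+ *
+ *	char buf[4096];
+ *
+ *	if (qdma_soft_read_dump_queue_context(dev_hndl, qid_hw, 1,
+ *			QDMA_DEV_Q_TYPE_C2H, buf, sizeof(buf)) > 0)
+ *		qdma_log_info("%s", buf);
+ */
+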
+/*****************************************************************************/
+/**
+ * qdma_is_legacy_intr_pend() - function to check the legacy interrupt
+ * pending status bit
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: QDMA_SUCCESS if a legacy interrupt is pending, < 0 otherwise
+ *****************************************************************************/
+int qdma_is_legacy_intr_pend(void *dev_hndl)
+{
+	uint32_t reg_val;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_INTERRUPT_CFG);
+	if (FIELD_GET(QDMA_GLBL_INTR_LGCY_INTR_PEND_MASK, reg_val))
+		return QDMA_SUCCESS;
+
+	qdma_log_error("%s: no pending legacy intr, err:%d\n",
+				   __func__, -QDMA_ERR_HWACC_NO_PEND_LEGCY_INTR);
+	return -QDMA_ERR_HWACC_NO_PEND_LEGCY_INTR;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_clear_pend_legacy_intr() - function to clear the legacy interrupt
+ * pending bit
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: QDMA_SUCCESS on success, < 0 on failure
+ *****************************************************************************/
+int qdma_clear_pend_legacy_intr(void *dev_hndl)
+{
+	uint32_t reg_val;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_INTERRUPT_CFG);
+	reg_val |= FIELD_SET(QDMA_GLBL_INTR_LGCY_INTR_PEND_MASK, 1);
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_INTERRUPT_CFG, reg_val);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_legacy_intr_conf() - function to enable/disable the legacy interrupt
+ *
+ * @dev_hndl: device handle
+ * @enable: enable/disable flag. 1 - enable, 0 - disable
+ *
+ * Return: QDMA_SUCCESS on success, < 0 on failure
+ *****************************************************************************/
+int qdma_legacy_intr_conf(void *dev_hndl, enum status_type enable)
+{
+	uint32_t reg_val;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_INTERRUPT_CFG);
+	reg_val |= FIELD_SET(QDMA_GLBL_INTR_CFG_EN_LGCY_INTR_MASK, enable);
+	qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_INTERRUPT_CFG, reg_val);
+
+	return QDMA_SUCCESS;
+}
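+
+/*
+ * Illustrative usage sketch (not part of this patch): a legacy (INTx)
+ * handler would check the pending bit, clear it and re-enable the
+ * interrupt before exiting. ENABLE is assumed to be a member of the
+ * status_type enum.
+ *
+ *	if (qdma_is_legacy_intr_pend(dev_hndl) == QDMA_SUCCESS) {
+ *		(process the interrupt source here)
+ *		qdma_clear_pend_legacy_intr(dev_hndl);
+ *		qdma_legacy_intr_conf(dev_hndl, ENABLE);
+ *	}
+ */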
+
+/*****************************************************************************/
+/**
+ * qdma_init_ctxt_memory() - function to initialize the context memory
+ *
+ * @dev_hndl: device handle
+ *
+ * Return: QDMA_SUCCESS on success, < 0 on failure
+ *****************************************************************************/
+int qdma_init_ctxt_memory(void *dev_hndl)
+{
+#ifdef ENABLE_INIT_CTXT_MEMORY
+	uint32_t data[QDMA_REG_IND_CTXT_REG_COUNT];
+	uint16_t i = 0;
+	struct qdma_dev_attributes dev_info;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_memset(data, 0, sizeof(uint32_t) * QDMA_REG_IND_CTXT_REG_COUNT);
+	qdma_get_device_attributes(dev_hndl, &dev_info);
+
+	for (; i < dev_info.num_qs; i++) {
+		int sel = QDMA_CTXT_SEL_SW_C2H;
+		int rv;
+
+		for (; sel <= QDMA_CTXT_SEL_PFTCH; sel++) {
+			/** if the st mode(h2c/c2h) not enabled
+			 *  in the design, then skip the PFTCH
+			 *  and CMPT context setup
+			 */
+			if (dev_info.st_en == 0 &&
+			    (sel == QDMA_CTXT_SEL_PFTCH ||
+				sel == QDMA_CTXT_SEL_CMPT)) {
+				qdma_log_debug("%s: ST context is skipped:",
+					__func__);
+				qdma_log_debug("sel = %d\n", sel);
+				continue;
+			}
+
+			rv = qdma_indirect_reg_clear(dev_hndl,
+					(enum ind_ctxt_cmd_sel)sel, i);
+			if (rv < 0)
+				return rv;
+		}
+	}
+
+	/* fmap */
+	for (i = 0; i < dev_info.num_pfs; i++)
+		qdma_indirect_reg_clear(dev_hndl,
+				QDMA_CTXT_SEL_FMAP, i);
+
+#else
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+#endif
+	return QDMA_SUCCESS;
+}
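+
+/*
+ * Illustrative usage sketch (not part of this patch): during PF
+ * bring-up the context memory is typically cleared before the default
+ * global CSRs are programmed.
+ *
+ *	if (qdma_init_ctxt_memory(dev_hndl) == QDMA_SUCCESS)
+ *		qdma_set_default_global_csr(dev_hndl);
+ */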
+
+static int get_reg_entry(uint32_t reg_addr, int *reg_entry)
+{
+	uint32_t i = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs =
+		sizeof(qdma_config_regs) /
+		sizeof((qdma_config_regs)[0]);
+
+	reg_info = qdma_config_regs;
+
+	for (i = 0; (i < num_regs - 1); i++) {
+		if (reg_info[i].addr == reg_addr) {
+			*reg_entry = i;
+			break;
+		}
+	}
+
+	if (i >= num_regs - 1) {
+		qdma_log_error("%s: 0x%08x is missing from register list, err:%d\n",
+					__func__,
+					reg_addr,
+					-QDMA_ERR_INV_PARAM);
+		*reg_entry = -1;
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	return 0;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_soft_dump_config_reg_list() - Dump the given registers into a buffer
+ *
+ * @dev_hndl:		device handle
+ * @total_regs :	Max registers to dump
+ * @reg_list :		array of reg addr and reg values
+ * @buf :		pointer to buffer to be filled
+ * @buflen :		Length of the buffer
+ *
+ * Return: length of data written into the buffer on success, < 0 on failure
+ *****************************************************************************/
+int qdma_soft_dump_config_reg_list(void *dev_hndl, uint32_t total_regs,
+		struct qdma_reg_data *reg_list, char *buf, uint32_t buflen)
+{
+	uint32_t j = 0, len = 0;
+	uint32_t reg_count = 0;
+	int reg_data_entry;
+	int rv = 0;
+	char name[DEBGFS_GEN_NAME_SZ] = "";
+	struct xreg_info *reg_info = qdma_config_regs;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!buf) {
+		qdma_log_error("%s: buf is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	for (reg_count = 0;
+			(reg_count < total_regs);) {
+		rv = get_reg_entry(reg_list[reg_count].reg_addr,
+					&reg_data_entry);
+		if (rv < 0) {
+			qdma_log_error("%s: register missing in list, err:%d\n",
+						   __func__,
+						   -QDMA_ERR_INV_PARAM);
+			return rv;
+		}
+
+		for (j = 0; j < reg_info[reg_data_entry].repeat; j++) {
+			rv = QDMA_SNPRINTF_S(name, DEBGFS_GEN_NAME_SZ,
+					DEBGFS_GEN_NAME_SZ,
+					"%s_%d",
+					reg_info[reg_data_entry].name, j);
+			if (rv < 0 || rv > DEBGFS_GEN_NAME_SZ) {
+				qdma_log_error("%d:%s snprintf failed, err:%d\n",
+					__LINE__, __func__,
+					rv);
+				return -QDMA_ERR_NO_MEM;
+			}
+			rv = dump_reg(buf + len, buflen - len,
+				(reg_info[reg_data_entry].addr + (j * 4)),
+					name,
+					reg_list[reg_count + j].reg_val);
+			if (rv < 0) {
+				qdma_log_error("%s Buff too small, err:%d\n",
+				__func__,
+				-QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			len += rv;
+		}
+		reg_count += j;
+	}
+
+	return len;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_read_reg_list() - read the register values
+ *
+ * @dev_hndl:		device handle
+ * @is_vf:		Whether PF or VF
+ * @reg_rd_group:	register group to read (QDMA_REG_READ_GROUP_*)
+ * @total_regs :	output, number of registers read
+ * @reg_list :		array of reg addr and reg values
+ *
+ * Return: 0 on success, < 0 on failure
+ *****************************************************************************/
+int qdma_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_group,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list)
+{
+	uint16_t reg_count = 0, i = 0, j = 0;
+	struct xreg_info *reg_info;
+	uint32_t num_regs =
+		sizeof(qdma_config_regs) /
+		sizeof((qdma_config_regs)[0]);
+	struct qdma_dev_attributes dev_cap;
+	uint32_t reg_start_addr = 0;
+	int reg_index = 0;
+	int rv = 0;
+
+	if (!is_vf) {
+		qdma_log_error("%s: not supported for PF, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!reg_list) {
+		qdma_log_error("%s: reg_list is NULL, err:%d\n",
+					   __func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	switch (reg_rd_group) {
+	case QDMA_REG_READ_GROUP_1:
+			reg_start_addr = QDMA_REG_GROUP_1_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_2:
+			reg_start_addr = QDMA_REG_GROUP_2_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_3:
+			reg_start_addr = QDMA_REG_GROUP_3_START_ADDR;
+			break;
+	case QDMA_REG_READ_GROUP_4:
+			reg_start_addr = QDMA_REG_GROUP_4_START_ADDR;
+			break;
+	default:
+		qdma_log_error("%s: Invalid group received\n",
+			   __func__);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	rv = get_reg_entry(reg_start_addr, &reg_index);
+	if (rv < 0) {
+		qdma_log_error("%s: register missing in list, err:%d\n",
+					   __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return rv;
+	}
+	reg_info = &qdma_config_regs[reg_index];
+
+	for (i = 0, reg_count = 0;
+			((i < num_regs - 1 - reg_index) &&
+			(reg_count < QDMA_MAX_REGISTER_DUMP)); i++) {
+		int mask = get_capability_mask(dev_cap.mm_en, dev_cap.st_en,
+				dev_cap.mm_cmpt_en, dev_cap.mailbox_en);
+
+		if (((mask & reg_info[i].mode) == 0) ||
+			reg_info[i].read_type == QDMA_REG_READ_PF_ONLY)
+			continue;
+
+		for (j = 0; j < reg_info[i].repeat &&
+				(reg_count < QDMA_MAX_REGISTER_DUMP);
+				j++) {
+			reg_list[reg_count].reg_addr =
+					(reg_info[i].addr + (j * 4));
+			reg_list[reg_count].reg_val =
+				qdma_reg_read(dev_hndl,
+					reg_list[reg_count].reg_addr);
+			reg_count++;
+		}
+	}
+
+	*total_regs = reg_count;
+	return rv;
+}
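+
+/*
+ * Illustrative usage sketch (not part of this patch): a VF reads up to
+ * QDMA_MAX_REGISTER_DUMP register/value pairs of one group and hands
+ * them to qdma_soft_dump_config_reg_list() for formatting.
+ *
+ *	struct qdma_reg_data reg_list[QDMA_MAX_REGISTER_DUMP];
+ *	uint16_t total_regs = 0;
+ *	char buf[4096];
+ *
+ *	if (qdma_read_reg_list(dev_hndl, 1, QDMA_REG_READ_GROUP_1,
+ *			&total_regs, reg_list) == QDMA_SUCCESS)
+ *		qdma_soft_dump_config_reg_list(dev_hndl, total_regs,
+ *				reg_list, buf, sizeof(buf));
+ */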
+
+/*****************************************************************************/
+/**
+ * qdma_write_global_ring_sizes() - function to set the global ring size array
+ *
+ * @dev_hndl:   device handle
+ * @index: Index from which the values need to be written
+ * @count: number of entries to be written
+ * @glbl_rng_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_write_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_write_csr_values(dev_hndl, QDMA_OFFSET_GLBL_RNG_SZ, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_read_global_ring_sizes() - function to get the global rng_sz array
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_rng_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_read_global_ring_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_rng_sz)
+{
+	if (!dev_hndl || !glbl_rng_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_rng_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_rng_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_RING_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_RING_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_read_csr_values(dev_hndl, QDMA_OFFSET_GLBL_RNG_SZ, index, count,
+			glbl_rng_sz);
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_write_global_timer_count() - function to set the timer values
+ *
+ * @dev_hndl:   device handle
+ * @glbl_tmr_cnt: pointer to the array having the values to write
+ * @index:	 Index from which the values need to be written
+ * @count:	 number of entries to be written
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_write_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, const uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_TIMER_CNT,
+				index, count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_read_global_timer_count() - function to get the timer values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_tmr_cnt: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_read_global_timer_count(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_tmr_cnt)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_tmr_cnt || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_tmr_cnt=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_tmr_cnt,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_TIMERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_TIMERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl,
+				QDMA_OFFSET_C2H_TIMER_CNT, index,
+				count, glbl_tmr_cnt);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_write_global_counter_threshold() - function to set the counter
+ *						threshold values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be written
+ * @count:	 number of entries to be written
+ * @glbl_cnt_th: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_write_global_counter_threshold(void *dev_hndl, uint8_t index,
+		uint8_t count, const uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_CNT_TH, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_read_global_counter_threshold() - function to get the counter threshold
+ * values
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_cnt_th: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_read_global_counter_threshold(void *dev_hndl, uint8_t index,
+		uint8_t count, uint32_t *glbl_cnt_th)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_cnt_th || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_cnt_th=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_cnt_th,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_COUNTERS) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_COUNTERS,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		qdma_read_csr_values(dev_hndl, QDMA_OFFSET_C2H_CNT_TH, index,
+				count, glbl_cnt_th);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_write_global_buffer_sizes() - function to set the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be written
+ * @count:	 number of entries to be written
+ * @glbl_buf_sz: pointer to the array having the values to write
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_write_global_buffer_sizes(void *dev_hndl, uint8_t index,
+		uint8_t count, const uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_write_csr_values(dev_hndl, QDMA_OFFSET_C2H_BUF_SZ, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST not supported, err:%d\n",
+				__func__,
+				-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_read_global_buffer_sizes() - function to get the buffer sizes
+ *
+ * @dev_hndl:   device handle
+ * @index:	 Index from which the values need to be read
+ * @count:	 number of entries to be read
+ * @glbl_buf_sz: pointer to array to hold the values read
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_read_global_buffer_sizes(void *dev_hndl, uint8_t index,
+				uint8_t count, uint32_t *glbl_buf_sz)
+{
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl || !glbl_buf_sz || !count) {
+		qdma_log_error("%s: dev_hndl=%p glbl_buf_sz=%p, err:%d\n",
+					   __func__, dev_hndl, glbl_buf_sz,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if ((index + count) > QDMA_NUM_C2H_BUFFER_SIZES) {
+		qdma_log_error("%s: index=%u count=%u > %d, err:%d\n",
+					   __func__, index, count,
+					   QDMA_NUM_C2H_BUFFER_SIZES,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en) {
+		qdma_read_csr_values(dev_hndl, QDMA_OFFSET_C2H_BUF_SZ, index,
+				count, glbl_buf_sz);
+	} else {
+		qdma_log_error("%s: ST is not supported, err:%d\n",
+					__func__,
+					-QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_global_csr_conf() - function to configure global csr
+ *
+ * @dev_hndl:	device handle
+ * @index:	Index from which the values need to be read or written
+ * @count:	number of entries to be read
+ * @csr_val:	uint32_t pointer to csr value
+ * @csr_type:	Type of the CSR (qdma_global_csr_type enum) to configure
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * (index + count) shall not be more than 16
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_global_csr_conf(void *dev_hndl, uint8_t index, uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (csr_type) {
+	case QDMA_CSR_RING_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = qdma_read_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = qdma_write_global_ring_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+							-QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_TIMER_CNT:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv = qdma_read_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv = qdma_write_global_timer_count(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+							-QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_CNT_TH:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			qdma_read_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			qdma_write_global_counter_threshold(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+							-QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	case QDMA_CSR_BUF_SZ:
+		switch (access_type) {
+		case QDMA_HW_ACCESS_READ:
+			rv =
+			qdma_read_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		case QDMA_HW_ACCESS_WRITE:
+			rv =
+			qdma_write_global_buffer_sizes(dev_hndl,
+						index,
+						count,
+						csr_val);
+			break;
+		default:
+			qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+							__func__,
+							access_type,
+							-QDMA_ERR_INV_PARAM);
+			rv = -QDMA_ERR_INV_PARAM;
+			break;
+		}
+		break;
+	default:
+		qdma_log_error("%s: csr_type(%d) invalid, err:%d\n",
+						__func__,
+						csr_type,
+						-QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
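+
+/*
+ * Illustrative usage sketch (not part of this patch): reading all
+ * QDMA_NUM_RING_SIZES global ring size entries through the generic
+ * CSR interface.
+ *
+ *	uint32_t rng_sz[QDMA_NUM_RING_SIZES];
+ *
+ *	qdma_global_csr_conf(dev_hndl, 0, QDMA_NUM_RING_SIZES, rng_sz,
+ *			QDMA_CSR_RING_SZ, QDMA_HW_ACCESS_READ);
+ */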
+
+/*****************************************************************************/
+/**
+ * qdma_global_writeback_interval_write() -  function to set the writeback
+ * interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_global_writeback_interval_write(void *dev_hndl,
+		enum qdma_wrb_interval wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (wb_int >= QDMA_NUM_WRB_INTERVALS) {
+		qdma_log_error("%s: wb_int=%d is invalid, err:%d\n",
+					   __func__, wb_int,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_DSC_CFG);
+		reg_val |= FIELD_SET(QDMA_GLBL_DSC_CFG_WB_ACC_INT_MASK, wb_int);
+
+		qdma_reg_write(dev_hndl, QDMA_OFFSET_GLBL_DSC_CFG, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_global_writeback_interval_read() -  function to get the writeback
+ * interval
+ *
+ * @dev_hndl:	device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+static int qdma_global_writeback_interval_read(void *dev_hndl,
+		enum qdma_wrb_interval *wb_int)
+{
+	uint32_t reg_val;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	if (!wb_int) {
+		qdma_log_error("%s: wb_int is NULL, err:%d\n", __func__,
+					   -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.st_en || dev_cap.mm_cmpt_en) {
+		reg_val = qdma_reg_read(dev_hndl, QDMA_OFFSET_GLBL_DSC_CFG);
+		*wb_int = (enum qdma_wrb_interval)FIELD_GET
+			(QDMA_GLBL_DSC_CFG_WB_ACC_INT_MASK, reg_val);
+	} else {
+		qdma_log_error("%s: ST or MM cmpt not supported, err:%d\n",
+			   __func__, -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED);
+		return -QDMA_ERR_HWACC_FEATURE_NOT_SUPPORTED;
+	}
+
+	return QDMA_SUCCESS;
+}
+
+/*****************************************************************************/
+/**
+ * qdma_global_writeback_interval_conf() - function to configure
+ *					the writeback interval
+ *
+ * @dev_hndl:   device handle
+ * @wb_int:	pointer to the data to hold Writeback Interval
+ * @access_type: HW access type (qdma_hw_access_type enum) value
+ *		QDMA_HW_ACCESS_CLEAR - Not supported
+ *		QDMA_HW_ACCESS_INVALIDATE - Not supported
+ *
+ * Return:	0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type)
+{
+	int rv = QDMA_SUCCESS;
+
+	switch (access_type) {
+	case QDMA_HW_ACCESS_READ:
+		rv = qdma_global_writeback_interval_read(dev_hndl, wb_int);
+		break;
+	case QDMA_HW_ACCESS_WRITE:
+		rv = qdma_global_writeback_interval_write(dev_hndl, *wb_int);
+		break;
+	case QDMA_HW_ACCESS_CLEAR:
+	case QDMA_HW_ACCESS_INVALIDATE:
+	default:
+		qdma_log_error("%s: access_type(%d) invalid, err:%d\n",
+						__func__,
+						access_type,
+						-QDMA_ERR_INV_PARAM);
+		rv = -QDMA_ERR_INV_PARAM;
+		break;
+	}
+
+	return rv;
+}
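+
+/*
+ * Illustrative usage sketch (not part of this patch): setting the
+ * writeback interval through the conf wrapper. QDMA_WRB_INTERVAL_4 is
+ * assumed to be a member of the qdma_wrb_interval enum.
+ *
+ *	enum qdma_wrb_interval wb_int = QDMA_WRB_INTERVAL_4;
+ *
+ *	qdma_global_writeback_interval_conf(dev_hndl, &wb_int,
+ *			QDMA_HW_ACCESS_WRITE);
+ */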
+
+/*****************************************************************************/
+/**
+ * qdma_mm_channel_conf() - Function to enable/disable the MM channel
+ *
+ * @dev_hndl:	device handle
+ * @channel:	MM channel number
+ * @is_c2h:	Queue direction. Set 1 for C2H and 0 for H2C
+ * @enable:	Enable or disable MM channel
+ *
+ * Presently, we have only 1 MM channel
+ *
+ * Return:   0   - success and < 0 - failure
+ *****************************************************************************/
+int qdma_mm_channel_conf(void *dev_hndl, uint8_t channel, uint8_t is_c2h,
+				uint8_t enable)
+{
+	uint32_t reg_addr = (is_c2h) ?  QDMA_OFFSET_C2H_MM_CONTROL :
+			QDMA_OFFSET_H2C_MM_CONTROL;
+	struct qdma_dev_attributes dev_cap;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	qdma_get_device_attributes(dev_hndl, &dev_cap);
+
+	if (dev_cap.mm_en) {
+		qdma_reg_write(dev_hndl,
+				reg_addr + (channel * QDMA_MM_CONTROL_STEP),
+				enable);
+	}
+
+	return QDMA_SUCCESS;
+}
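+
+/*
+ * Illustrative usage sketch (not part of this patch): enabling both
+ * directions of MM channel 0, the only channel present as noted above.
+ * The third argument selects the direction (1 - C2H, 0 - H2C).
+ *
+ *	qdma_mm_channel_conf(dev_hndl, 0, 1, 1);
+ *	qdma_mm_channel_conf(dev_hndl, 0, 0, 1);
+ */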
+
+/*****************************************************************************/
+/**
+ * qdma_dump_reg_info() - Function to dump a register and its bitfields
+ * into a buffer, or to the log when buf is NULL
+ *
+ * @dev_hndl:   device handle
+ * @reg_addr:   config register address to start from
+ * @num_regs:   number of registers to dump
+ * @buf :       pointer to buffer to be filled
+ * @buflen :    Length of the buffer
+ *
+ * Return:	length of data written into the buffer on success, < 0 on failure
+ *****************************************************************************/
+int qdma_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+		uint32_t num_regs, char *buf, uint32_t buflen)
+{
+	uint32_t total_num_regs = qdma_get_config_num_regs();
+	struct xreg_info *config_regs  = qdma_get_config_regs();
+	const char *bitfield_name;
+	uint32_t i = 0, num_regs_idx = 0, k = 0, j = 0,
+			bitfield = 0, lsb = 0, msb = 31;
+	int rv = 0;
+	uint32_t reg_val;
+	uint32_t data_len = 0;
+
+	if (!dev_hndl) {
+		qdma_log_error("%s: dev_handle is NULL, err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	for (i = 0; i < total_num_regs; i++) {
+		if (reg_addr == config_regs[i].addr) {
+			j = i;
+			break;
+		}
+	}
+
+	if (i == total_num_regs) {
+		qdma_log_error("%s: Register not found err:%d\n",
+				__func__, -QDMA_ERR_INV_PARAM);
+		if (buf)
+			QDMA_SNPRINTF_S(buf, buflen,
+					DEBGFS_LINE_SZ,
+					"Register not found 0x%x\n", reg_addr);
+		return -QDMA_ERR_INV_PARAM;
+	}
+
+	num_regs_idx = (j + num_regs < total_num_regs) ?
+					(j + num_regs) : total_num_regs;
+
+	for (; j < num_regs_idx ; j++) {
+		reg_val = qdma_reg_read(dev_hndl,
+				config_regs[j].addr);
+
+		if (buf) {
+			rv = QDMA_SNPRINTF_S(buf, buflen,
+						DEBGFS_LINE_SZ,
+						"\n%-40s 0x%-7x %-#10x %-10d\n",
+						config_regs[j].name,
+						config_regs[j].addr,
+						reg_val, reg_val);
+			if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+				qdma_log_error("%s: Insufficient buffer, err:%d\n",
+					__func__, -QDMA_ERR_NO_MEM);
+				return -QDMA_ERR_NO_MEM;
+			}
+			buf += rv;
+			data_len += rv;
+			buflen -= rv;
+		} else {
+			qdma_log_info("%-40s 0x%-7x %-#10x %-10d\n",
+				      config_regs[j].name,
+				      config_regs[j].addr,
+				      reg_val, reg_val);
+		}
+		for (k = 0; k < config_regs[j].num_bitfields; k++) {
+			bitfield =
+				config_regs[j].bitfields[k].field_mask;
+			bitfield_name =
+				config_regs[j].bitfields[k].field_name;
+			lsb = 0;
+			msb = 31;
+
+			while (!(BIT(lsb) & bitfield))
+				lsb++;
+
+			while (!(BIT(msb) & bitfield))
+				msb--;
+
+			if (msb != lsb) {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%2u,%2u]   %#-10x\n",
+							bitfield_name,
+							msb, lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info("%-40s [%2u,%2u]   %#-10x\n",
+						bitfield_name, msb, lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+			} else {
+				if (buf) {
+					rv = QDMA_SNPRINTF_S(buf, buflen,
+							DEBGFS_LINE_SZ,
+							"%-40s [%5u]  %#-10x\n",
+							bitfield_name,
+							lsb,
+							(reg_val & bitfield) >>
+								lsb);
+					if (rv < 0 || rv > DEBGFS_LINE_SZ) {
+						qdma_log_error("%s: Insufficient buffer, err:%d\n",
+							__func__,
+							-QDMA_ERR_NO_MEM);
+						return -QDMA_ERR_NO_MEM;
+					}
+					buf += rv;
+					data_len += rv;
+					buflen -= rv;
+				} else {
+					qdma_log_info("%-40s [%5u]   %#-10x\n",
+						bitfield_name,
+						lsb,
+						(reg_val & bitfield) >> lsb);
+				}
+			}
+		}
+	}
+
+	return data_len;
+}
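+
+/*
+ * Illustrative usage sketch (not part of this patch): decode one
+ * register and its bitfields into a caller buffer; with buf == NULL the
+ * output goes to the log instead.
+ *
+ *	char buf[DEBGFS_LINE_SZ * 8];
+ *
+ *	qdma_dump_reg_info(dev_hndl, QDMA_OFFSET_GLBL_DSC_CFG, 1,
+ *			buf, sizeof(buf));
+ */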
diff --git a/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
new file mode 100644
index 0000000000..2ed9fd1d77
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
@@ -0,0 +1,280 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_SOFT_ACCESS_H_
+#define __QDMA_SOFT_ACCESS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: QDMA common library interface definitions
+ *
+ * Header file *qdma_soft_access.h* defines data structures and function
+ * signatures exported by the QDMA common library.
+ */
+
+#include "qdma_platform.h"
+
+/**
+ * enum qdma_error_idx - qdma errors
+ */
+enum qdma_error_idx {
+	/* Descriptor errors */
+	QDMA_DSC_ERR_POISON,
+	QDMA_DSC_ERR_UR_CA,
+	QDMA_DSC_ERR_PARAM,
+	QDMA_DSC_ERR_ADDR,
+	QDMA_DSC_ERR_TAG,
+	QDMA_DSC_ERR_FLR,
+	QDMA_DSC_ERR_TIMEOUT,
+	QDMA_DSC_ERR_DAT_POISON,
+	QDMA_DSC_ERR_FLR_CANCEL,
+	QDMA_DSC_ERR_DMA,
+	QDMA_DSC_ERR_DSC,
+	QDMA_DSC_ERR_RQ_CANCEL,
+	QDMA_DSC_ERR_DBE,
+	QDMA_DSC_ERR_SBE,
+	QDMA_DSC_ERR_ALL,
+
+	/* TRQ Errors */
+	QDMA_TRQ_ERR_UNMAPPED,
+	QDMA_TRQ_ERR_QID_RANGE,
+	QDMA_TRQ_ERR_VF_ACCESS,
+	QDMA_TRQ_ERR_TCP_TIMEOUT,
+	QDMA_TRQ_ERR_ALL,
+
+	/* C2H Errors */
+	QDMA_ST_C2H_ERR_MTY_MISMATCH,
+	QDMA_ST_C2H_ERR_LEN_MISMATCH,
+	QDMA_ST_C2H_ERR_QID_MISMATCH,
+	QDMA_ST_C2H_ERR_DESC_RSP_ERR,
+	QDMA_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
+	QDMA_ST_C2H_ERR_MSI_INT_FAIL,
+	QDMA_ST_C2H_ERR_ERR_DESC_CNT,
+	QDMA_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
+	QDMA_ST_C2H_ERR_PORTID_BYP_IN_MISMATCH,
+	QDMA_ST_C2H_ERR_CMPT_INV_Q_ERR,
+	QDMA_ST_C2H_ERR_CMPT_QFULL_ERR,
+	QDMA_ST_C2H_ERR_CMPT_CIDX_ERR,
+	QDMA_ST_C2H_ERR_CMPT_PRTY_ERR,
+	QDMA_ST_C2H_ERR_ALL,
+
+	/* Fatal Errors */
+	QDMA_ST_FATAL_ERR_MTY_MISMATCH,
+	QDMA_ST_FATAL_ERR_LEN_MISMATCH,
+	QDMA_ST_FATAL_ERR_QID_MISMATCH,
+	QDMA_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_CMPT_CTXT_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_TUSER_FIFO_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
+	QDMA_ST_FATAL_ERR_WPL_DATA_PAR,
+	QDMA_ST_FATAL_ERR_ALL,
+
+	/* H2C Errors */
+	QDMA_ST_H2C_ERR_ZERO_LEN_DESC,
+	QDMA_ST_H2C_ERR_CSI_MOP,
+	QDMA_ST_H2C_ERR_NO_DMA_DSC,
+	QDMA_ST_H2C_ERR_SBE,
+	QDMA_ST_H2C_ERR_DBE,
+	QDMA_ST_H2C_ERR_ALL,
+
+	/* Single bit errors */
+	QDMA_SBE_ERR_MI_H2C0_DAT,
+	QDMA_SBE_ERR_MI_C2H0_DAT,
+	QDMA_SBE_ERR_H2C_RD_BRG_DAT,
+	QDMA_SBE_ERR_H2C_WR_BRG_DAT,
+	QDMA_SBE_ERR_C2H_RD_BRG_DAT,
+	QDMA_SBE_ERR_C2H_WR_BRG_DAT,
+	QDMA_SBE_ERR_FUNC_MAP,
+	QDMA_SBE_ERR_DSC_HW_CTXT,
+	QDMA_SBE_ERR_DSC_CRD_RCV,
+	QDMA_SBE_ERR_DSC_SW_CTXT,
+	QDMA_SBE_ERR_DSC_CPLI,
+	QDMA_SBE_ERR_DSC_CPLD,
+	QDMA_SBE_ERR_PASID_CTXT_RAM,
+	QDMA_SBE_ERR_TIMER_FIFO_RAM,
+	QDMA_SBE_ERR_PAYLOAD_FIFO_RAM,
+	QDMA_SBE_ERR_QID_FIFO_RAM,
+	QDMA_SBE_ERR_TUSER_FIFO_RAM,
+	QDMA_SBE_ERR_WRB_COAL_DATA_RAM,
+	QDMA_SBE_ERR_INT_QID2VEC_RAM,
+	QDMA_SBE_ERR_INT_CTXT_RAM,
+	QDMA_SBE_ERR_DESC_REQ_FIFO_RAM,
+	QDMA_SBE_ERR_PFCH_CTXT_RAM,
+	QDMA_SBE_ERR_WRB_CTXT_RAM,
+	QDMA_SBE_ERR_PFCH_LL_RAM,
+	QDMA_SBE_ERR_H2C_PEND_FIFO,
+	QDMA_SBE_ERR_ALL,
+
+	/* Double bit Errors */
+	QDMA_DBE_ERR_MI_H2C0_DAT,
+	QDMA_DBE_ERR_MI_C2H0_DAT,
+	QDMA_DBE_ERR_H2C_RD_BRG_DAT,
+	QDMA_DBE_ERR_H2C_WR_BRG_DAT,
+	QDMA_DBE_ERR_C2H_RD_BRG_DAT,
+	QDMA_DBE_ERR_C2H_WR_BRG_DAT,
+	QDMA_DBE_ERR_FUNC_MAP,
+	QDMA_DBE_ERR_DSC_HW_CTXT,
+	QDMA_DBE_ERR_DSC_CRD_RCV,
+	QDMA_DBE_ERR_DSC_SW_CTXT,
+	QDMA_DBE_ERR_DSC_CPLI,
+	QDMA_DBE_ERR_DSC_CPLD,
+	QDMA_DBE_ERR_PASID_CTXT_RAM,
+	QDMA_DBE_ERR_TIMER_FIFO_RAM,
+	QDMA_DBE_ERR_PAYLOAD_FIFO_RAM,
+	QDMA_DBE_ERR_QID_FIFO_RAM,
+	QDMA_DBE_ERR_TUSER_FIFO_RAM,
+	QDMA_DBE_ERR_WRB_COAL_DATA_RAM,
+	QDMA_DBE_ERR_INT_QID2VEC_RAM,
+	QDMA_DBE_ERR_INT_CTXT_RAM,
+	QDMA_DBE_ERR_DESC_REQ_FIFO_RAM,
+	QDMA_DBE_ERR_PFCH_CTXT_RAM,
+	QDMA_DBE_ERR_WRB_CTXT_RAM,
+	QDMA_DBE_ERR_PFCH_LL_RAM,
+	QDMA_DBE_ERR_H2C_PEND_FIFO,
+	QDMA_DBE_ERR_ALL,
+
+	QDMA_ERRS_ALL
+};
+
+struct qdma_hw_err_info {
+	enum qdma_error_idx idx;
+	const char *err_name;
+	uint32_t mask_reg_addr;
+	uint32_t stat_reg_addr;
+	uint32_t leaf_err_mask;
+	uint32_t global_err_mask;
+	void (*qdma_hw_err_process)(void *dev_hndl);
+};
+
+
+int qdma_set_default_global_csr(void *dev_hndl);
+
+int qdma_get_version(void *dev_hndl, uint8_t is_vf,
+		struct qdma_hw_version_info *version_info);
+
+int qdma_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+				struct qdma_descq_prefetch_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_sw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_fmap_conf(void *dev_hndl, uint16_t func_id,
+				struct qdma_fmap_cfg *config,
+				enum qdma_hw_access_type access_type);
+
+int qdma_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
+			struct qdma_descq_cmpt_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int qdma_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+				struct qdma_descq_hw_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_credit_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
+			struct qdma_descq_credit_ctxt *ctxt,
+			enum qdma_hw_access_type access_type);
+
+int qdma_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
+				struct qdma_indirect_intr_ctxt *ctxt,
+				enum qdma_hw_access_type access_type);
+
+int qdma_queue_pidx_update(void *dev_hndl, uint8_t is_vf, uint16_t qid,
+		uint8_t is_c2h, const struct qdma_q_pidx_reg_info *reg_info);
+
+int qdma_queue_cmpt_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_q_cmpt_cidx_reg_info *reg_info);
+
+int qdma_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
+		uint16_t qid, const struct qdma_intr_cidx_reg_info *reg_info);
+
+int qdma_init_ctxt_memory(void *dev_hndl);
+
+int qdma_legacy_intr_conf(void *dev_hndl, enum status_type enable);
+
+int qdma_clear_pend_legacy_intr(void *dev_hndl);
+
+int qdma_is_legacy_intr_pend(void *dev_hndl);
+
+int qdma_dump_intr_context(void *dev_hndl,
+		struct qdma_indirect_intr_ctxt *intr_ctx,
+		int ring_index,
+		char *buf, uint32_t buflen);
+
+uint32_t qdma_soft_reg_dump_buf_len(void);
+
+uint32_t qdma_get_config_num_regs(void);
+
+struct xreg_info *qdma_get_config_regs(void);
+
+int qdma_soft_context_buf_len(uint8_t st,
+		enum qdma_dev_q_type q_type, uint32_t *buflen);
+
+int qdma_soft_dump_config_regs(void *dev_hndl, uint8_t is_vf,
+		char *buf, uint32_t buflen);
+
+int qdma_soft_dump_queue_context(void *dev_hndl,
+		uint8_t st,
+		enum qdma_dev_q_type q_type,
+		struct qdma_descq_context *ctxt_data,
+		char *buf, uint32_t buflen);
+
+int qdma_soft_read_dump_queue_context(void *dev_hndl,
+				uint16_t qid_hw,
+				uint8_t st,
+				enum qdma_dev_q_type q_type,
+				char *buf, uint32_t buflen);
+
+int qdma_hw_error_process(void *dev_hndl);
+
+const char *qdma_hw_get_error_name(uint32_t err_idx);
+
+int qdma_hw_error_enable(void *dev_hndl, uint32_t err_idx);
+
+int qdma_get_device_attributes(void *dev_hndl,
+		struct qdma_dev_attributes *dev_info);
+
+int qdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
+		uint8_t func_id, uint8_t *user_bar);
+
+int qdma_soft_dump_config_reg_list(void *dev_hndl,
+		uint32_t total_regs,
+		struct qdma_reg_data *reg_list,
+		char *buf, uint32_t buflen);
+
+int qdma_read_reg_list(void *dev_hndl, uint8_t is_vf,
+		uint16_t reg_rd_group,
+		uint16_t *total_regs,
+		struct qdma_reg_data *reg_list);
+
+int qdma_global_csr_conf(void *dev_hndl, uint8_t index, uint8_t count,
+				uint32_t *csr_val,
+				enum qdma_global_csr_type csr_type,
+				enum qdma_hw_access_type access_type);
+
+int qdma_global_writeback_interval_conf(void *dev_hndl,
+				enum qdma_wrb_interval *wb_int,
+				enum qdma_hw_access_type access_type);
+
+int qdma_mm_channel_conf(void *dev_hndl, uint8_t channel, uint8_t is_c2h,
+				uint8_t enable);
+
+int qdma_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
+			uint32_t num_regs, char *buf, uint32_t buflen);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_SOFT_ACCESS_H_ */
diff --git a/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h
new file mode 100644
index 0000000000..f3602265cf
--- /dev/null
+++ b/drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_SOFT_REG_H__
+#define __QDMA_SOFT_REG_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * User-defined helper macros for masks and shifts. If the same macros are
+ * already defined in Linux kernel code, undefine them and use these
+ * user-defined macros instead.
+ */
+#ifdef CHAR_BIT
+#undef CHAR_BIT
+#endif
+#define CHAR_BIT 8
+
+#ifdef BIT
+#undef BIT
+#endif
+#define BIT(n)                  (1u << (n))
+
+#ifdef BITS_PER_BYTE
+#undef BITS_PER_BYTE
+#endif
+#define BITS_PER_BYTE           CHAR_BIT
+
+#ifdef BITS_PER_LONG
+#undef BITS_PER_LONG
+#endif
+#define BITS_PER_LONG           (sizeof(uint32_t) * BITS_PER_BYTE)
+
+#ifdef BITS_PER_LONG_LONG
+#undef BITS_PER_LONG_LONG
+#endif
+#define BITS_PER_LONG_LONG      (sizeof(uint64_t) * BITS_PER_BYTE)
+
+#ifdef GENMASK
+#undef GENMASK
+#endif
+#define GENMASK(h, l) \
+	((0xFFFFFFFF << (l)) & (0xFFFFFFFF >> (BITS_PER_LONG - 1 - (h))))
+
+#ifdef GENMASK_ULL
+#undef GENMASK_ULL
+#endif
+#define GENMASK_ULL(h, l) \
+	((0xFFFFFFFFFFFFFFFF << (l)) & \
+			(0xFFFFFFFFFFFFFFFF >> (BITS_PER_LONG_LONG - 1 - (h))))
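+
+/*
+ * For example, GENMASK(17, 16) evaluates to 0x00030000; combined with
+ * the FIELD_GET()/FIELD_SET() helpers used throughout this library, it
+ * extracts or places a 2-bit field at bits [17:16] of a register word.
+ */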
+
+
+#define DEBGFS_LINE_SZ			(81)
+
+
+#define QDMA_H2C_THROT_DATA_THRESH       0x4000
+#define QDMA_THROT_EN_DATA               1
+#define QDMA_THROT_EN_REQ                0
+#define QDMA_H2C_THROT_REQ_THRESH        0x60
+
+/*
+ * Q Context programming (indirect)
+ */
+
+#define QDMA_REG_IND_CTXT_REG_COUNT                         8
+#define QDMA_REG_IND_CTXT_WCNT_1                            1
+#define QDMA_REG_IND_CTXT_WCNT_2                            2
+#define QDMA_REG_IND_CTXT_WCNT_3                            3
+#define QDMA_REG_IND_CTXT_WCNT_4                            4
+#define QDMA_REG_IND_CTXT_WCNT_5                            5
+#define QDMA_REG_IND_CTXT_WCNT_6                            6
+#define QDMA_REG_IND_CTXT_WCNT_7                            7
+#define QDMA_REG_IND_CTXT_WCNT_8                            8
+
+/* ------------------------- QDMA_TRQ_SEL_IND (0x00800) ----------------*/
+#define QDMA_OFFSET_IND_CTXT_DATA                           0x804
+#define QDMA_OFFSET_IND_CTXT_MASK                           0x824
+#define QDMA_OFFSET_IND_CTXT_CMD                            0x844
+#define     QDMA_IND_CTXT_CMD_BUSY_MASK                     0x1
+
+/** QDMA_IND_REG_SEL_FMAP */
+#define QDMA_FMAP_CTXT_W1_QID_MAX_MASK                      GENMASK(11, 0)
+#define QDMA_FMAP_CTXT_W0_QID_MASK                          GENMASK(10, 0)
+
+/** QDMA_IND_REG_SEL_SW_C2H */
+/** QDMA_IND_REG_SEL_SW_H2C */
+#define QDMA_SW_CTXT_W4_INTR_AGGR_MASK                      BIT(11)
+#define QDMA_SW_CTXT_W4_VEC_MASK                            GENMASK(10, 0)
+#define QDMA_SW_CTXT_W3_DSC_H_MASK                          GENMASK(31, 0)
+#define QDMA_SW_CTXT_W2_DSC_L_MASK                          GENMASK(31, 0)
+#define QDMA_SW_CTXT_W1_IS_MM_MASK                          BIT(31)
+#define QDMA_SW_CTXT_W1_MRKR_DIS_MASK                       BIT(30)
+#define QDMA_SW_CTXT_W1_IRQ_REQ_MASK                        BIT(29)
+#define QDMA_SW_CTXT_W1_ERR_WB_SENT_MASK                    BIT(28)
+#define QDMA_SW_CTXT_W1_ERR_MASK                            GENMASK(27, 26)
+#define QDMA_SW_CTXT_W1_IRQ_NO_LAST_MASK                    BIT(25)
+#define QDMA_SW_CTXT_W1_PORT_ID_MASK                        GENMASK(24, 22)
+#define QDMA_SW_CTXT_W1_IRQ_EN_MASK                         BIT(21)
+#define QDMA_SW_CTXT_W1_WBK_EN_MASK                         BIT(20)
+#define QDMA_SW_CTXT_W1_MM_CHN_MASK                         BIT(19)
+#define QDMA_SW_CTXT_W1_BYP_MASK                            BIT(18)
+#define QDMA_SW_CTXT_W1_DSC_SZ_MASK                         GENMASK(17, 16)
+#define QDMA_SW_CTXT_W1_RNG_SZ_MASK                         GENMASK(15, 12)
+#define QDMA_SW_CTXT_W1_FETCH_MAX_MASK                      GENMASK(7, 5)
+#define QDMA_SW_CTXT_W1_AT_MASK                             BIT(4)
+#define QDMA_SW_CTXT_W1_WB_INT_EN_MASK                      BIT(3)
+#define QDMA_SW_CTXT_W1_WBI_CHK_MASK                        BIT(2)
+#define QDMA_SW_CTXT_W1_FCRD_EN_MASK                        BIT(1)
+#define QDMA_SW_CTXT_W1_QEN_MASK                            BIT(0)
+#define QDMA_SW_CTXT_W0_FUNC_ID_MASK                        GENMASK(24, 17)
+#define QDMA_SW_CTXT_W0_IRQ_ARM_MASK                        BIT(16)
+#define QDMA_SW_CTXT_W0_PIDX                                GENMASK(15, 0)
+
+/** QDMA_IND_REG_SEL_PFTCH */
+#define QDMA_PFTCH_CTXT_W1_VALID_MASK                       BIT(13)
+#define QDMA_PFTCH_CTXT_W1_SW_CRDT_H_MASK                   GENMASK(12, 0)
+#define QDMA_PFTCH_CTXT_W0_SW_CRDT_L_MASK                   GENMASK(31, 29)
+#define QDMA_PFTCH_CTXT_W0_Q_IN_PFETCH_MASK                 BIT(28)
+#define QDMA_PFTCH_CTXT_W0_PFETCH_EN_MASK                   BIT(27)
+#define QDMA_PFTCH_CTXT_W0_ERR_MASK                         BIT(26)
+#define QDMA_PFTCH_CTXT_W0_PORT_ID_MASK                     GENMASK(7, 5)
+#define QDMA_PFTCH_CTXT_W0_BUF_SIZE_IDX_MASK                GENMASK(4, 1)
+#define QDMA_PFTCH_CTXT_W0_BYPASS_MASK                      BIT(0)
+
+/** QDMA_IND_REG_SEL_CMPT */
+#define QDMA_COMPL_CTXT_W4_INTR_AGGR_MASK                   BIT(15)
+#define QDMA_COMPL_CTXT_W4_INTR_VEC_MASK                    GENMASK(14, 4)
+#define QDMA_COMPL_CTXT_W4_AT_MASK                          BIT(3)
+#define QDMA_COMPL_CTXT_W4_OVF_CHK_DIS_MASK                 BIT(2)
+#define QDMA_COMPL_CTXT_W4_FULL_UPDT_MASK                   BIT(1)
+#define QDMA_COMPL_CTXT_W4_TMR_RUN_MASK                     BIT(0)
+#define QDMA_COMPL_CTXT_W3_USR_TRG_PND_MASK                 BIT(31)
+#define QDMA_COMPL_CTXT_W3_ERR_MASK                         GENMASK(30, 29)
+#define QDMA_COMPL_CTXT_W3_VALID_MASK                       BIT(28)
+#define QDMA_COMPL_CTXT_W3_CIDX_MASK                        GENMASK(27, 12)
+#define QDMA_COMPL_CTXT_W3_PIDX_H_MASK                      GENMASK(11, 0)
+#define QDMA_COMPL_CTXT_W2_PIDX_L_MASK                      GENMASK(31, 28)
+#define QDMA_COMPL_CTXT_W2_DESC_SIZE_MASK                   GENMASK(27, 26)
+#define QDMA_COMPL_CTXT_W2_BADDR_64_H_MASK                  GENMASK(25, 0)
+#define QDMA_COMPL_CTXT_W1_BADDR_64_L_MASK                  GENMASK(31, 6)
+#define QDMA_COMPL_CTXT_W0_RING_SZ_MASK                     GENMASK(31, 28)
+#define QDMA_COMPL_CTXT_W0_COLOR_MASK                       BIT(27)
+#define QDMA_COMPL_CTXT_W0_INT_ST_MASK                      GENMASK(26, 25)
+#define QDMA_COMPL_CTXT_W0_TIMER_IDX_MASK                   GENMASK(24, 21)
+#define QDMA_COMPL_CTXT_W0_COUNTER_IDX_MASK                 GENMASK(20, 17)
+#define QDMA_COMPL_CTXT_W0_FNC_ID_MASK                      GENMASK(12, 5)
+#define QDMA_COMPL_CTXT_W0_TRIG_MODE_MASK                   GENMASK(4, 2)
+#define QDMA_COMPL_CTXT_W0_EN_INT_MASK                      BIT(1)
+#define QDMA_COMPL_CTXT_W0_EN_STAT_DESC_MASK                BIT(0)
+
+/** QDMA_IND_REG_SEL_HW_C2H */
+/** QDMA_IND_REG_SEL_HW_H2C */
+#define QDMA_HW_CTXT_W1_FETCH_PEND_MASK                     GENMASK(14, 11)
+#define QDMA_HW_CTXT_W1_EVENT_PEND_MASK                     BIT(10)
+#define QDMA_HW_CTXT_W1_IDL_STP_B_MASK                      BIT(9)
+#define QDMA_HW_CTXT_W1_DSC_PND_MASK                        BIT(8)
+#define QDMA_HW_CTXT_W0_CRD_USE_MASK                        GENMASK(31, 16)
+#define QDMA_HW_CTXT_W0_CIDX_MASK                           GENMASK(15, 0)
+
+/** QDMA_IND_REG_SEL_CR_C2H */
+/** QDMA_IND_REG_SEL_CR_H2C */
+#define QDMA_CR_CTXT_W0_CREDT_MASK                          GENMASK(15, 0)
+
+/** QDMA_IND_REG_SEL_INTR */
+#define QDMA_INTR_CTXT_W2_AT_MASK                           BIT(18)
+#define QDMA_INTR_CTXT_W2_PIDX_MASK                         GENMASK(17, 6)
+#define QDMA_INTR_CTXT_W2_PAGE_SIZE_MASK                    GENMASK(5, 3)
+#define QDMA_INTR_CTXT_W2_BADDR_64_MASK                     GENMASK(2, 0)
+#define QDMA_INTR_CTXT_W1_BADDR_64_MASK                     GENMASK(31, 0)
+#define QDMA_INTR_CTXT_W0_BADDR_64_MASK                     GENMASK(31, 15)
+#define QDMA_INTR_CTXT_W0_COLOR_MASK                        BIT(14)
+#define QDMA_INTR_CTXT_W0_INT_ST_MASK                       BIT(13)
+#define QDMA_INTR_CTXT_W0_VEC_ID_MASK                       GENMASK(11, 1)
+#define QDMA_INTR_CTXT_W0_VALID_MASK                        BIT(0)
+
+/* ------------------------ QDMA_TRQ_SEL_GLBL (0x00200)-------------------*/
+#define QDMA_OFFSET_GLBL_RNG_SZ                             0x204
+#define QDMA_OFFSET_GLBL_SCRATCH                            0x244
+#define QDMA_OFFSET_GLBL_ERR_STAT                           0x248
+#define QDMA_OFFSET_GLBL_ERR_MASK                           0x24C
+#define QDMA_OFFSET_GLBL_DSC_CFG                            0x250
+#define     QDMA_GLBL_DSC_CFG_WB_ACC_INT_MASK               GENMASK(2, 0)
+#define     QDMA_GLBL_DSC_CFG_MAX_DSC_FETCH_MASK            GENMASK(5, 3)
+#define QDMA_OFFSET_GLBL_DSC_ERR_STS                        0x254
+#define QDMA_OFFSET_GLBL_DSC_ERR_MSK                        0x258
+#define QDMA_OFFSET_GLBL_DSC_ERR_LOG0                       0x25C
+#define QDMA_OFFSET_GLBL_DSC_ERR_LOG1                       0x260
+#define QDMA_OFFSET_GLBL_TRQ_ERR_STS                        0x264
+#define QDMA_OFFSET_GLBL_TRQ_ERR_MSK                        0x268
+#define QDMA_OFFSET_GLBL_TRQ_ERR_LOG                        0x26C
+#define QDMA_OFFSET_GLBL_DSC_DBG_DAT0                       0x270
+#define QDMA_OFFSET_GLBL_DSC_DBG_DAT1                       0x274
+#define QDMA_OFFSET_GLBL_DSC_ERR_LOG2                       0x27C
+#define QDMA_OFFSET_GLBL_INTERRUPT_CFG                      0x2C4
+#define     QDMA_GLBL_INTR_CFG_EN_LGCY_INTR_MASK            BIT(0)
+#define     QDMA_GLBL_INTR_LGCY_INTR_PEND_MASK              BIT(1)
+
+/* ------------------------- QDMA_TRQ_SEL_C2H (0x00A00) ------------------*/
+#define QDMA_OFFSET_C2H_TIMER_CNT                           0xA00
+#define QDMA_OFFSET_C2H_CNT_TH                              0xA40
+#define QDMA_OFFSET_C2H_QID2VEC_MAP_QID                     0xA80
+#define QDMA_OFFSET_C2H_QID2VEC_MAP                         0xA84
+#define QDMA_OFFSET_C2H_STAT_S_AXIS_C2H_ACCEPTED            0xA88
+#define QDMA_OFFSET_C2H_STAT_S_AXIS_CMPT_ACCEPTED           0xA8C
+#define QDMA_OFFSET_C2H_STAT_DESC_RSP_PKT_ACCEPTED          0xA90
+#define QDMA_OFFSET_C2H_STAT_AXIS_PKG_CMP                   0xA94
+#define QDMA_OFFSET_C2H_STAT_DESC_RSP_ACCEPTED              0xA98
+#define QDMA_OFFSET_C2H_STAT_DESC_RSP_CMP                   0xA9C
+#define QDMA_OFFSET_C2H_STAT_WRQ_OUT                        0xAA0
+#define QDMA_OFFSET_C2H_STAT_WPL_REN_ACCEPTED               0xAA4
+#define QDMA_OFFSET_C2H_STAT_TOTAL_WRQ_LEN                  0xAA8
+#define QDMA_OFFSET_C2H_STAT_TOTAL_WPL_LEN                  0xAAC
+#define QDMA_OFFSET_C2H_BUF_SZ                              0xAB0
+#define QDMA_OFFSET_C2H_ERR_STAT                            0xAF0
+#define QDMA_OFFSET_C2H_ERR_MASK                            0xAF4
+#define QDMA_OFFSET_C2H_FATAL_ERR_STAT                      0xAF8
+#define QDMA_OFFSET_C2H_FATAL_ERR_MASK                      0xAFC
+#define QDMA_OFFSET_C2H_FATAL_ERR_ENABLE                    0xB00
+#define QDMA_OFFSET_C2H_ERR_INT                             0xB04
+#define QDMA_OFFSET_C2H_PFETCH_CFG                          0xB08
+#define     QDMA_C2H_EVT_QCNT_TH_MASK                       GENMASK(31, 25)
+#define     QDMA_C2H_PFCH_QCNT_MASK                         GENMASK(24, 18)
+#define     QDMA_C2H_NUM_PFCH_MASK                          GENMASK(17, 9)
+#define     QDMA_C2H_PFCH_FL_TH_MASK                        GENMASK(8, 0)
+#define QDMA_OFFSET_C2H_INT_TIMER_TICK                      0xB0C
+#define QDMA_OFFSET_C2H_STAT_DESC_RSP_DROP_ACCEPTED         0xB10
+#define QDMA_OFFSET_C2H_STAT_DESC_RSP_ERR_ACCEPTED          0xB14
+#define QDMA_OFFSET_C2H_STAT_DESC_REQ                       0xB18
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_0                0xB1C
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_1                0xB20
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_2                0xB24
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_3                0xB28
+#define QDMA_OFFSET_C2H_DBG_PFCH_ERR_CTXT                   0xB2C
+#define QDMA_OFFSET_C2H_FIRST_ERR_QID                       0xB30
+#define QDMA_OFFSET_C2H_STAT_NUM_CMPT_IN                    0xB34
+#define QDMA_OFFSET_C2H_STAT_NUM_CMPT_OUT                   0xB38
+#define QDMA_OFFSET_C2H_STAT_NUM_CMPT_DRP                   0xB3C
+#define QDMA_OFFSET_C2H_STAT_NUM_STAT_DESC_OUT              0xB40
+#define QDMA_OFFSET_C2H_STAT_NUM_DSC_CRDT_SENT              0xB44
+#define QDMA_OFFSET_C2H_STAT_NUM_FCH_DSC_RCVD               0xB48
+#define QDMA_OFFSET_C2H_STAT_NUM_BYP_DSC_RCVD               0xB4C
+#define QDMA_OFFSET_C2H_WRB_COAL_CFG                        0xB50
+#define     QDMA_C2H_MAX_BUF_SZ_MASK                        GENMASK(31, 26)
+#define     QDMA_C2H_TICK_VAL_MASK                          GENMASK(25, 14)
+#define     QDMA_C2H_TICK_CNT_MASK                          GENMASK(13, 2)
+#define     QDMA_C2H_SET_GLB_FLUSH_MASK                     BIT(1)
+#define     QDMA_C2H_DONE_GLB_FLUSH_MASK                    BIT(0)
+#define QDMA_OFFSET_C2H_INTR_H2C_REQ                        0xB54
+#define QDMA_OFFSET_C2H_INTR_C2H_MM_REQ                     0xB58
+#define QDMA_OFFSET_C2H_INTR_ERR_INT_REQ                    0xB5C
+#define QDMA_OFFSET_C2H_INTR_C2H_ST_REQ                     0xB60
+#define QDMA_OFFSET_C2H_INTR_H2C_ERR_C2H_MM_MSIX_ACK        0xB64
+#define QDMA_OFFSET_C2H_INTR_H2C_ERR_C2H_MM_MSIX_FAIL       0xB68
+#define QDMA_OFFSET_C2H_INTR_H2C_ERR_C2H_MM_MSIX_NO_MSIX    0xB6C
+#define QDMA_OFFSET_C2H_INTR_H2C_ERR_C2H_MM_CTXT_INVAL      0xB70
+#define QDMA_OFFSET_C2H_INTR_C2H_ST_MSIX_ACK                0xB74
+#define QDMA_OFFSET_C2H_INTR_C2H_ST_MSIX_FAIL               0xB78
+#define QDMA_OFFSET_C2H_INTR_C2H_ST_NO_MSIX                 0xB7C
+#define QDMA_OFFSET_C2H_INTR_C2H_ST_CTXT_INVAL              0xB80
+#define QDMA_OFFSET_C2H_STAT_WR_CMP                         0xB84
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_4                0xB88
+#define QDMA_OFFSET_C2H_STAT_DEBUG_DMA_ENG_5                0xB8C
+#define QDMA_OFFSET_C2H_DBG_PFCH_QID                        0xB90
+#define QDMA_OFFSET_C2H_DBG_PFCH                            0xB94
+#define QDMA_OFFSET_C2H_INT_DEBUG                           0xB98
+#define QDMA_OFFSET_C2H_STAT_IMM_ACCEPTED                   0xB9C
+#define QDMA_OFFSET_C2H_STAT_MARKER_ACCEPTED                0xBA0
+#define QDMA_OFFSET_C2H_STAT_DISABLE_CMP_ACCEPTED           0xBA4
+#define QDMA_OFFSET_C2H_PAYLOAD_FIFO_CRDT_CNT               0xBA8
+#define QDMA_OFFSET_C2H_PFETCH_CACHE_DEPTH                  0xBE0
+#define QDMA_OFFSET_C2H_CMPT_COAL_BUF_DEPTH                 0xBE4
+
+/* ------------------------- QDMA_TRQ_SEL_H2C (0x00E00) ------------------*/
+#define QDMA_OFFSET_H2C_ERR_STAT                            0xE00
+#define QDMA_OFFSET_H2C_ERR_MASK                            0xE04
+#define QDMA_OFFSET_H2C_FIRST_ERR_QID                       0xE08
+#define QDMA_OFFSET_H2C_DBG_REG0                            0xE0C
+#define QDMA_OFFSET_H2C_DBG_REG1                            0xE10
+#define QDMA_OFFSET_H2C_DBG_REG2                            0xE14
+#define QDMA_OFFSET_H2C_DBG_REG3                            0xE18
+#define QDMA_OFFSET_H2C_DBG_REG4                            0xE1C
+#define QDMA_OFFSET_H2C_FATAL_ERR_EN                        0xE20
+#define QDMA_OFFSET_H2C_REQ_THROT                           0xE24
+#define     QDMA_H2C_REQ_THROT_EN_REQ_MASK                  BIT(31)
+#define     QDMA_H2C_REQ_THRESH_MASK                        GENMASK(25, 17)
+#define     QDMA_H2C_REQ_THROT_EN_DATA_MASK                 BIT(16)
+#define     QDMA_H2C_DATA_THRESH_MASK                       GENMASK(15, 0)
+
+/* ------------------------- QDMA_TRQ_SEL_H2C_MM (0x1200) ----------------*/
+#define QDMA_OFFSET_H2C_MM_CONTROL                          0x1204
+#define QDMA_OFFSET_H2C_MM_CONTROL_W1S                      0x1208
+#define QDMA_OFFSET_H2C_MM_CONTROL_W1C                      0x120C
+#define QDMA_OFFSET_H2C_MM_STATUS                           0x1240
+#define QDMA_OFFSET_H2C_MM_STATUS_RC                        0x1244
+#define QDMA_OFFSET_H2C_MM_COMPLETED_DESC_COUNT             0x1248
+#define QDMA_OFFSET_H2C_MM_ERR_CODE_EN_MASK                 0x1254
+#define QDMA_OFFSET_H2C_MM_ERR_CODE                         0x1258
+#define QDMA_OFFSET_H2C_MM_ERR_INFO                         0x125C
+#define QDMA_OFFSET_H2C_MM_PERF_MON_CONTROL                 0x12C0
+#define QDMA_OFFSET_H2C_MM_PERF_MON_CYCLE_COUNT_0           0x12C4
+#define QDMA_OFFSET_H2C_MM_PERF_MON_CYCLE_COUNT_1           0x12C8
+#define QDMA_OFFSET_H2C_MM_PERF_MON_DATA_COUNT_0            0x12CC
+#define QDMA_OFFSET_H2C_MM_PERF_MON_DATA_COUNT_1            0x12D0
+#define QDMA_OFFSET_H2C_MM_DEBUG                            0x12E8
+
+/* ------------------------- QDMA_TRQ_SEL_C2H_MM (0x1000) ----------------*/
+#define QDMA_OFFSET_C2H_MM_CONTROL                          0x1004
+#define QDMA_OFFSET_C2H_MM_CONTROL_W1S                      0x1008
+#define QDMA_OFFSET_C2H_MM_CONTROL_W1C                      0x100C
+#define QDMA_OFFSET_C2H_MM_STATUS                           0x1040
+#define QDMA_OFFSET_C2H_MM_STATUS_RC                        0x1044
+#define QDMA_OFFSET_C2H_MM_COMPLETED_DESC_COUNT             0x1048
+#define QDMA_OFFSET_C2H_MM_ERR_CODE_EN_MASK                 0x1054
+#define QDMA_OFFSET_C2H_MM_ERR_CODE                         0x1058
+#define QDMA_OFFSET_C2H_MM_ERR_INFO                         0x105C
+#define QDMA_OFFSET_C2H_MM_PERF_MON_CONTROL                 0x10C0
+#define QDMA_OFFSET_C2H_MM_PERF_MON_CYCLE_COUNT_0           0x10C4
+#define QDMA_OFFSET_C2H_MM_PERF_MON_CYCLE_COUNT_1           0x10C8
+#define QDMA_OFFSET_C2H_MM_PERF_MON_DATA_COUNT_0            0x10CC
+#define QDMA_OFFSET_C2H_MM_PERF_MON_DATA_COUNT_1            0x10D0
+#define QDMA_OFFSET_C2H_MM_DEBUG                            0x10E8
+
+/* ------------------------- QDMA_TRQ_SEL_GLBL1 (0x0) -----------------*/
+#define QDMA_OFFSET_CONFIG_BLOCK_ID                         0x0
+#define     QDMA_CONFIG_BLOCK_ID_MASK                       GENMASK(31, 16)
+
+/* ------------------------- QDMA_TRQ_SEL_GLBL2 (0x00100) ----------------*/
+#define QDMA_OFFSET_GLBL2_ID                                0x100
+#define QDMA_OFFSET_GLBL2_PF_BARLITE_INT                    0x104
+#define     QDMA_GLBL2_PF3_BAR_MAP_MASK                     GENMASK(23, 18)
+#define     QDMA_GLBL2_PF2_BAR_MAP_MASK                     GENMASK(17, 12)
+#define     QDMA_GLBL2_PF1_BAR_MAP_MASK                     GENMASK(11, 6)
+#define     QDMA_GLBL2_PF0_BAR_MAP_MASK                     GENMASK(5, 0)
+#define QDMA_OFFSET_GLBL2_PF_VF_BARLITE_INT                 0x108
+#define QDMA_OFFSET_GLBL2_PF_BARLITE_EXT                    0x10C
+#define QDMA_OFFSET_GLBL2_PF_VF_BARLITE_EXT                 0x110
+#define QDMA_OFFSET_GLBL2_CHANNEL_INST                      0x114
+#define QDMA_OFFSET_GLBL2_CHANNEL_MDMA                      0x118
+#define     QDMA_GLBL2_ST_C2H_MASK                          BIT(17)
+#define     QDMA_GLBL2_ST_H2C_MASK                          BIT(16)
+#define     QDMA_GLBL2_MM_C2H_MASK                          BIT(8)
+#define     QDMA_GLBL2_MM_H2C_MASK                          BIT(0)
+#define QDMA_OFFSET_GLBL2_CHANNEL_STRM                      0x11C
+#define QDMA_OFFSET_GLBL2_CHANNEL_QDMA_CAP                  0x120
+#define     QDMA_GLBL2_MULTQ_MAX_MASK                       GENMASK(11, 0)
+#define QDMA_OFFSET_GLBL2_CHANNEL_PASID_CAP                 0x128
+#define QDMA_OFFSET_GLBL2_CHANNEL_FUNC_RET                  0x12C
+#define QDMA_OFFSET_GLBL2_SYSTEM_ID                         0x130
+#define QDMA_OFFSET_GLBL2_MISC_CAP                          0x134
+
+#define     QDMA_GLBL2_DEVICE_ID_MASK                       GENMASK(31, 28)
+#define     QDMA_GLBL2_VIVADO_RELEASE_MASK                  GENMASK(27, 24)
+#define     QDMA_GLBL2_VERSAL_IP_MASK                       GENMASK(23, 20)
+#define     QDMA_GLBL2_RTL_VERSION_MASK                     GENMASK(19, 16)
+#define QDMA_OFFSET_GLBL2_DBG_PCIE_RQ0                      0x1B8
+#define QDMA_OFFSET_GLBL2_DBG_PCIE_RQ1                      0x1BC
+#define QDMA_OFFSET_GLBL2_DBG_AXIMM_WR0                     0x1C0
+#define QDMA_OFFSET_GLBL2_DBG_AXIMM_WR1                     0x1C4
+#define QDMA_OFFSET_GLBL2_DBG_AXIMM_RD0                     0x1C8
+#define QDMA_OFFSET_GLBL2_DBG_AXIMM_RD1                     0x1CC
+
+/* used for VF bars identification */
+#define QDMA_OFFSET_VF_USER_BAR_ID                          0x1018
+#define QDMA_OFFSET_VF_CONFIG_BAR_ID                        0x1014
+
+/* FLR programming */
+#define QDMA_OFFSET_VF_REG_FLR_STATUS                       0x1100
+#define QDMA_OFFSET_PF_REG_FLR_STATUS                       0x2500
+#define     QDMA_FLR_STATUS_MASK                            0x1
+
+/* VF qdma version */
+#define QDMA_OFFSET_VF_VERSION                              0x1014
+#define QDMA_OFFSET_PF_VERSION                              0x2414
+#define     QDMA_GLBL2_VF_UNIQUE_ID_MASK                    GENMASK(31, 16)
+#define     QDMA_GLBL2_VF_DEVICE_ID_MASK                    GENMASK(15, 12)
+#define     QDMA_GLBL2_VF_VIVADO_RELEASE_MASK               GENMASK(11, 8)
+#define     QDMA_GLBL2_VF_VERSAL_IP_MASK                    GENMASK(7, 4)
+#define     QDMA_GLBL2_VF_RTL_VERSION_MASK                  GENMASK(3, 0)
+
+/* ------------------------- QDMA_TRQ_SEL_QUEUE_PF (0x18000) ----------------*/
+
+#define QDMA_OFFSET_DMAP_SEL_INT_CIDX                       0x18000
+#define QDMA_OFFSET_DMAP_SEL_H2C_DSC_PIDX                   0x18004
+#define QDMA_OFFSET_DMAP_SEL_C2H_DSC_PIDX                   0x18008
+#define QDMA_OFFSET_DMAP_SEL_CMPT_CIDX                      0x1800C
+
+#define QDMA_OFFSET_VF_DMAP_SEL_INT_CIDX                    0x3000
+#define QDMA_OFFSET_VF_DMAP_SEL_H2C_DSC_PIDX                0x3004
+#define QDMA_OFFSET_VF_DMAP_SEL_C2H_DSC_PIDX                0x3008
+#define QDMA_OFFSET_VF_DMAP_SEL_CMPT_CIDX                   0x300C
+
+#define     QDMA_DMA_SEL_INT_SW_CIDX_MASK                   GENMASK(15, 0)
+#define     QDMA_DMA_SEL_INT_RING_IDX_MASK                  GENMASK(23, 16)
+#define     QDMA_DMA_SEL_DESC_PIDX_MASK                     GENMASK(15, 0)
+#define     QDMA_DMA_SEL_IRQ_EN_MASK                        BIT(16)
+#define     QDMA_DMAP_SEL_CMPT_IRQ_EN_MASK                  BIT(28)
+#define     QDMA_DMAP_SEL_CMPT_STS_DESC_EN_MASK             BIT(27)
+#define     QDMA_DMAP_SEL_CMPT_TRG_MODE_MASK                GENMASK(26, 24)
+#define     QDMA_DMAP_SEL_CMPT_TMR_CNT_MASK                 GENMASK(23, 20)
+#define     QDMA_DMAP_SEL_CMPT_CNT_THRESH_MASK              GENMASK(19, 16)
+#define     QDMA_DMAP_SEL_CMPT_WRB_CIDX_MASK                GENMASK(15, 0)
+
+/* ------------------------- Hardware Errors ------------------------------ */
+#define TOTAL_LEAF_ERROR_AGGREGATORS                        7
+
+#define QDMA_OFFSET_GLBL_ERR_INT                            0xB04
+#define     QDMA_GLBL_ERR_FUNC_MASK                         GENMASK(7, 0)
+#define     QDMA_GLBL_ERR_VEC_MASK                          GENMASK(22, 12)
+#define     QDMA_GLBL_ERR_ARM_MASK                          BIT(24)
+
+#define QDMA_OFFSET_GLBL_ERR_STAT                           0x248
+#define QDMA_OFFSET_GLBL_ERR_MASK                           0x24C
+#define     QDMA_GLBL_ERR_RAM_SBE_MASK                      BIT(0)
+#define     QDMA_GLBL_ERR_RAM_DBE_MASK                      BIT(1)
+#define     QDMA_GLBL_ERR_DSC_MASK                          BIT(2)
+#define     QDMA_GLBL_ERR_TRQ_MASK                          BIT(3)
+#define     QDMA_GLBL_ERR_ST_C2H_MASK                       BIT(8)
+#define     QDMA_GLBL_ERR_ST_H2C_MASK                       BIT(11)
+
+#define QDMA_OFFSET_C2H_ERR_STAT                            0xAF0
+#define QDMA_OFFSET_C2H_ERR_MASK                            0xAF4
+#define     QDMA_C2H_ERR_MTY_MISMATCH_MASK                  BIT(0)
+#define     QDMA_C2H_ERR_LEN_MISMATCH_MASK                  BIT(1)
+#define     QDMA_C2H_ERR_QID_MISMATCH_MASK                  BIT(3)
+#define     QDMA_C2H_ERR_DESC_RSP_ERR_MASK                  BIT(4)
+#define     QDMA_C2H_ERR_ENG_WPL_DATA_PAR_ERR_MASK          BIT(6)
+#define     QDMA_C2H_ERR_MSI_INT_FAIL_MASK                  BIT(7)
+#define     QDMA_C2H_ERR_ERR_DESC_CNT_MASK                  BIT(9)
+#define     QDMA_C2H_ERR_PORTID_CTXT_MISMATCH_MASK          BIT(10)
+#define     QDMA_C2H_ERR_PORTID_BYP_IN_MISMATCH_MASK        BIT(11)
+#define     QDMA_C2H_ERR_CMPT_INV_Q_ERR_MASK                BIT(12)
+#define     QDMA_C2H_ERR_CMPT_QFULL_ERR_MASK                BIT(13)
+#define     QDMA_C2H_ERR_CMPT_CIDX_ERR_MASK                 BIT(14)
+#define     QDMA_C2H_ERR_CMPT_PRTY_ERR_MASK                 BIT(15)
+#define     QDMA_C2H_ERR_ALL_MASK                           0xFEDB
+
+#define QDMA_OFFSET_C2H_FATAL_ERR_STAT                      0xAF8
+#define QDMA_OFFSET_C2H_FATAL_ERR_MASK                      0xAFC
+#define     QDMA_C2H_FATAL_ERR_MTY_MISMATCH_MASK            BIT(0)
+#define     QDMA_C2H_FATAL_ERR_LEN_MISMATCH_MASK            BIT(1)
+#define     QDMA_C2H_FATAL_ERR_QID_MISMATCH_MASK            BIT(3)
+#define     QDMA_C2H_FATAL_ERR_TIMER_FIFO_RAM_RDBE_MASK     BIT(4)
+#define     QDMA_C2H_FATAL_ERR_PFCH_II_RAM_RDBE_MASK        BIT(8)
+#define     QDMA_C2H_FATAL_ERR_CMPT_CTXT_RAM_RDBE_MASK      BIT(9)
+#define     QDMA_C2H_FATAL_ERR_PFCH_CTXT_RAM_RDBE_MASK      BIT(10)
+#define     QDMA_C2H_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE_MASK  BIT(11)
+#define     QDMA_C2H_FATAL_ERR_INT_CTXT_RAM_RDBE_MASK       BIT(12)
+#define     QDMA_C2H_FATAL_ERR_CMPT_COAL_DATA_RAM_RDBE_MASK BIT(14)
+#define     QDMA_C2H_FATAL_ERR_TUSER_FIFO_RAM_RDBE_MASK     BIT(15)
+#define     QDMA_C2H_FATAL_ERR_QID_FIFO_RAM_RDBE_MASK       BIT(16)
+#define     QDMA_C2H_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE_MASK   BIT(17)
+#define     QDMA_C2H_FATAL_ERR_WPL_DATA_PAR_MASK            BIT(18)
+#define     QDMA_C2H_FATAL_ERR_ALL_MASK                     0x7DF1B
+
+#define QDMA_OFFSET_H2C_ERR_STAT                            0xE00
+#define QDMA_OFFSET_H2C_ERR_MASK                            0xE04
+#define     QDMA_H2C_ERR_ZERO_LEN_DESC_MASK                 BIT(0)
+#define     QDMA_H2C_ERR_CSI_MOP_MASK                       BIT(1)
+#define     QDMA_H2C_ERR_NO_DMA_DSC_MASK                    BIT(2)
+#define     QDMA_H2C_ERR_SBE_MASK                           BIT(3)
+#define     QDMA_H2C_ERR_DBE_MASK                           BIT(4)
+#define     QDMA_H2C_ERR_ALL_MASK                           0x1F
+
+#define QDMA_OFFSET_GLBL_DSC_ERR_STAT                       0x254
+#define QDMA_OFFSET_GLBL_DSC_ERR_MASK                       0x258
+#define     QDMA_GLBL_DSC_ERR_POISON_MASK                   BIT(0)
+#define     QDMA_GLBL_DSC_ERR_UR_CA_MASK                    BIT(1)
+#define     QDMA_GLBL_DSC_ERR_PARAM_MASK                    BIT(2)
+#define     QDMA_GLBL_DSC_ERR_ADDR_MASK                     BIT(3)
+#define     QDMA_GLBL_DSC_ERR_TAG_MASK                      BIT(4)
+#define     QDMA_GLBL_DSC_ERR_FLR_MASK                      BIT(5)
+#define     QDMA_GLBL_DSC_ERR_TIMEOUT_MASK                  BIT(9)
+#define     QDMA_GLBL_DSC_ERR_DAT_POISON_MASK               BIT(16)
+#define     QDMA_GLBL_DSC_ERR_FLR_CANCEL_MASK               BIT(19)
+#define     QDMA_GLBL_DSC_ERR_DMA_MASK                      BIT(20)
+#define     QDMA_GLBL_DSC_ERR_DSC_MASK                      BIT(21)
+#define     QDMA_GLBL_DSC_ERR_RQ_CANCEL_MASK                BIT(22)
+#define     QDMA_GLBL_DSC_ERR_DBE_MASK                      BIT(23)
+#define     QDMA_GLBL_DSC_ERR_SBE_MASK                      BIT(24)
+#define     QDMA_GLBL_DSC_ERR_ALL_MASK                      0x1F9023F
+
+#define QDMA_OFFSET_GLBL_TRQ_ERR_STAT                       0x264
+#define QDMA_OFFSET_GLBL_TRQ_ERR_MASK                       0x268
+#define     QDMA_GLBL_TRQ_ERR_UNMAPPED_MASK                 BIT(0)
+#define     QDMA_GLBL_TRQ_ERR_QID_RANGE_MASK                BIT(1)
+#define     QDMA_GLBL_TRQ_ERR_VF_ACCESS_MASK                BIT(2)
+#define     QDMA_GLBL_TRQ_ERR_TCP_TIMEOUT_MASK              BIT(3)
+#define     QDMA_GLBL_TRQ_ERR_ALL_MASK                      0xF
+
+#define QDMA_OFFSET_RAM_SBE_STAT                            0xF4
+#define QDMA_OFFSET_RAM_SBE_MASK                            0xF0
+#define     QDMA_SBE_ERR_MI_H2C0_DAT_MASK                   BIT(0)
+#define     QDMA_SBE_ERR_MI_C2H0_DAT_MASK                   BIT(4)
+#define     QDMA_SBE_ERR_H2C_RD_BRG_DAT_MASK                BIT(9)
+#define     QDMA_SBE_ERR_H2C_WR_BRG_DAT_MASK                BIT(10)
+#define     QDMA_SBE_ERR_C2H_RD_BRG_DAT_MASK                BIT(11)
+#define     QDMA_SBE_ERR_C2H_WR_BRG_DAT_MASK                BIT(12)
+#define     QDMA_SBE_ERR_FUNC_MAP_MASK                      BIT(13)
+#define     QDMA_SBE_ERR_DSC_HW_CTXT_MASK                   BIT(14)
+#define     QDMA_SBE_ERR_DSC_CRD_RCV_MASK                   BIT(15)
+#define     QDMA_SBE_ERR_DSC_SW_CTXT_MASK                   BIT(16)
+#define     QDMA_SBE_ERR_DSC_CPLI_MASK                      BIT(17)
+#define     QDMA_SBE_ERR_DSC_CPLD_MASK                      BIT(18)
+#define     QDMA_SBE_ERR_PASID_CTXT_RAM_MASK                BIT(19)
+#define     QDMA_SBE_ERR_TIMER_FIFO_RAM_MASK                BIT(20)
+#define     QDMA_SBE_ERR_PAYLOAD_FIFO_RAM_MASK              BIT(21)
+#define     QDMA_SBE_ERR_QID_FIFO_RAM_MASK                  BIT(22)
+#define     QDMA_SBE_ERR_TUSER_FIFO_RAM_MASK                BIT(23)
+#define     QDMA_SBE_ERR_WRB_COAL_DATA_RAM_MASK             BIT(24)
+#define     QDMA_SBE_ERR_INT_QID2VEC_RAM_MASK               BIT(25)
+#define     QDMA_SBE_ERR_INT_CTXT_RAM_MASK                  BIT(26)
+#define     QDMA_SBE_ERR_DESC_REQ_FIFO_RAM_MASK             BIT(27)
+#define     QDMA_SBE_ERR_PFCH_CTXT_RAM_MASK                 BIT(28)
+#define     QDMA_SBE_ERR_WRB_CTXT_RAM_MASK                  BIT(29)
+#define     QDMA_SBE_ERR_PFCH_LL_RAM_MASK                   BIT(30)
+#define     QDMA_SBE_ERR_H2C_PEND_FIFO_MASK                 BIT(31)
+#define     QDMA_SBE_ERR_ALL_MASK                           0xFFFFFF11
+
+#define QDMA_OFFSET_RAM_DBE_STAT                            0xFC
+#define QDMA_OFFSET_RAM_DBE_MASK                            0xF8
+#define     QDMA_DBE_ERR_MI_H2C0_DAT_MASK                   BIT(0)
+#define     QDMA_DBE_ERR_MI_C2H0_DAT_MASK                   BIT(4)
+#define     QDMA_DBE_ERR_H2C_RD_BRG_DAT_MASK                BIT(9)
+#define     QDMA_DBE_ERR_H2C_WR_BRG_DAT_MASK                BIT(10)
+#define     QDMA_DBE_ERR_C2H_RD_BRG_DAT_MASK                BIT(11)
+#define     QDMA_DBE_ERR_C2H_WR_BRG_DAT_MASK                BIT(12)
+#define     QDMA_DBE_ERR_FUNC_MAP_MASK                      BIT(13)
+#define     QDMA_DBE_ERR_DSC_HW_CTXT_MASK                   BIT(14)
+#define     QDMA_DBE_ERR_DSC_CRD_RCV_MASK                   BIT(15)
+#define     QDMA_DBE_ERR_DSC_SW_CTXT_MASK                   BIT(16)
+#define     QDMA_DBE_ERR_DSC_CPLI_MASK                      BIT(17)
+#define     QDMA_DBE_ERR_DSC_CPLD_MASK                      BIT(18)
+#define     QDMA_DBE_ERR_PASID_CTXT_RAM_MASK                BIT(19)
+#define     QDMA_DBE_ERR_TIMER_FIFO_RAM_MASK                BIT(20)
+#define     QDMA_DBE_ERR_PAYLOAD_FIFO_RAM_MASK              BIT(21)
+#define     QDMA_DBE_ERR_QID_FIFO_RAM_MASK                  BIT(22)
+#define     QDMA_DBE_ERR_TUSER_FIFO_RAM_MASK                BIT(23)
+#define     QDMA_DBE_ERR_WRB_COAL_DATA_RAM_MASK             BIT(24)
+#define     QDMA_DBE_ERR_INT_QID2VEC_RAM_MASK               BIT(25)
+#define     QDMA_DBE_ERR_INT_CTXT_RAM_MASK                  BIT(26)
+#define     QDMA_DBE_ERR_DESC_REQ_FIFO_RAM_MASK             BIT(27)
+#define     QDMA_DBE_ERR_PFCH_CTXT_RAM_MASK                 BIT(28)
+#define     QDMA_DBE_ERR_WRB_CTXT_RAM_MASK                  BIT(29)
+#define     QDMA_DBE_ERR_PFCH_LL_RAM_MASK                   BIT(30)
+#define     QDMA_DBE_ERR_H2C_PEND_FIFO_MASK                 BIT(31)
+#define     QDMA_DBE_ERR_ALL_MASK                           0xFFFFFF11
+
+#define QDMA_OFFSET_MBOX_BASE_VF                            0x1000
+#define QDMA_OFFSET_MBOX_BASE_PF                            0x2400
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __QDMA_SOFT_REG_H__ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
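
For readers new to this register map: each *_MASK above selects a bit-field
inside a 32-bit register word, and a field value is recovered by AND-ing with
the mask and shifting right by the mask's lowest set bit. The standalone
sketch below illustrates that pattern; the field_get() helper and the sample
register value are illustrative assumptions only (the access library ships
its own field helpers), not part of the patch.

    #include <stdint.h>
    #include <stdio.h>

    /* Same convention as qdma_soft_reg.h above */
    #define GENMASK(h, l)  (((~0u) << (l)) & (~0u >> (31 - (h))))

    #define QDMA_GLBL2_RTL_VERSION_MASK     GENMASK(19, 16)
    #define QDMA_GLBL2_VIVADO_RELEASE_MASK  GENMASK(27, 24)

    /* Extract the field selected by 'mask' from a register word */
    static uint32_t field_get(uint32_t mask, uint32_t reg)
    {
        return (reg & mask) >> __builtin_ctz(mask);
    }

    int main(void)
    {
        uint32_t misc_cap = 0x01230000; /* made-up GLBL2_MISC_CAP read */

        printf("rtl_version    = %u\n",
               field_get(QDMA_GLBL2_RTL_VERSION_MASK, misc_cap));
        printf("vivado_release = %u\n",
               field_get(QDMA_GLBL2_VIVADO_RELEASE_MASK, misc_cap));
        return 0;
    }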

* [RFC PATCH 07/29] net/qdma: add supported qdma version
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (5 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 06/29] net/qdma: add qdma access library Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 08/29] net/qdma: qdma hardware initialization Aman Kumar
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

The qdma-20.2 IP release is supported with the latest firmware.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_version.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_version.h

diff --git a/drivers/net/qdma/qdma_version.h b/drivers/net/qdma/qdma_version.h
new file mode 100644
index 0000000000..0cc38321f7
--- /dev/null
+++ b/drivers/net/qdma/qdma_version.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __QDMA_VERSION_H__
+#define __QDMA_VERSION_H__
+
+#define qdma_stringify1(x...)	#x
+#define qdma_stringify(x...)	qdma_stringify1(x)
+
+#define QDMA_PMD_MAJOR		2020
+#define QDMA_PMD_MINOR		2
+#define QDMA_PMD_PATCHLEVEL	1
+
+#define QDMA_PMD_VERSION      \
+	qdma_stringify(QDMA_PMD_MAJOR) "." \
+	qdma_stringify(QDMA_PMD_MINOR) "." \
+	qdma_stringify(QDMA_PMD_PATCHLEVEL)
+
+#define QDMA_PMD_VERSION_NUMBER  \
+	((QDMA_PMD_MAJOR) * 1000 + (QDMA_PMD_MINOR) * 100 + QDMA_PMD_PATCHLEVEL)
+
+#endif /* ifndef __QDMA_VERSION_H__ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
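
Expanding the macros by hand with the values defined in this patch gives:

    QDMA_PMD_VERSION
      -> qdma_stringify(2020) "." qdma_stringify(2) "." qdma_stringify(1)
      -> "2020" "." "2" "." "1"      (adjacent string literals concatenate)
      -> "2020.2.1"

    QDMA_PMD_VERSION_NUMBER
      -> 2020 * 1000 + 2 * 100 + 1
      -> 2020201

The two-level qdma_stringify()/qdma_stringify1() indirection forces
QDMA_PMD_MAJOR and friends to be macro-expanded before the # operator
stringizes them; a single-level #x would produce "QDMA_PMD_MAJOR" instead.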

* [RFC PATCH 08/29] net/qdma: qdma hardware initialization
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (6 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 07/29] net/qdma: add supported qdma version Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:51 ` [RFC PATCH 09/29] net/qdma: define device modes and data structure Aman Kumar
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add hardware initialization as per the qdma_access library.
Fetch the hardware version and other details and bind to the
proper qdma access layer.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h        |  13 +-
 drivers/net/qdma/qdma_common.c |  52 +++++++-
 drivers/net/qdma/qdma_ethdev.c | 225 ++++++++++++++++++++++++++++++++-
 3 files changed, 286 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 4bc61d2a08..db59cddd25 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -15,6 +15,8 @@
 #include <rte_byteorder.h>
 #include <rte_memzone.h>
 #include <linux/pci.h>
+
+#include "qdma_resource_mgmt.h"
 #include "qdma_log.h"
 
 #define QDMA_NUM_BARS          (6)
@@ -23,6 +25,8 @@
 
 #define QDMA_FUNC_ID_INVALID    0xFFFF
 
+#define DEFAULT_QUEUE_BASE	(0)
+
 #define DEFAULT_TIMER_CNT_TRIG_MODE_TIMER	(5)
 
 enum dma_data_direction {
@@ -186,6 +190,9 @@ struct qdma_pci_dev {
 	 */
 	uint32_t dma_device_index;
 
+	/* Device capabilities */
+	struct qdma_dev_attributes dev_cap;
+
 	uint8_t cmpt_desc_len;
 	uint8_t c2h_bypass_mode;
 	uint8_t h2c_bypass_mode;
@@ -210,6 +217,9 @@ struct qdma_pci_dev {
 	struct queue_info *q_info;
 	uint8_t init_q_range;
 
+	/* Pointer to QDMA access layer function pointers */
+	struct qdma_hw_access *hw_access;
+
 	struct qdma_vf_info *vfinfo;
 	uint8_t vf_online_count;
 
@@ -218,8 +228,9 @@ struct qdma_pci_dev {
 };
 
 int qdma_identify_bars(struct rte_eth_dev *dev);
+int qdma_get_hw_version(struct rte_eth_dev *dev);
 
 int qdma_check_kvargs(struct rte_devargs *devargs,
 			struct qdma_pci_dev *qdma_dev);
-
+void qdma_check_errors(void *arg);
 #endif /* ifndef __QDMA_H__ */
diff --git a/drivers/net/qdma/qdma_common.c b/drivers/net/qdma/qdma_common.c
index c0c5162f0f..0ea920f255 100644
--- a/drivers/net/qdma/qdma_common.c
+++ b/drivers/net/qdma/qdma_common.c
@@ -10,6 +10,7 @@
 #include <rte_cycles.h>
 #include <rte_kvargs.h>
 #include "qdma.h"
+#include "qdma_access_common.h"
 
 #include <fcntl.h>
 #include <unistd.h>
@@ -199,7 +200,8 @@ int qdma_check_kvargs(struct rte_devargs *devargs,
 
 int qdma_identify_bars(struct rte_eth_dev *dev)
 {
-	int bar_len, i;
+	int bar_len, i, ret;
+	uint8_t  usr_bar;
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct qdma_pci_dev *dma_priv;
 
@@ -213,6 +215,24 @@ int qdma_identify_bars(struct rte_eth_dev *dev)
 		return -1;
 	}
 
+	/* Find AXI Master Lite(user bar) */
+	ret = dma_priv->hw_access->qdma_get_user_bar(dev,
+			dma_priv->is_vf, dma_priv->func_id, &usr_bar);
+	if (ret != QDMA_SUCCESS ||
+	    pci_dev->mem_resource[usr_bar].len == 0) {
+		if (dma_priv->ip_type == QDMA_VERSAL_HARD_IP) {
+			if (pci_dev->mem_resource[1].len == 0)
+				dma_priv->user_bar_idx = 2;
+			else
+				dma_priv->user_bar_idx = 1;
+		} else {
+			dma_priv->user_bar_idx = -1;
+			PMD_DRV_LOG(INFO, "Cannot find AXI Master Lite BAR");
+		}
+	} else {
+		dma_priv->user_bar_idx = usr_bar;
+	}
+
 	/* Find AXI Bridge Master bar(bypass bar) */
 	for (i = 0; i < QDMA_NUM_BARS; i++) {
 		bar_len = pci_dev->mem_resource[i].len;
@@ -234,3 +254,34 @@ int qdma_identify_bars(struct rte_eth_dev *dev)
 
 	return 0;
 }
+
+int qdma_get_hw_version(struct rte_eth_dev *dev)
+{
+	int ret;
+	struct qdma_pci_dev *dma_priv;
+	struct qdma_hw_version_info version_info;
+
+	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
+	ret = dma_priv->hw_access->qdma_get_version(dev,
+			dma_priv->is_vf, &version_info);
+	if (ret < 0)
+		return dma_priv->hw_access->qdma_get_error_code(ret);
+
+	dma_priv->rtl_version = version_info.rtl_version;
+	dma_priv->vivado_rel = version_info.vivado_release;
+	dma_priv->device_type = version_info.device_type;
+	dma_priv->ip_type = version_info.ip_type;
+
+	PMD_DRV_LOG(INFO, "QDMA RTL VERSION : %s\n",
+		version_info.qdma_rtl_version_str);
+	PMD_DRV_LOG(INFO, "QDMA DEVICE TYPE : %s\n",
+		version_info.qdma_device_type_str);
+	PMD_DRV_LOG(INFO, "QDMA VIVADO RELEASE ID : %s\n",
+		version_info.qdma_vivado_release_id_str);
+	if (version_info.ip_type == QDMA_VERSAL_HARD_IP) {
+		PMD_DRV_LOG(INFO, "QDMA VERSAL IP TYPE : %s\n",
+			version_info.qdma_ip_type_str);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index c2ed6a52bb..bc902e607f 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -22,11 +22,27 @@
 #include <rte_cycles.h>
 
 #include "qdma.h"
+#include "qdma_version.h"
+#include "qdma_access_common.h"
+#include "qdma_access_export.h"
 
+/* Poll for QDMA errors every 1 second */
+#define QDMA_ERROR_POLL_FRQ (1000000)
 #define PCI_CONFIG_BRIDGE_DEVICE	(6)
 #define PCI_CONFIG_CLASS_CODE_SHIFT	(16)
 #define MAX_PCIE_CAPABILITY		(48)
 
+static void qdma_device_attributes_get(struct rte_eth_dev *dev);
+
+/* Poll for any QDMA errors */
+void qdma_check_errors(void *arg)
+{
+	struct qdma_pci_dev *qdma_dev;
+	qdma_dev = ((struct rte_eth_dev *)arg)->data->dev_private;
+	qdma_dev->hw_access->qdma_hw_error_process(arg);
+	rte_eal_alarm_set(QDMA_ERROR_POLL_FRQ, qdma_check_errors, arg);
+}
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -43,6 +59,92 @@ static struct rte_pci_id qdma_pci_id_tbl[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static void qdma_device_attributes_get(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev;
+
+	qdma_dev = (struct qdma_pci_dev *)dev->data->dev_private;
+	qdma_dev->hw_access->qdma_get_device_attributes(dev,
+			&qdma_dev->dev_cap);
+
+	/* Check DPDK configured queues per port */
+	if (qdma_dev->dev_cap.num_qs > RTE_MAX_QUEUES_PER_PORT)
+		qdma_dev->dev_cap.num_qs = RTE_MAX_QUEUES_PER_PORT;
+
+	PMD_DRV_LOG(INFO, "qmax = %d, mm %d, st %d.\n",
+		    qdma_dev->dev_cap.num_qs, qdma_dev->dev_cap.mm_en,
+		    qdma_dev->dev_cap.st_en);
+}
+
+static inline uint8_t pcie_find_cap(const struct rte_pci_device *pci_dev,
+					uint8_t cap)
+{
+	uint8_t pcie_cap_pos = 0;
+	uint8_t pcie_cap_id = 0;
+	int ttl = MAX_PCIE_CAPABILITY;
+	int ret;
+
+	ret = rte_pci_read_config(pci_dev, &pcie_cap_pos, sizeof(uint8_t),
+		PCI_CAPABILITY_LIST);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "PCIe config space read failed\n");
+		return 0;
+	}
+
+	while (ttl-- && pcie_cap_pos >= PCI_STD_HEADER_SIZEOF) {
+		pcie_cap_pos &= ~3;
+
+		ret = rte_pci_read_config(pci_dev,
+			&pcie_cap_id, sizeof(uint8_t),
+			(pcie_cap_pos + PCI_CAP_LIST_ID));
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "PCIe config space read failed\n");
+			return 0;
+		}
+
+		if (pcie_cap_id == 0xff)
+			break;
+
+		if (pcie_cap_id == cap)
+			return pcie_cap_pos;
+
+		ret = rte_pci_read_config(pci_dev,
+			&pcie_cap_pos, sizeof(uint8_t),
+			(pcie_cap_pos + PCI_CAP_LIST_NEXT));
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "PCIe config space read failed\n");
+			return 0;
+		}
+	}
+
+	return 0;
+}
+
+static void pcie_perf_enable(const struct rte_pci_device *pci_dev)
+{
+	uint16_t value;
+	uint8_t pcie_cap_pos = pcie_find_cap(pci_dev, PCI_CAP_ID_EXP);
+
+	if (!pcie_cap_pos)
+		return;
+
+	if (pcie_cap_pos > 0) {
+		if (rte_pci_read_config(pci_dev, &value, sizeof(uint16_t),
+					 pcie_cap_pos + PCI_EXP_DEVCTL) < 0) {
+			PMD_DRV_LOG(ERR, "PCIe config space read failed\n");
+			return;
+		}
+
+		value |= (PCI_EXP_DEVCTL_EXT_TAG | PCI_EXP_DEVCTL_RELAX_EN);
+
+		if (rte_pci_write_config(pci_dev, &value, sizeof(uint16_t),
+					   pcie_cap_pos + PCI_EXP_DEVCTL) < 0) {
+			PMD_DRV_LOG(ERR, "PCIe config space write failed\n");
+			return;
+		}
+	}
+}
+
 /* parse a sysfs file containing one integer value */
 static int parse_sysfs_value(const char *filename, uint32_t *val)
 {
@@ -234,7 +336,7 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *dma_priv;
 	uint8_t *baseaddr;
-	int i, idx, ret;
+	int i, idx, ret, qbase;
 	struct rte_pci_device *pci_dev;
 	uint16_t num_vfs;
 	uint8_t max_pci_bus = 0;
@@ -297,8 +399,30 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 			pci_dev->mem_resource[dma_priv->config_bar_idx].addr;
 	dma_priv->bar_addr[dma_priv->config_bar_idx] = baseaddr;
 
+	/* Assigning QDMA access layer function pointers based on the HW design */
+	dma_priv->hw_access = rte_zmalloc("hwaccess",
+					sizeof(struct qdma_hw_access), 0);
+	if (dma_priv->hw_access == NULL) {
+		rte_free(dev->data->mac_addrs);
+		return -ENOMEM;
+	}
+	idx = qdma_hw_access_init(dev, dma_priv->is_vf, dma_priv->hw_access);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	idx = qdma_get_hw_version(dev);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
 	idx = qdma_identify_bars(dev);
 	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
 		rte_free(dev->data->mac_addrs);
 		return -EINVAL;
 	}
@@ -312,14 +436,99 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 
 	PMD_DRV_LOG(INFO, "QDMA device driver probe:");
 
+	/* Getting the device attributes from the Hardware */
+	qdma_device_attributes_get(dev);
+
+	/* Create master resource node for queue management on the given
+	 * bus number. Node will be created only once per bus number.
+	 */
+	qbase = DEFAULT_QUEUE_BASE;
+
 	ret = get_max_pci_bus_num(pci_dev->addr.bus, &max_pci_bus);
-	if (ret != 0 && !max_pci_bus) {
+	if (ret != QDMA_SUCCESS && !max_pci_bus) {
 		PMD_DRV_LOG(ERR, "Failed to get max pci bus number\n");
+		rte_free(dma_priv->hw_access);
 		rte_free(dev->data->mac_addrs);
 		return -EINVAL;
 	}
 	PMD_DRV_LOG(INFO, "PCI max bus number : 0x%x", max_pci_bus);
 
+	ret = qdma_master_resource_create(pci_dev->addr.bus, max_pci_bus,
+				qbase, dma_priv->dev_cap.num_qs,
+				&dma_priv->dma_device_index);
+	if (ret == -QDMA_ERR_NO_MEM) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -ENOMEM;
+	}
+
+	dma_priv->hw_access->qdma_get_function_number(dev,
+			&dma_priv->func_id);
+	PMD_DRV_LOG(INFO, "PF function ID: %d", dma_priv->func_id);
+
+	/* CSR programming is done once per given board or bus number,
+	 * done by the master PF
+	 */
+	if (ret == QDMA_SUCCESS) {
+		RTE_LOG(INFO, PMD, "QDMA PMD VERSION: %s\n", QDMA_PMD_VERSION);
+		dma_priv->hw_access->qdma_set_default_global_csr(dev);
+		for (i = 0; i < dma_priv->dev_cap.mm_channel_max; i++) {
+			if (dma_priv->dev_cap.mm_en) {
+				/* Enable MM C2H Channel */
+				dma_priv->hw_access->qdma_mm_channel_conf(dev,
+							i, 1, 1);
+				/* Enable MM H2C Channel */
+				dma_priv->hw_access->qdma_mm_channel_conf(dev,
+							i, 0, 1);
+			} else {
+				/* Disable MM C2H Channel */
+				dma_priv->hw_access->qdma_mm_channel_conf(dev,
+							i, 1, 0);
+				/* Disable MM H2C Channel */
+				dma_priv->hw_access->qdma_mm_channel_conf(dev,
+							i, 0, 0);
+			}
+		}
+
+		ret = dma_priv->hw_access->qdma_init_ctxt_memory(dev);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR,
+				"%s: Failed to initialize ctxt memory, err = %d\n",
+				__func__, ret);
+			return -EINVAL;
+		}
+
+		ret = dma_priv->hw_access->qdma_hw_error_enable(dev,
+				dma_priv->hw_access->qdma_max_errors);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR,
+				"%s: Failed to enable hw errors, err = %d\n",
+				__func__, ret);
+			return -EINVAL;
+		}
+
+		rte_eal_alarm_set(QDMA_ERROR_POLL_FRQ, qdma_check_errors,
+							(void *)dev);
+		dma_priv->is_master = 1;
+	}
+
+	/*
+	 * Create an entry for the device in board list if not already
+	 * created
+	 */
+	ret = qdma_dev_entry_create(dma_priv->dma_device_index,
+				dma_priv->func_id);
+	if (ret != QDMA_SUCCESS &&
+	    ret != -QDMA_ERR_RM_DEV_EXISTS) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) qdma_dev_entry_create failed: %d\n",
+			    dma_priv->func_id, ret);
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -ENOMEM;
+	}
+
+	pcie_perf_enable(pci_dev);
+
 	if (!dma_priv->reset_in_progress) {
 		num_vfs = pci_dev->max_vfs;
 		if (num_vfs) {
@@ -358,6 +567,14 @@ static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 	/* only uninitialize in the primary process */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
+	/* cancel pending polls */
+	if (qdma_dev->is_master)
+		rte_eal_alarm_cancel(qdma_check_errors, (void *)dev);
+
+	/* Remove the device node from the board list */
+	qdma_dev_entry_destroy(qdma_dev->dma_device_index,
+			qdma_dev->func_id);
+	qdma_master_resource_destroy(qdma_dev->dma_device_index);
 
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
@@ -381,6 +598,10 @@ static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 		qdma_dev->q_info = NULL;
 	}
 
+	if (qdma_dev->hw_access != NULL) {
+		rte_free(qdma_dev->hw_access);
+		qdma_dev->hw_access = NULL;
+	}
 	return 0;
 }
 
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
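
The qdma_check_errors() hook in this patch uses DPDK's one-shot alarm API as
a periodic poller: rte_eal_alarm_set() fires exactly once, so the callback
re-arms itself on every invocation, the init path arms the first instance,
and the uninit path cancels whatever is pending. A stripped-down sketch of
that pattern, with a stub in place of the real qdma_hw_error_process() call:

    #include <rte_alarm.h>
    #include <rte_log.h>

    #define POLL_PERIOD_US (1000000) /* 1 second, as in the patch */

    static void poll_hw_errors(void *arg)
    {
        /* stand-in for hw_access->qdma_hw_error_process(dev) */
        RTE_LOG(DEBUG, EAL, "polling %p for HW errors\n", arg);

        /* one-shot alarm: re-arm for the next period */
        rte_eal_alarm_set(POLL_PERIOD_US, poll_hw_errors, arg);
    }

    /* at init:     rte_eal_alarm_set(POLL_PERIOD_US, poll_hw_errors, dev);
     * at teardown: rte_eal_alarm_cancel(poll_hw_errors, dev);
     */

rte_eal_alarm_cancel() removes every pending alarm registered with the same
callback/argument pair, which is why cancelling once in uninit is enough even
though the callback keeps re-arming itself.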

* [RFC PATCH 09/29] net/qdma: define device modes and data structure
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (7 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 08/29] net/qdma: qdma hardware initialization Aman Kumar
@ 2022-07-06  7:51 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 10/29] net/qdma: add net PMD ops template Aman Kumar
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:51 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Define enums and structures for the supported device modes
and adapt macros into the routines.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h         |  21 +++
 drivers/net/qdma/qdma_common.c  |  22 ++-
 drivers/net/qdma/qdma_ethdev.c  |  17 ++-
 drivers/net/qdma/rte_pmd_qdma.h | 262 ++++++++++++++++++++++++++++++++
 4 files changed, 316 insertions(+), 6 deletions(-)
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.h

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index db59cddd25..7c2d3b34e0 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -17,6 +17,7 @@
 #include <linux/pci.h>
 
 #include "qdma_resource_mgmt.h"
+#include "rte_pmd_qdma.h"
 #include "qdma_log.h"
 
 #define QDMA_NUM_BARS          (6)
@@ -28,6 +29,26 @@
 #define DEFAULT_QUEUE_BASE	(0)
 
 #define DEFAULT_TIMER_CNT_TRIG_MODE_TIMER	(5)
+#define DEFAULT_TIMER_CNT_TRIG_MODE_COUNT_TIMER	(30)
+
+/* Completion Context config */
+#define CMPT_DEFAULT_COLOR_BIT           (1)
+#define CMPT_CNTXT_DESC_SIZE_8B          (0)
+#define CMPT_CNTXT_DESC_SIZE_16B         (1)
+#define CMPT_CNTXT_DESC_SIZE_32B         (2)
+#define CMPT_CNTXT_DESC_SIZE_64B         (3)
+
+/* SOFTWARE DESC CONTEXT */
+#define SW_DESC_CNTXT_8B_BYPASS_DMA	    (0)
+#define SW_DESC_CNTXT_16B_BYPASS_DMA	    (1)
+#define SW_DESC_CNTXT_32B_BYPASS_DMA	    (2)
+#define SW_DESC_CNTXT_64B_BYPASS_DMA	    (3)
+
+#define SW_DESC_CNTXT_C2H_STREAM_DMA        (0)
+#define SW_DESC_CNTXT_H2C_STREAM_DMA        (1)
+#define SW_DESC_CNTXT_MEMORY_MAP_DMA        (2)
+
+#define DEFAULT_QDMA_CMPT_DESC_LEN (RTE_PMD_QDMA_CMPT_DESC_LEN_8B)
 
 enum dma_data_direction {
 	DMA_BIDIRECTIONAL = 0,
diff --git a/drivers/net/qdma/qdma_common.c b/drivers/net/qdma/qdma_common.c
index 0ea920f255..4f50be5b06 100644
--- a/drivers/net/qdma/qdma_common.c
+++ b/drivers/net/qdma/qdma_common.c
@@ -40,10 +40,11 @@ static int cmpt_desc_len_check_handler(__rte_unused const char *key,
 
 	PMD_DRV_LOG(INFO, "QDMA devargs cmpt_desc_len is: %s\n", value);
 	qdma_dev->cmpt_desc_len =  (uint8_t)strtoul(value, &end, 10);
-	if (qdma_dev->cmpt_desc_len != 8 &&
-		qdma_dev->cmpt_desc_len != 16 &&
-		qdma_dev->cmpt_desc_len != 32 &&
-		qdma_dev->cmpt_desc_len != 64) {
+	if (qdma_dev->cmpt_desc_len != RTE_PMD_QDMA_CMPT_DESC_LEN_8B &&
+	    qdma_dev->cmpt_desc_len != RTE_PMD_QDMA_CMPT_DESC_LEN_16B &&
+	    qdma_dev->cmpt_desc_len != RTE_PMD_QDMA_CMPT_DESC_LEN_32B &&
+	    (qdma_dev->cmpt_desc_len != RTE_PMD_QDMA_CMPT_DESC_LEN_64B ||
+	     !qdma_dev->dev_cap.cmpt_desc_64b)) {
 		PMD_DRV_LOG(INFO, "QDMA devargs incorrect cmpt_desc_len = %d "
 						  "specified\n",
 						  qdma_dev->cmpt_desc_len);
@@ -62,6 +63,12 @@ static int trigger_mode_handler(__rte_unused const char *key,
 	PMD_DRV_LOG(INFO, "QDMA devargs trigger mode: %s\n", value);
 	qdma_dev->trigger_mode =  (uint8_t)strtoul(value, &end, 10);
 
+	if (qdma_dev->trigger_mode >= RTE_PMD_QDMA_TRIG_MODE_MAX) {
+		PMD_DRV_LOG(INFO, "QDMA devargs incorrect "
+				  "trigger_mode = %d specified\n",
+				  qdma_dev->trigger_mode);
+		return -1;
+	}
 	return 0;
 }
 
@@ -92,6 +99,13 @@ static int c2h_byp_mode_check_handler(__rte_unused const char *key,
 	PMD_DRV_LOG(INFO, "QDMA devargs c2h_byp_mode is: %s\n", value);
 	qdma_dev->c2h_bypass_mode =  (uint8_t)strtoul(value, &end, 10);
 
+	if (qdma_dev->c2h_bypass_mode >= RTE_PMD_QDMA_RX_BYPASS_MAX) {
+		PMD_DRV_LOG(INFO, "QDMA devargs incorrect "
+				"c2h_byp_mode= %d specified\n",
+						qdma_dev->c2h_bypass_mode);
+		return -1;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index bc902e607f..54776c637d 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -379,8 +379,8 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 	dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_TIMER;
 
 	dma_priv->en_desc_prefetch = 0; /* Keep prefetch default to 0 */
-	dma_priv->cmpt_desc_len = 0;
-	dma_priv->c2h_bypass_mode = 0;
+	dma_priv->cmpt_desc_len = DEFAULT_QDMA_CMPT_DESC_LEN;
+	dma_priv->c2h_bypass_mode = RTE_PMD_QDMA_RX_BYPASS_NONE;
 	dma_priv->h2c_bypass_mode = 0;
 
 	dma_priv->config_bar_idx = DEFAULT_PF_CONFIG_BAR;
@@ -439,6 +439,19 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 	/* Getting the device attributes from the Hardware */
 	qdma_device_attributes_get(dev);
 
+	if (dma_priv->dev_cap.cmpt_trig_count_timer) {
+		/* Setting default Mode to
+		 * RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT
+		 */
+		dma_priv->trigger_mode =
+					RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT;
+	} else {
+		/* Setting default Mode to RTE_PMD_QDMA_TRIG_MODE_USER_TIMER */
+		dma_priv->trigger_mode = RTE_PMD_QDMA_TRIG_MODE_USER_TIMER;
+	}
+	if (dma_priv->trigger_mode == RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT)
+		dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_COUNT_TIMER;
+
 	/* Create master resource node for queue management on the given
 	 * bus number. Node will be created only once per bus number.
 	 */
diff --git a/drivers/net/qdma/rte_pmd_qdma.h b/drivers/net/qdma/rte_pmd_qdma.h
new file mode 100644
index 0000000000..f5e3def613
--- /dev/null
+++ b/drivers/net/qdma/rte_pmd_qdma.h
@@ -0,0 +1,262 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef __RTE_PMD_QDMA_EXPORT_H__
+#define __RTE_PMD_QDMA_EXPORT_H__
+
+#include <rte_dev.h>
+#include <rte_ethdev.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+#include <rte_memzone.h>
+#include <linux/pci.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** @defgroup rte_pmd_qdma_enums Enumerations */
+/** @defgroup rte_pmd_qdma_struct Data Structures */
+/** @defgroup rte_pmd_qdma_func Functions */
+
+/**
+ * Bypass modes in C2H direction
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_rx_bypass_mode {
+	/** C2H bypass mode disabled */
+	RTE_PMD_QDMA_RX_BYPASS_NONE = 0,
+	/** C2H cache bypass mode */
+	RTE_PMD_QDMA_RX_BYPASS_CACHE = 1,
+	/** C2H simple bypass mode */
+	RTE_PMD_QDMA_RX_BYPASS_SIMPLE = 2,
+	/** C2H bypass mode invalid */
+	RTE_PMD_QDMA_RX_BYPASS_MAX
+};
+
+/**
+ * Bypass modes in H2C direction
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_tx_bypass_mode {
+	/** H2C bypass mode disabled */
+	RTE_PMD_QDMA_TX_BYPASS_NONE = 0,
+	/** H2C bypass mode enabled */
+	RTE_PMD_QDMA_TX_BYPASS_ENABLE = 1,
+	/** H2C bypass mode invalid */
+	RTE_PMD_QDMA_TX_BYPASS_MAX
+};
+
+/**
+ * Enum to specify the direction i.e. TX or RX
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_dir_type {
+	/** H2C direction */
+	RTE_PMD_QDMA_TX = 0,
+	/** C2H direction */
+	RTE_PMD_QDMA_RX,
+	/** Invalid Direction */
+	RTE_PMD_QDMA_DIR_TYPE_MAX
+};
+
+/**
+ * Enum to specify the PCIe function type i.e. PF or VF
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_pci_func_type {
+	/** Physical Function */
+	RTE_PMD_QDMA_PCI_FUNC_PF,
+	/** Virtual Function */
+	RTE_PMD_QDMA_PCI_FUNC_VF,
+	/** Invalid PCI Function */
+	RTE_PMD_QDMA_PCI_FUNC_TYPE_MAX,
+};
+
+/**
+ * Enum to specify the queue mode
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_queue_mode_t {
+	/** Memory mapped queue mode */
+	RTE_PMD_QDMA_MEMORY_MAPPED_MODE,
+	/** Streaming queue mode */
+	RTE_PMD_QDMA_STREAMING_MODE,
+	/** Invalid queue mode */
+	RTE_PMD_QDMA_QUEUE_MODE_MAX,
+};
+
+/**
+ * Enum to specify the completion trigger mode
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_tigger_mode_t {
+	/** Trigger mode disabled */
+	RTE_PMD_QDMA_TRIG_MODE_DISABLE,
+	/** Trigger mode every */
+	RTE_PMD_QDMA_TRIG_MODE_EVERY,
+	/** Trigger mode user count */
+	RTE_PMD_QDMA_TRIG_MODE_USER_COUNT,
+	/** Trigger mode user */
+	RTE_PMD_QDMA_TRIG_MODE_USER,
+	/** Trigger mode timer */
+	RTE_PMD_QDMA_TRIG_MODE_USER_TIMER,
+	/** Trigger mode timer + count */
+	RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT,
+	/** Trigger mode invalid */
+	RTE_PMD_QDMA_TRIG_MODE_MAX,
+};
+
+/**
+ * Enum to specify the completion descriptor length
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_cmpt_desc_len {
+	/** 8B Completion descriptor */
+	RTE_PMD_QDMA_CMPT_DESC_LEN_8B = 8,
+	/** 16B Completion descriptor */
+	RTE_PMD_QDMA_CMPT_DESC_LEN_16B = 16,
+	/** 32B Completion descriptor */
+	RTE_PMD_QDMA_CMPT_DESC_LEN_32B = 32,
+	/** 64B Completion descriptor */
+	RTE_PMD_QDMA_CMPT_DESC_LEN_64B = 64,
+	/** Invalid Completion descriptor */
+	RTE_PMD_QDMA_CMPT_DESC_LEN_MAX,
+};
+
+/**
+ * Enum to specify the bypass descriptor length
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_bypass_desc_len {
+	/** 8B Bypass descriptor */
+	RTE_PMD_QDMA_BYPASS_DESC_LEN_8B = 8,
+	/** 16B Bypass descriptor */
+	RTE_PMD_QDMA_BYPASS_DESC_LEN_16B = 16,
+	/** 32B Bypass descriptor */
+	RTE_PMD_QDMA_BYPASS_DESC_LEN_32B = 32,
+	/** 64B Bypass descriptor */
+	RTE_PMD_QDMA_BYPASS_DESC_LEN_64B = 64,
+	/** Invalid Bypass descriptor */
+	RTE_PMD_QDMA_BYPASS_DESC_LEN_MAX,
+};
+
+/**
+ * Enum to specify the debug request type
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_xdebug_type {
+	/** Debug Global registers */
+	RTE_PMD_QDMA_XDEBUG_QDMA_GLOBAL_CSR,
+	/** Debug Device specific structure */
+	RTE_PMD_QDMA_XDEBUG_QDMA_DEVICE_STRUCT,
+	/** Debug Queue information */
+	RTE_PMD_QDMA_XDEBUG_QUEUE_INFO,
+	/** Debug descriptor */
+	RTE_PMD_QDMA_XDEBUG_QUEUE_DESC_DUMP,
+	/** Invalid debug type */
+	RTE_PMD_QDMA_XDEBUG_MAX,
+};
+
+/**
+ * Enum to specify the queue ring for debug
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_xdebug_desc_type {
+	/** Debug C2H ring descriptor */
+	RTE_PMD_QDMA_XDEBUG_DESC_C2H,
+	/** Debug H2C ring descriptor */
+	RTE_PMD_QDMA_XDEBUG_DESC_H2C,
+	/** Debug CMPT ring descriptor */
+	RTE_PMD_QDMA_XDEBUG_DESC_CMPT,
+	/** Invalid debug type */
+	RTE_PMD_QDMA_XDEBUG_DESC_MAX,
+};
+
+/**
+ * Enum to specify the QDMA device type
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_device_type {
+	/** QDMA Soft device, e.g. UltraScale+ IPs */
+	RTE_PMD_QDMA_DEVICE_SOFT,
+	/** QDMA Versal device */
+	RTE_PMD_QDMA_DEVICE_VERSAL,
+	/** Invalid QDMA device  */
+	RTE_PMD_QDMA_DEVICE_NONE
+};
+
+/**
+ * Enum to specify the QDMA IP type
+ * @ingroup rte_pmd_qdma_enums
+ */
+enum rte_pmd_qdma_ip_type {
+	/** Versal Hard IP  */
+	RTE_PMD_QDMA_VERSAL_HARD_IP,
+	/** Versal Soft IP  */
+	RTE_PMD_QDMA_VERSAL_SOFT_IP,
+	/** QDMA Soft IP  */
+	RTE_PMD_QDMA_SOFT_IP,
+	/** EQDMA Soft IP  */
+	RTE_PMD_EQDMA_SOFT_IP,
+	/** Invalid IP type  */
+	RTE_PMD_QDMA_NONE_IP
+};
+
+/**
+ * Structure to hold the QDMA device attributes
+ *
+ * @ingroup rte_pmd_qdma_struct
+ */
+struct rte_pmd_qdma_dev_attributes {
+	/** Number of PFs */
+	uint8_t num_pfs;
+	/** Number of Queues */
+	uint16_t num_qs;
+	/** Indicates whether FLR supported or not */
+	uint8_t flr_present:1;
+	/** Indicates whether ST mode supported or not */
+	uint8_t st_en:1;
+	/** Indicates whether MM mode supported or not */
+	uint8_t mm_en:1;
+	/** Indicates whether MM with Completions supported or not */
+	uint8_t mm_cmpt_en:1;
+	/** Indicates whether Mailbox supported or not */
+	uint8_t mailbox_en:1;
+	/** Debug mode is enabled/disabled for IP */
+	uint8_t debug_mode:1;
+	/** Descriptor Engine mode:
+	 * Internal only/Bypass only/Internal & Bypass
+	 */
+	uint8_t desc_eng_mode:2;
+	/** Number of MM channels */
+	uint8_t mm_channel_max;
+
+	/** To indicate support of
+	 * overflow check disable in CMPT ring
+	 */
+	uint8_t cmpt_ovf_chk_dis:1;
+	/** To indicate support of 64 bytes
+	 * C2H/H2C descriptor format
+	 */
+	uint8_t sw_desc_64b:1;
+	/** To indicate support of 64 bytes
+	 * CMPT descriptor format
+	 */
+	uint8_t cmpt_desc_64b:1;
+	/** To indicate support of
+	 * counter + timer trigger mode
+	 */
+	uint8_t cmpt_trig_count_timer:1;
+	/** Device Type */
+	enum rte_pmd_qdma_device_type device_type;
+	/** Versal IP Type */
+	enum rte_pmd_qdma_ip_type ip_type;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* ifndef __RTE_PMD_QDMA_EXPORT_H__ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
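
Most capability fields in rte_pmd_qdma_dev_attributes are one-bit bitfields,
so a consumer simply treats them as booleans. A short illustrative consumer
follows; print_caps() is a hypothetical helper, and in practice the attribute
values would be filled in through the PMD's device-specific APIs rather than
by hand:

    #include <stdio.h>
    #include "rte_pmd_qdma.h"

    static void print_caps(const struct rte_pmd_qdma_dev_attributes *attr)
    {
        printf("queues: %u, MM: %s, ST: %s\n",
               attr->num_qs,
               attr->mm_en ? "yes" : "no",
               attr->st_en ? "yes" : "no");

        /* 64B completion descriptors may only be requested when the IP
         * advertises support for them
         */
        if (attr->cmpt_desc_64b)
            printf("64B CMPT descriptors supported\n");
    }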

* [RFC PATCH 10/29] net/qdma: add net PMD ops template
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (8 preceding siblings ...)
  2022-07-06  7:51 ` [RFC PATCH 09/29] net/qdma: define device modes and data structure Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 11/29] net/qdma: add configure close and reset ethdev ops Aman Kumar
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Define DPDK PMD ops functions for the device.
Routines are added as dummy stubs for now.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build    |   5 +-
 drivers/net/qdma/qdma.h         |   1 +
 drivers/net/qdma/qdma_devops.c  | 464 ++++++++++++++++++++++++++++++
 drivers/net/qdma/qdma_devops.h  | 486 ++++++++++++++++++++++++++++++++
 drivers/net/qdma/qdma_ethdev.c  |  11 +-
 drivers/net/qdma/rte_pmd_qdma.h |   8 +-
 6 files changed, 970 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/qdma/qdma_devops.c
 create mode 100644 drivers/net/qdma/qdma_devops.h

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index 99076e1ebf..858d981002 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -17,9 +17,12 @@ includes += include_directories('qdma_access/qdma_soft_access')
 includes += include_directories('qdma_access/eqdma_soft_access')
 includes += include_directories('qdma_access/qdma_s80_hard_access')
 
+headers += files('rte_pmd_qdma.h')
+
 sources = files(
-        'qdma_ethdev.c',
         'qdma_common.c',
+        'qdma_devops.c',
+        'qdma_ethdev.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
         'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 7c2d3b34e0..f4155380f9 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -248,6 +248,7 @@ struct qdma_pci_dev {
 	int16_t rx_qid_statid_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 };
 
+void qdma_dev_ops_init(struct rte_eth_dev *dev);
 int qdma_identify_bars(struct rte_eth_dev *dev);
 int qdma_get_hw_version(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
new file mode 100644
index 0000000000..cf3ef6de34
--- /dev/null
+++ b/drivers/net/qdma/qdma_devops.c
@@ -0,0 +1,464 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <stdint.h>
+#include <sys/mman.h>
+#include <sys/fcntl.h>
+#include <rte_memzone.h>
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <rte_atomic.h>
+#include <unistd.h>
+#include <string.h>
+
+#include "qdma.h"
+#include "qdma_access_common.h"
+#include "qdma_reg_dump.h"
+#include "qdma_platform.h"
+#include "qdma_devops.h"
+
+/**
+ * DPDK callback to configure a RX queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_queue_id
+ *   RX queue index.
+ * @param nb_rx_desc
+ *   Number of descriptors to configure in queue.
+ * @param socket_id
+ *   NUMA socket on which memory must be allocated.
+ * @param[in] rx_conf
+ *   Thresholds parameters.
+ * @param mb_pool
+ *   Memory pool for buffer allocations.
+ *
+ * @return
+ *   0 on success,
+ *   -ENOMEM when memory allocation fails,
+ *   -ENOTSUP when HW doesn't support the required configuration,
+ *   -EINVAL on other failure.
+ */
+int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+			    uint16_t nb_rx_desc, unsigned int socket_id,
+			    const struct rte_eth_rxconf *rx_conf,
+			    struct rte_mempool *mb_pool)
+{
+	(void)dev;
+	(void)rx_queue_id;
+	(void)nb_rx_desc;
+	(void)socket_id;
+	(void)rx_conf;
+	(void)mb_pool;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to configure a TX queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param tx_queue_id
+ *   TX queue index.
+ * @param nb_tx_desc
+ *   Number of descriptors to configure in queue.
+ * @param socket_id
+ *   NUMA socket on which memory must be allocated.
+ * @param[in] tx_conf
+ *   Thresholds parameters.
+ *
+ * @return
+ *   0 on success,
+ *   -ENOMEM when memory allocation fails,
+ *   -EINVAL on other failure.
+ */
+int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+			    uint16_t nb_tx_desc, unsigned int socket_id,
+			    const struct rte_eth_txconf *tx_conf)
+{
+	(void)dev;
+	(void)tx_queue_id;
+	(void)nb_tx_desc;
+	(void)socket_id;
+	(void)tx_conf;
+
+	return 0;
+}
+
+void qdma_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
+{
+	(void)dev;
+	(void)q_id;
+}
+
+void qdma_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
+{
+	(void)dev;
+	(void)q_id;
+}
+
+/**
+ * DPDK callback to start the device.
+ *
+ * Start the device by configuring the Rx/Tx descriptor and device registers.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+int qdma_dev_start(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to retrieve the physical link information.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param wait_to_complete
+ *   wait_to_complete field is ignored.
+ */
+int qdma_dev_link_update(struct rte_eth_dev *dev,
+				__rte_unused int wait_to_complete)
+{
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+	PMD_DRV_LOG(INFO, "Link update done\n");
+	return 0;
+}
+
+/**
+ * DPDK callback to get information about the device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param[out] dev_info
+ *   Device information structure output buffer.
+ */
+int qdma_dev_infos_get(struct rte_eth_dev *dev,
+		       struct rte_eth_dev_info *dev_info)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	dev_info->max_rx_queues = qdma_dev->dev_cap.num_qs;
+	dev_info->max_tx_queues = qdma_dev->dev_cap.num_qs;
+
+	dev_info->min_rx_bufsize = 256;
+	dev_info->max_rx_pktlen = DMA_BRAM_SIZE;
+	dev_info->max_mac_addrs = 1;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to stop the device.
+ *
+ * Stop the device by clearing all configured Rx/Tx queue
+ * descriptors and registers.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+int qdma_dev_stop(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to close the device.
+ *
+ * Destroy all queues and objects, free memory.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+int qdma_dev_close(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to reset the device.
+ *
+ * Uninitialize the PF device after waiting for all its VFs to shut down.
+ * Then reinitialize the PF device and send a Reset done mailbox
+ * message to all its VFs so that they come online again.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+int qdma_dev_reset(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+/**
+ * DPDK callback for Ethernet device configuration.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+int qdma_dev_configure(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+int qdma_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
+{
+	(void)dev;
+	(void)qid;
+
+	return 0;
+}
+
+
+int qdma_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
+{
+	(void)dev;
+	(void)qid;
+
+	return 0;
+}
+
+int qdma_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
+{
+	(void)dev;
+	(void)qid;
+
+	return 0;
+}
+
+int qdma_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
+{
+	(void)dev;
+	(void)qid;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to retrieve device registers and
+ * register attributes (number of registers and register size)
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param regs
+ *   Pointer to rte_dev_reg_info structure to fill in. If regs->data is
+ *   NULL the function fills in the width and length fields. If non-NULL
+ *   the registers are put into the buffer pointed at by the data field.
+ *
+ * @return
+ *   0 on success, -ENOTSUP on failure.
+ */
+int
+qdma_dev_get_regs(struct rte_eth_dev *dev,
+	      struct rte_dev_reg_info *regs)
+{
+	(void)dev;
+	(void)regs;
+
+	return -ENOTSUP;
+}
+
+/**
+ * DPDK callback to set a queue statistics mapping for
+ * a tx/rx queue of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Index of the queue for which a queue stats mapping is required.
+ * @param stat_idx
+ *   The per-queue packet statistics functionality number that
+ *   the queue_id is to be assigned.
+ * @param is_rx
+ *   Whether queue is a Rx or a Tx queue.
+ *
+ * @return
+ *   0 on success, -EINVAL on failure.
+ */
+int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
+				 uint16_t queue_id,
+				 uint8_t stat_idx,
+				 uint8_t is_rx)
+{
+	(void)dev;
+	(void)queue_id;
+	(void)stat_idx;
+	(void)is_rx;
+
+	return 0;
+}
+
+/**
+ * DPDK callback for retrieving Port statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param eth_stats
+ *   Pointer to structure containing statistics.
+ *
+ * @return
+ *   0 on success
+ */
+int qdma_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *eth_stats)
+{
+	(void)dev;
+	(void)eth_stats;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to reset Port statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ */
+int qdma_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	(void)dev;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to get Rx Queue info of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_queue_id
+ *   The RX queue on the Ethernet device for which information will be
+ *   retrieved
+ * @param qinfo
+ *   A pointer to a structure of type rte_eth_rxq_info to be filled with
+ *   the information of given Rx queue.
+ */
+void
+qdma_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	(void)dev;
+	(void)rx_queue_id;
+	(void)qinfo;
+}
+
+/**
+ * DPDK callback to get Tx Queue info of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param tx_queue_id
+ *   The TX queue on the Ethernet device for which information will be
+ *   retrieved
+ * @param qinfo
+ *   A pointer to a structure of type rte_eth_txq_info to be filled with
+ *   the information of given Tx queue.
+ */
+void
+qdma_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+		      struct rte_eth_txq_info *qinfo)
+{
+	struct qdma_tx_queue *txq = NULL;
+
+	if (!qinfo)
+		return;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+	qinfo->conf.tx_rs_thresh = 0;
+	qinfo->nb_desc = txq->nb_tx_desc - 1;
+}
+
+int qdma_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+	(void)tx_queue;
+	(void)free_cnt;
+
+	return 0;
+}
+
+static struct eth_dev_ops qdma_eth_dev_ops = {
+	.dev_configure            = qdma_dev_configure,
+	.dev_infos_get            = qdma_dev_infos_get,
+	.dev_start                = qdma_dev_start,
+	.dev_stop                 = qdma_dev_stop,
+	.dev_close                = qdma_dev_close,
+	.dev_reset                = qdma_dev_reset,
+	.link_update              = qdma_dev_link_update,
+	.rx_queue_setup           = qdma_dev_rx_queue_setup,
+	.tx_queue_setup           = qdma_dev_tx_queue_setup,
+	.rx_queue_release         = qdma_dev_rx_queue_release,
+	.tx_queue_release         = qdma_dev_tx_queue_release,
+	.rx_queue_start           = qdma_dev_rx_queue_start,
+	.rx_queue_stop            = qdma_dev_rx_queue_stop,
+	.tx_queue_start           = qdma_dev_tx_queue_start,
+	.tx_queue_stop            = qdma_dev_tx_queue_stop,
+	.tx_done_cleanup          = qdma_dev_tx_done_cleanup,
+	.queue_stats_mapping_set  = qdma_dev_queue_stats_mapping,
+	.get_reg                  = qdma_dev_get_regs,
+	.stats_get                = qdma_dev_stats_get,
+	.stats_reset              = qdma_dev_stats_reset,
+	.rxq_info_get             = qdma_dev_rxq_info_get,
+	.txq_info_get             = qdma_dev_txq_info_get,
+};
+
+uint16_t qdma_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	(void)rx_queue;
+	(void)rx_pkts;
+	(void)nb_pkts;
+
+	return 0;
+}
+
+uint16_t qdma_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	(void)tx_queue;
+	(void)tx_pkts;
+	(void)nb_pkts;
+
+	return 0;
+}
+
+void qdma_dev_ops_init(struct rte_eth_dev *dev)
+{
+	dev->dev_ops = &qdma_eth_dev_ops;
+	dev->rx_pkt_burst = &qdma_recv_pkts;
+	dev->tx_pkt_burst = &qdma_xmit_pkts;
+}
diff --git a/drivers/net/qdma/qdma_devops.h b/drivers/net/qdma/qdma_devops.h
new file mode 100644
index 0000000000..240fa6b60c
--- /dev/null
+++ b/drivers/net/qdma/qdma_devops.h
@@ -0,0 +1,486 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#ifndef __QDMA_DEVOPS_H__
+#define __QDMA_DEVOPS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** @defgroup dpdk_devops_func Functions
+ */
+
+/**
+ * DPDK callback for Ethernet device configuration.
+ *
+ * This API requests the queue base from the Queue Resource Manager and
+ * programs the queue base and queue count in the function map (FMAP)
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_configure(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to get information about the device
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param dev_info Pointer to Device information structure
+ *
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_infos_get(struct rte_eth_dev *dev,
+				struct rte_eth_dev_info *dev_info);
+
+/**
+ * DPDK callback to retrieve the physical link information
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param wait_to_complete
+ *   wait_to_complete field is ignored
+ *
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_link_update(struct rte_eth_dev *dev,
+				__rte_unused int wait_to_complete);
+
+/**
+ * DPDK callback to configure a RX queue.
+ *
+ * This API validates queue parameters and allocates C2H ring and
+ * Streaming CMPT ring from the DPDK reserved hugepage memory zones
+ *
+ * @param dev Pointer to Ethernet device structure.
+ * @param rx_queue_id RX queue index relative to the PCIe function
+ *                    pointed by dev
+ * @param nb_rx_desc Number of C2H descriptors to configure for this queue
+ * @param socket_id NUMA socket on which memory must be allocated
+ * @param rx_conf Rx queue configuration parameters
+ * @param mb_pool Memory pool to use for buffer allocations on this queue
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+				uint16_t nb_rx_desc, unsigned int socket_id,
+				const struct rte_eth_rxconf *rx_conf,
+				struct rte_mempool *mb_pool);
+
+/**
+ * DPDK callback to configure a TX queue.
+ *
+ * This API validates queue parameters and allocates H2C ring from the
+ * DPDK reserved hugepage memory zone
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param tx_queue_id TX queue index
+ * @param nb_tx_desc Number of descriptors to configure in queue
+ * @param socket_id NUMA socket on which memory must be allocated
+ * @param tx_conf Tx queue configuration parameters
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+			    uint16_t nb_tx_desc, unsigned int socket_id,
+			    const struct rte_eth_txconf *tx_conf);
+
+/**
+ * DPDK callback to get Rx queue info of an Ethernet device
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param rx_queue_id
+ *   The RX queue on the Ethernet device for which information will be
+ *   retrieved
+ * @param qinfo
+ *   A pointer to a structure of type rte_eth_rxq_info to be filled with
+ *   the information of given Rx queue
+ *
+ * @ingroup dpdk_devops_func
+ */
+void
+qdma_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		     struct rte_eth_rxq_info *qinfo);
+
+/**
+ * DPDK callback to get Tx queue info of an Ethernet device
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param tx_queue_id
+ *   The TX queue on the Ethernet device for which information will be
+ *   retrieved
+ * @param qinfo
+ *   A pointer to a structure of type rte_eth_txq_info to be filled with
+ *   the information of given Tx queue
+ *
+ * @ingroup dpdk_devops_func
+ */
+void
+qdma_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+		      struct rte_eth_txq_info *qinfo);
+
+/**
+ * DPDK callback to start the device.
+ *
+ * This API starts the Ethernet device by initializing Rx, Tx descriptors
+ * and device registers. For the Port queues whose start is not deferred,
+ * it calls qdma_dev_tx_queue_start and qdma_dev_rx_queue_start to start
+ * the queues for packet processing.
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_start(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to start a C2H queue whose start was deferred.
+ *
+ * This API clears and then programs the Software, Prefetch and
+ * Completion context of the C2H queue
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param qid Rx queue index
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qid);
+
+/**
+ * DPDK callback to start an H2C queue whose start was deferred.
+ *
+ * This API clears and then programs the Software context of the H2C queue
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param qid Tx queue index
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid);
+
+/**
+ * DPDK callback for receiving packets in burst.
+ *
+ * This API does the following operations:
+ *	- Process the Completion ring to determine and store packet information
+ *	- Update CMPT CIDX
+ *	- Process C2H ring to retrieve rte_mbuf pointers corresponding to
+ *    received packets and store in rx_pkts array.
+ *	- Populate C2H ring with new pointers for packet buffers
+ *	- Update C2H ring PIDX
+ *
+ * @param rx_queue Generic pointer to Rx queue structure
+ * @param rx_pkts The address of an array of pointers to rte_mbuf structures
+ *                 that must be large enough to store nb_pkts pointers in it
+ * @param nb_pkts Maximum number of packets to retrieve
+ *
+ * @return Number of packets successfully received (<= nb_pkts)
+ * @ingroup dpdk_devops_func
+ *
+ */
+uint16_t qdma_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts);
+
+/**
+ * DPDK callback for transmitting packets in burst.
+ *
+ * This API does the following operations:
+ *	- Free rte_mbuf pointers to previous transmitted packets,
+ *    back to the memory pool
+ *	- Retrieve packet buffer pointer from tx_pkts and populate H2C ring
+ *    with pointers to new packet buffers.
+ *	- Update H2C ring PIDX
+ *
+ * @param tx_queue Generic pointer to Tx queue structure
+ * @param tx_pkts The address of an array of nb_pkts pointers to
+ *                rte_mbuf structures which contain the output packets
+ * @param nb_pkts The maximum number of packets to transmit
+ *
+ * @return Number of packets successfully transmitted (<= nb_pkts)
+ * @ingroup dpdk_devops_func
+ *
+ */
+uint16_t qdma_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
+
+/**
+ * DPDK callback for retrieving Port statistics.
+ *
+ * This API updates Port statistics in rte_eth_stats structure parameters
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param eth_stats Pointer to structure containing statistics
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *eth_stats);
+
+/**
+ * DPDK callback to reset Port statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_stats_reset(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to set a queue statistics mapping for
+ * a tx/rx queue of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param queue_id
+ *   Index of the queue for which a queue stats mapping is required
+ * @param stat_idx
+ *   The per-queue packet statistics functionality number that
+ *   the queue_id is to be assigned
+ * @param is_rx
+ *   Whether queue is a Rx or a Tx queue
+ *
+ * @return
+ *   0 on success, -EINVAL on failure
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
+					     uint16_t queue_id,
+					     uint8_t stat_idx,
+					     uint8_t is_rx);
+
+/**
+ * DPDK callback to get the number of used descriptors of an Rx queue
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param rx_queue_id
+ *   The RX queue on the Ethernet device for which information will be
+ *   retrieved
+ *
+ * @return
+ *   The number of used descriptors in the specific queue
+ * @ingroup dpdk_devops_func
+ */
+uint32_t
+qdma_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+/**
+ * DPDK callback to check the status of a Rx descriptor in the queue
+ *
+ * @param rx_queue
+ *   Pointer to Rx queue specific data structure
+ * @param offset
+ *   The offset of the descriptor starting from tail (0 is the next
+ *   packet to be received by the driver)
+ *
+ * @return
+ *  - (RTE_ETH_RX_DESC_AVAIL): Descriptor is available for the hardware to
+ *    receive a packet.
+ *  - (RTE_ETH_RX_DESC_DONE): Descriptor is done, it is filled by hw, but
+ *    not yet processed by the driver (i.e. in the receive queue).
+ *  - (RTE_ETH_RX_DESC_UNAVAIL): Descriptor is unavailable, either hold by
+ *    the driver and not yet returned to hw, or reserved by the hw.
+ *  - (-EINVAL) bad descriptor offset.
+ * @ingroup dpdk_devops_func
+ */
+int
+qdma_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
+
+/**
+ * DPDK callback to check the status of a Tx descriptor in the queue
+ *
+ * @param tx_queue
+ *   Pointer to Tx queue specific data structure
+ * @param offset
+ *   The offset of the descriptor starting from tail (0 is the place where
+ *   the next packet will be sent)
+ *
+ * @return
+ *  - (RTE_ETH_TX_DESC_FULL) Descriptor is being processed by the hw, i.e.
+ *    in the transmit queue.
+ *  - (RTE_ETH_TX_DESC_DONE) Hardware is done with this descriptor, it can
+ *    be reused by the driver.
+ *  - (RTE_ETH_TX_DESC_UNAVAIL): Descriptor is unavailable, reserved by the
+ *    driver or the hardware.
+ *  - (-EINVAL) bad descriptor offset.
+ * @ingroup dpdk_devops_func
+ */
+int
+qdma_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
+
+/**
+ * DPDK callback to request the driver to free mbufs
+ * currently cached by the driver
+ *
+ * @param tx_queue
+ *   Pointer to Tx queue specific data structure
+ * @param free_cnt
+ *   Maximum number of packets to free. Use 0 to indicate all possible packets
+ *   should be freed. Note that a packet may be using multiple mbufs.
+ *
+ * @return
+ *   - Failure: < 0
+ *   - Success: >= 0
+ *     0-n: Number of packets freed. Packets that are still in use may
+ *     remain in the ring.
+ * @ingroup dpdk_devops_func
+ */
+int
+qdma_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
+
+/**
+ * DPDK callback to retrieve device registers and
+ * register attributes (number of registers and register size)
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ * @param regs
+ *   Pointer to rte_dev_reg_info structure to fill in. If regs->data is
+ *   NULL the function fills in the width and length fields. If non-NULL
+ *   the registers are put into the buffer pointed at by the data field.
+ *
+ * @return
+ *   0 on success, -ENOTSUP on failure
+ * @ingroup dpdk_devops_func
+ */
+int
+qdma_dev_get_regs(struct rte_eth_dev *dev,
+	      struct rte_dev_reg_info *regs);
+
+/**
+ * DPDK callback to stop a C2H queue
+ *
+ * This API invalidates the Hardware, Software, Prefetch and Completion
+ * contexts of the C2H queue. It also frees the rte_mbuf pointers assigned
+ * to descriptors prepared for packet reception.
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param qid Rx queue index
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid);
+
+/**
+ * qdma_dev_tx_queue_stop() - DPDK callback to stop a queue in H2C direction
+ *
+ * This API invalidates the Hardware and Software contexts of the H2C queue. It
+ * also frees the rte_mbuf pointers assigned to descriptors pending transmission.
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param qid TX queue index
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid);
+
+
+/**
+ * DPDK callback to stop the device.
+ *
+ * This API stops the device by invalidating all the contexts of all the queues
+ * belonging to the port by calling qdma_dev_tx_queue_stop() and
+ * qdma_dev_rx_queue_stop() for all the queues of the port.
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_dev_stop(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to release a Rx queue.
+ *
+ * This API releases the descriptor rings and any additional memory allocated
+ * for the given C2H queue
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param q_id Rx queue id
+ *
+ * @ingroup dpdk_devops_func
+ */
+void qdma_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_id);
+
+/**
+ * DPDK callback to release a Tx queue.
+ *
+ * This API releases the descriptor rings and any additional memory allocated
+ * for the given H2C queue
+ *
+ * @param dev Pointer to Ethernet device structure
+ * @param q_id Tx queue id
+ *
+ * @ingroup dpdk_devops_func
+ */
+void qdma_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_id);
+
+/**
+ * DPDK callback to close the device.
+ *
+ * This API frees the descriptor rings and objects belonging to all the queues
+ * of the given port. It also clears the FMAP.
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_close(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to close the VF device.
+ *
+ * This API frees the descriptor rings and objects belonging to all the queues
+ * of the given port. It also clears the FMAP.
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @ingroup dpdk_devops_func
+ */
+int qdma_vf_dev_close(struct rte_eth_dev *dev);
+
+/**
+ * DPDK callback to reset the device.
+ *
+ * This callback is invoked when an application calls the rte_eth_dev_reset()
+ * API to reset a device. It uninitializes the PF device after waiting for
+ * all its VFs to shut down, then reinitializes the PF device and sends a
+ * Reset done mailbox message to all its VFs so that they come online again.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ *
+ * @return
+ *   0 on success, negative errno value on failure
+ * @ingroup dpdk_devops_func
+ */
+int qdma_dev_reset(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* ifndef __QDMA_DEVOPS_H__ */
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index 54776c637d..79aac4aa60 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -25,6 +25,7 @@
 #include "qdma_version.h"
 #include "qdma_access_common.h"
 #include "qdma_access_export.h"
+#include "qdma_devops.h"
 
 /* Poll for QDMA errors every 1 second */
 #define QDMA_ERROR_POLL_FRQ (1000000)
@@ -356,8 +357,10 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work.
 	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		qdma_dev_ops_init(dev);
 		return 0;
+	}
 
 	/* allocate space for a single Ethernet MAC address */
 	dev->data->mac_addrs = rte_zmalloc("qdma", RTE_ETHER_ADDR_LEN * 1, 0);
@@ -436,6 +439,8 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
 
 	PMD_DRV_LOG(INFO, "QDMA device driver probe:");
 
+	qdma_dev_ops_init(dev);
+
 	/* Getting the device attributes from the Hardware */
 	qdma_device_attributes_get(dev);
 
@@ -580,6 +585,10 @@ static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 	/* only uninitialize in the primary process */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
+
+	if (qdma_dev->dev_configured)
+		qdma_dev_close(dev);
+
 	/* cancel pending polls */
 	if (qdma_dev->is_master)
 		rte_eal_alarm_cancel(qdma_check_errors, (void *)dev);
diff --git a/drivers/net/qdma/rte_pmd_qdma.h b/drivers/net/qdma/rte_pmd_qdma.h
index f5e3def613..d09ec4a715 100644
--- a/drivers/net/qdma/rte_pmd_qdma.h
+++ b/drivers/net/qdma/rte_pmd_qdma.h
@@ -2,8 +2,8 @@
  * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
  */
 
-#ifndef __RTE_PMD_QDMA_EXPORT_H__
-#define __RTE_PMD_QDMA_EXPORT_H__
+#ifndef __RTE_PMD_QDMA_H__
+#define __RTE_PMD_QDMA_H__
 
 #include <rte_dev.h>
 #include <rte_ethdev.h>
@@ -256,7 +256,9 @@ struct rte_pmd_qdma_dev_attributes {
 	enum rte_pmd_qdma_ip_type ip_type;
 };
 
+#define DMA_BRAM_SIZE 524288
+
 #ifdef __cplusplus
 }
 #endif
-#endif /* ifndef __RTE_PMD_QDMA_EXPORT_H__ */
+#endif /* ifndef __RTE_PMD_QDMA_H__ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
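
Since the rx/tx burst handlers registered by this patch are stubs that
return 0, a standard poll loop simply observes no traffic until the Rx/Tx
burst implementations later in this series land. A minimal sketch, assuming
port/queue 0 and a burst size of 32 (illustrative values only):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical fragment; with the stubs from this patch,
 * nb_rx below is always 0.
 */
static void poll_loop(void)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx;

	for (;;) {
		/* dispatches to qdma_recv_pkts() */
		nb_rx = rte_eth_rx_burst(0, 0, pkts, 32);
		if (nb_rx == 0)
			continue;
		/* echo received packets; dispatches to qdma_xmit_pkts() */
		rte_eth_tx_burst(0, 0, pkts, nb_rx);
	}
}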

* [RFC PATCH 11/29] net/qdma: add configure close and reset ethdev ops
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (9 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 10/29] net/qdma: add net PMD ops template Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 12/29] net/qdma: add routine for Rx queue initialization Aman Kumar
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

add routines for the below ethdev ops (a usage sketch follows the list):
 1. dev_configure
 2. dev_reset
 3. dev_close
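
As a hypothetical illustration of how an application reaches these three
ops (port id and error handling are assumptions, not part of this patch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical sketch, not part of the patch. */
static void teardown_or_reset(uint16_t port_id, int do_reset)
{
	if (do_reset) {
		/* dispatches to qdma_dev_reset(): uninitializes the PF,
		 * reinitializes it, then waits for its VFs to come back
		 */
		if (rte_eth_dev_reset(port_id) != 0)
			printf("port %u: reset failed\n", port_id);
		return;
	}

	/* dispatches to qdma_dev_stop() and qdma_dev_close() */
	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);
}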

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h        |  11 +-
 drivers/net/qdma/qdma_devops.c | 292 ++++++++++++++++++++++++++++++++-
 drivers/net/qdma/qdma_devops.h |  42 +++++
 drivers/net/qdma/qdma_ethdev.c |   4 +-
 4 files changed, 339 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index f4155380f9..7314af71d7 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -28,9 +28,16 @@
 
 #define DEFAULT_QUEUE_BASE	(0)
 
+#define QDMA_MAX_BURST_SIZE (128)
+#define QDMA_MIN_RXBUFF_SIZE	(256)
+
 #define DEFAULT_TIMER_CNT_TRIG_MODE_TIMER	(5)
 #define DEFAULT_TIMER_CNT_TRIG_MODE_COUNT_TIMER	(30)
 
+#define WB_TIMEOUT		(100000)
+#define RESET_TIMEOUT		(60000)
+#define SHUTDOWN_TIMEOUT	(60000)
+
 /* Completion Context config */
 #define CMPT_DEFAULT_COLOR_BIT           (1)
 #define CMPT_CNTXT_DESC_SIZE_8B          (0)
@@ -121,8 +128,6 @@ struct qdma_rx_queue {
 	uint32_t		queue_id; /* RX queue index. */
 	uint64_t		mbuf_initializer; /* value to init mbufs */
 
-	struct qdma_pkt_stats	stats;
-
 	uint16_t		port_id; /* Device port identifier. */
 	uint8_t			status:1;
 	uint8_t			err:1;
@@ -167,6 +172,7 @@ struct qdma_tx_queue {
 	uint8_t				tx_deferred_start:1;
 	uint8_t				en_bypass:1;
 	uint8_t				status:1;
+	enum rte_pmd_qdma_bypass_desc_len		bypass_desc_sz:7;
 	uint16_t			port_id; /* Device port identifier. */
 	uint8_t				func_id; /* PCIe function id. */
 	int8_t				ringszidx;
@@ -238,6 +244,7 @@ struct qdma_pci_dev {
 	struct queue_info *q_info;
 	uint8_t init_q_range;
 
+	void	**cmpt_queues;
 	/* Pointer to QDMA access layer function pointers */
 	struct qdma_hw_access *hw_access;
 
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index cf3ef6de34..2dd76e82c3 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -26,6 +26,25 @@
 #include "qdma_platform.h"
 #include "qdma_devops.h"
 
+static int qdma_pf_fmap_prog(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_fmap_cfg fmap_cfg;
+	int ret = 0;
+
+	memset(&fmap_cfg, 0, sizeof(struct qdma_fmap_cfg));
+
+	/* FMAP configuration */
+	fmap_cfg.qbase = qdma_dev->queue_base;
+	fmap_cfg.qmax = qdma_dev->qsets_en;
+	ret = qdma_dev->hw_access->qdma_fmap_conf(dev,
+			qdma_dev->func_id, &fmap_cfg, QDMA_HW_ACCESS_WRITE);
+	if (ret < 0)
+		return qdma_dev->hw_access->qdma_get_error_code(ret);
+
+	return ret;
+}
+
 /**
  * DPDK callback to configure a RX queue.
  *
@@ -159,7 +178,7 @@ int qdma_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = qdma_dev->dev_cap.num_qs;
 	dev_info->max_tx_queues = qdma_dev->dev_cap.num_qs;
 
-	dev_info->min_rx_bufsize = 256;
+	dev_info->min_rx_bufsize = QDMA_MIN_RXBUFF_SIZE;
 	dev_info->max_rx_pktlen = DMA_BRAM_SIZE;
 	dev_info->max_mac_addrs = 1;
 
@@ -192,7 +211,110 @@ int qdma_dev_stop(struct rte_eth_dev *dev)
  */
 int qdma_dev_close(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq;
+	struct qdma_rx_queue *rxq;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t qid;
+	struct qdma_fmap_cfg fmap_cfg;
+	int ret = 0;
+
+	PMD_DRV_LOG(INFO, "PF-%d(DEVFN) DEV Close\n", qdma_dev->func_id);
+
+	if (dev->data->dev_started)
+		qdma_dev_stop(dev);
+
+	memset(&fmap_cfg, 0, sizeof(struct qdma_fmap_cfg));
+	qdma_dev->hw_access->qdma_fmap_conf(dev,
+		qdma_dev->func_id, &fmap_cfg, QDMA_HW_ACCESS_CLEAR);
+
+	/* iterate over rx queues */
+	for (qid = 0; qid < dev->data->nb_rx_queues; ++qid) {
+		rxq = dev->data->rx_queues[qid];
+		if (rxq) {
+			PMD_DRV_LOG(INFO, "Remove C2H queue: %d", qid);
+
+			if (rxq->sw_ring)
+				rte_free(rxq->sw_ring);
+			if (rxq->st_mode) { /* if ST-mode */
+				if (rxq->rx_cmpt_mz)
+					rte_memzone_free(rxq->rx_cmpt_mz);
+			}
+
+			qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_C2H);
+
+			if (rxq->st_mode)
+				qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_CMPT);
+
+			if (rxq->rx_mz)
+				rte_memzone_free(rxq->rx_mz);
+			rte_free(rxq);
+			PMD_DRV_LOG(INFO, "C2H queue %d removed", qid);
+		}
+	}
+
+	/* iterate over tx queues */
+	for (qid = 0; qid < dev->data->nb_tx_queues; ++qid) {
+		txq = dev->data->tx_queues[qid];
+		if (txq) {
+			PMD_DRV_LOG(INFO, "Remove H2C queue: %d", qid);
+
+			if (txq->sw_ring)
+				rte_free(txq->sw_ring);
+			if (txq->tx_mz)
+				rte_memzone_free(txq->tx_mz);
+			rte_free(txq);
+			PMD_DRV_LOG(INFO, "H2C queue %d removed", qid);
+
+			qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_H2C);
+		}
+	}
+	if (qdma_dev->dev_cap.mm_cmpt_en) {
+		/* iterate over cmpt queues */
+		for (qid = 0; qid < qdma_dev->qsets_en; ++qid) {
+			cmptq = qdma_dev->cmpt_queues[qid];
+			if (cmptq != NULL) {
+				PMD_DRV_LOG(INFO, "PF-%d(DEVFN) Remove CMPT queue: %d",
+						qdma_dev->func_id, qid);
+				if (cmptq->cmpt_mz)
+					rte_memzone_free(cmptq->cmpt_mz);
+				rte_free(cmptq);
+				PMD_DRV_LOG(INFO, "PF-%d(DEVFN) CMPT queue %d removed",
+						qdma_dev->func_id, qid);
+				qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_CMPT);
+			}
+		}
+
+		if (qdma_dev->cmpt_queues != NULL) {
+			rte_free(qdma_dev->cmpt_queues);
+			qdma_dev->cmpt_queues = NULL;
+		}
+	}
+	qdma_dev->qsets_en = 0;
+	ret = qdma_dev_update(qdma_dev->dma_device_index, qdma_dev->func_id,
+			qdma_dev->qsets_en, (int *)&qdma_dev->queue_base);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) qmax update failed: %d\n",
+			qdma_dev->func_id, ret);
+		return 0;
+	}
+
+	qdma_dev->init_q_range = 0;
+	rte_free(qdma_dev->q_info);
+	qdma_dev->q_info = NULL;
+	qdma_dev->dev_configured = 0;
+
+	/* cancel pending polls */
+	if (qdma_dev->is_master)
+		rte_eal_alarm_cancel(qdma_check_errors, (void *)dev);
 
 	return 0;
 }
@@ -212,9 +334,61 @@ int qdma_dev_close(struct rte_eth_dev *dev)
  */
 int qdma_dev_reset(struct rte_eth_dev *dev)
 {
-	(void)dev;
-
-	return 0;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint32_t vf_device_count = 0;
+	uint32_t i = 0;
+	int ret = 0;
+
+	/* Get the number of active VFs for this PF device */
+	vf_device_count = qdma_dev->vf_online_count;
+	qdma_dev->reset_in_progress = 1;
+
+	/* Uninitialize PCI device */
+	ret = qdma_eth_dev_uninit(dev);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) uninitialization failed: %d\n",
+			qdma_dev->func_id, ret);
+		return -1;
+	}
+
+	/* Initialize PCI device */
+	ret = qdma_eth_dev_init(dev);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) initialization failed: %d\n",
+			qdma_dev->func_id, ret);
+		return -1;
+	}
+
+	/* Send "PF_RESET_DONE" mailbox message from PF to all its VFs,
+	 * so that VFs can come online again
+	 */
+	for (i = 0; i < pci_dev->max_vfs; i++) {
+		if (qdma_dev->vfinfo[i].func_id == QDMA_FUNC_ID_INVALID)
+			continue;
+	}
+
+	/* Wait for a maximum of 60 secs for all the VFs that were
+	 * active before the PF reset to come back online
+	 */
+	i = 0;
+	while (i < RESET_TIMEOUT) {
+		if (qdma_dev->vf_online_count == vf_device_count) {
+			PMD_DRV_LOG(INFO,
+				"%s: Reset completed for PF-%d(DEVFN)\n",
+				__func__, qdma_dev->func_id);
+			break;
+		}
+		rte_delay_ms(1);
+		i++;
+	}
+
+	if (i >= RESET_TIMEOUT) {
+		PMD_DRV_LOG(ERR, "%s: Failed reset for PF-%d(DEVFN)\n",
+			__func__, qdma_dev->func_id);
+	}
+
+	return ret;
 }
 
 /**
@@ -228,7 +402,113 @@ int qdma_dev_reset(struct rte_eth_dev *dev)
  */
 int qdma_dev_configure(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint16_t qid = 0;
+	int ret = 0, queue_base = -1;
+	uint8_t stat_id;
+
+	PMD_DRV_LOG(INFO, "Configure the qdma engines\n");
+
+	qdma_dev->qsets_en = RTE_MAX(dev->data->nb_rx_queues,
+					dev->data->nb_tx_queues);
+	if (qdma_dev->qsets_en > qdma_dev->dev_cap.num_qs) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) Error: Number of Queues to be"
+				" configured are greater than the queues"
+				" supported by the hardware\n",
+				qdma_dev->func_id);
+		qdma_dev->qsets_en = 0;
+		return -1;
+	}
+
+	/* Request queue base from the resource manager */
+	ret = qdma_dev_update(qdma_dev->dma_device_index, qdma_dev->func_id,
+			qdma_dev->qsets_en, &queue_base);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) queue allocation failed: %d\n",
+			qdma_dev->func_id, ret);
+		return -1;
+	}
+
+	ret = qdma_dev_qinfo_get(qdma_dev->dma_device_index, qdma_dev->func_id,
+				(int *)&qdma_dev->queue_base,
+				&qdma_dev->qsets_en);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "%s: Error %d querying qbase\n",
+				__func__, ret);
+		return -1;
+	}
+	PMD_DRV_LOG(INFO, "Bus: 0x%x, PF-%d(DEVFN) queue_base: %d\n",
+		qdma_dev->dma_device_index,
+		qdma_dev->func_id,
+		qdma_dev->queue_base);
+
+	qdma_dev->q_info = rte_zmalloc("qinfo", sizeof(struct queue_info) *
+					(qdma_dev->qsets_en), 0);
+	if (!qdma_dev->q_info) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) Cannot allocate "
+				"memory for queue info\n", qdma_dev->func_id);
+		return (-ENOMEM);
+	}
+
+	/* Reserve memory for cmptq ring pointers. The number of
+	 * completion queues is at most max(rx queues, tx queues).
+	 */
+	qdma_dev->cmpt_queues = rte_zmalloc("cmpt_queues",
+					    sizeof(qdma_dev->cmpt_queues[0]) *
+						qdma_dev->qsets_en,
+						RTE_CACHE_LINE_SIZE);
+	if (qdma_dev->cmpt_queues == NULL) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) cmpt ring pointers memory "
+				"allocation failed:\n", qdma_dev->func_id);
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+		return -(ENOMEM);
+	}
+
+	for (qid = 0 ; qid < qdma_dev->qsets_en; qid++) {
+		/* Initialize queue_modes to all 1's (i.e. streaming) */
+		qdma_dev->q_info[qid].queue_mode = RTE_PMD_QDMA_STREAMING_MODE;
+
+		/* Disable the cmpt over flow check by default */
+		qdma_dev->q_info[qid].dis_cmpt_ovf_chk = 0;
+
+		qdma_dev->q_info[qid].trigger_mode = qdma_dev->trigger_mode;
+		qdma_dev->q_info[qid].timer_count =
+					qdma_dev->timer_count;
+	}
+
+	for (qid = 0 ; qid < dev->data->nb_rx_queues; qid++) {
+		qdma_dev->q_info[qid].cmpt_desc_sz = qdma_dev->cmpt_desc_len;
+		qdma_dev->q_info[qid].rx_bypass_mode =
+						qdma_dev->c2h_bypass_mode;
+		qdma_dev->q_info[qid].en_prefetch = qdma_dev->en_desc_prefetch;
+		qdma_dev->q_info[qid].immediate_data_state = 0;
+	}
+
+	for (qid = 0 ; qid < dev->data->nb_tx_queues; qid++)
+		qdma_dev->q_info[qid].tx_bypass_mode =
+						qdma_dev->h2c_bypass_mode;
+	for (stat_id = 0, qid = 0;
+		stat_id < RTE_ETHDEV_QUEUE_STAT_CNTRS;
+		stat_id++, qid++) {
+		/* Initialize map with qid same as stat_id */
+		qdma_dev->tx_qid_statid_map[stat_id] =
+			(qid < dev->data->nb_tx_queues) ? qid : -1;
+		qdma_dev->rx_qid_statid_map[stat_id] =
+			(qid < dev->data->nb_rx_queues) ? qid : -1;
+	}
+
+	ret = qdma_pf_fmap_prog(dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "FMAP programming failed\n");
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+		rte_free(qdma_dev->cmpt_queues);
+		qdma_dev->cmpt_queues = NULL;
+		return ret;
+	}
+
+	qdma_dev->dev_configured = 1;
 
 	return 0;
 }
diff --git a/drivers/net/qdma/qdma_devops.h b/drivers/net/qdma/qdma_devops.h
index 240fa6b60c..c0f903f1cf 100644
--- a/drivers/net/qdma/qdma_devops.h
+++ b/drivers/net/qdma/qdma_devops.h
@@ -13,6 +13,29 @@ extern "C" {
 /** @defgroup dpdk_devops_func Functions
  */
 
+/**
+ * DPDK callback to register an Ethernet PCIe device.
+ *
+ * The following actions are performed by this function:
+ *  - Parse and validate device arguments
+ *  - Identify PCIe BARs present in the device
+ *  - Register device operations
+ *  - Enable MM C2H and H2C channels
+ *  - Register PCIe device with Queue Resource Manager
+ *  - Program the QDMA IP global registers (by 1st PF that was probed)
+ *  - Enable HW errors and launch QDMA HW error monitoring thread
+ *    (by 1st PF that was probed)
+ *  - If VF is enabled, then enable Mailbox interrupt and register
+ *    Rx message handling function as interrupt handler
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_eth_dev_init(struct rte_eth_dev *dev);
+
 /**
  * DPDK callback for Ethernet device configuration.
  *
@@ -480,6 +503,25 @@ int qdma_vf_dev_close(struct rte_eth_dev *dev);
  */
 int qdma_dev_reset(struct rte_eth_dev *dev);
 
+/**
+ * DPDK callback to deregister a PCI device.
+ *
+ * The following actions are performed by this function:
+ *  - Flushes out pending actions from the Tx Mailbox List
+ *  - Terminate Tx Mailbox thread
+ *  - Disable Mailbox interrupt and unregister interrupt handler
+ *  - Unregister PCIe device from Queue Resource Manager
+ *  - Cancel QDMA HW error monitoring thread if created by this device
+ *  - Disable MM C2H and H2C channels
+ *
+ * @param dev Pointer to Ethernet device structure
+ *
+ * @return 0 on success, < 0 on failure
+ * @ingroup dpdk_devops_func
+ *
+ */
+int qdma_eth_dev_uninit(struct rte_eth_dev *dev);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index 79aac4aa60..cc1e8eee71 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -333,7 +333,7 @@ static int get_max_pci_bus_num(uint8_t start_bus, uint8_t *end_bus)
  * @return
  *   0 on success, negative errno value on failure.
  */
-static int qdma_eth_dev_init(struct rte_eth_dev *dev)
+int qdma_eth_dev_init(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *dma_priv;
 	uint8_t *baseaddr;
@@ -578,7 +578,7 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
  * @return
  *   0 on success, negative errno value on failure.
  */
-static int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
+int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
 
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
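
One application-side detail worth noting for the reset path above: ethdev
delivers RTE_ETH_EVENT_INTR_RESET when a port asks to be reset, and the
application is expected to invoke rte_eth_dev_reset() itself. A sketch of
wiring that up; the callback name is illustrative, while the registration
call is the standard ethdev API:

#include <rte_ethdev.h>

static int
reset_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	       void *cb_arg, void *ret_param)
{
	(void)type;
	(void)cb_arg;
	(void)ret_param;
	/* re-enters the PMD through qdma_dev_reset() */
	return rte_eth_dev_reset(port_id);
}

/* at initialization time:
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
 *				 reset_event_cb, NULL);
 */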

* [RFC PATCH 12/29] net/qdma: add routine for Rx queue initialization
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (10 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 11/29] net/qdma: add configure close and reset ethdev ops Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 13/29] net/qdma: add callback support for Rx queue count Aman Kumar
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

define routines to handle rx queue related ops.
this patch adds support for the rte_eth_dev_rx_queue*
APIs in this PMD; a usage sketch follows.
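
The following hypothetical fragment shows the application call that lands
in the new Rx setup routine; the pool name and sizes are illustrative
assumptions, not values from this patch:

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Hypothetical sketch, not part of the patch. */
static int setup_rx(uint16_t port_id)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("qdma_rx_pool", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		return -ENOMEM;

	/* dispatches to qdma_dev_rx_queue_setup(); the PMD allocates
	 * the C2H ring and, in ST mode, the completion (CMPT) ring
	 */
	return rte_eth_rx_queue_setup(port_id, 0, 1024,
			rte_eth_dev_socket_id(port_id), NULL, mp);
}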

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build   |   2 +
 drivers/net/qdma/qdma.h        |  74 +++-
 drivers/net/qdma/qdma_common.c | 157 ++++++++
 drivers/net/qdma/qdma_devops.c | 684 ++++++++++++++++++++++++++++++++-
 drivers/net/qdma/qdma_rxtx.c   | 208 ++++++++++
 drivers/net/qdma/qdma_rxtx.h   |  20 +
 drivers/net/qdma/qdma_user.c   | 188 +++++++++
 drivers/net/qdma/qdma_user.h   | 225 +++++++++++
 8 files changed, 1543 insertions(+), 15 deletions(-)
 create mode 100644 drivers/net/qdma/qdma_rxtx.c
 create mode 100644 drivers/net/qdma/qdma_rxtx.h
 create mode 100644 drivers/net/qdma/qdma_user.c
 create mode 100644 drivers/net/qdma/qdma_user.h

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index 858d981002..e2da7f25ec 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -23,6 +23,8 @@ sources = files(
         'qdma_common.c',
         'qdma_devops.c',
         'qdma_ethdev.c',
+        'qdma_user.c',
+        'qdma_rxtx.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
         'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 7314af71d7..5992473b33 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -16,7 +16,9 @@
 #include <rte_memzone.h>
 #include <linux/pci.h>
 
+#include "qdma_user.h"
 #include "qdma_resource_mgmt.h"
+#include "qdma_access_common.h"
 #include "rte_pmd_qdma.h"
 #include "qdma_log.h"
 
@@ -31,13 +33,27 @@
 #define QDMA_MAX_BURST_SIZE (128)
 #define QDMA_MIN_RXBUFF_SIZE	(256)
 
+/* Descriptor Rings aligned to 4KB boundaries - only supported value */
+#define QDMA_ALIGN	(4096)
+
 #define DEFAULT_TIMER_CNT_TRIG_MODE_TIMER	(5)
 #define DEFAULT_TIMER_CNT_TRIG_MODE_COUNT_TIMER	(30)
 
+#define MIN_RX_PIDX_UPDATE_THRESHOLD (1)
+#define MIN_TX_PIDX_UPDATE_THRESHOLD (1)
+#define DEFAULT_MM_CMPT_CNT_THRESHOLD	(2)
+
 #define WB_TIMEOUT		(100000)
 #define RESET_TIMEOUT		(60000)
 #define SHUTDOWN_TIMEOUT	(60000)
 
+#define QDMA_MAX_BUFLEN     (2048 * 10)
+
+#ifdef spin_lock_init
+#undef spin_lock_init
+#endif
+#define spin_lock_init(sl) rte_spinlock_init(sl)
+
 /* Completion Context config */
 #define CMPT_DEFAULT_COLOR_BIT           (1)
 #define CMPT_CNTXT_DESC_SIZE_8B          (0)
@@ -90,6 +106,7 @@ struct qdma_pkt_stats {
 struct qdma_cmpt_queue {
 	struct qdma_ul_cmpt_ring *cmpt_ring;
 	struct wb_status    *wb_status;
+	struct qdma_q_cmpt_cidx_reg_info cmpt_cidx_info;
 	struct rte_eth_dev	*dev;
 
 	uint16_t	cmpt_desc_len;
@@ -127,7 +144,8 @@ struct qdma_rx_queue {
 	uint16_t		nb_rx_cmpt_desc;
 	uint32_t		queue_id; /* RX queue index. */
 	uint64_t		mbuf_initializer; /* value to init mbufs */
-
+	struct qdma_q_pidx_reg_info	q_pidx_info;
+	struct qdma_q_cmpt_cidx_reg_info cmpt_cidx_info;
 	uint16_t		port_id; /* Device port identifier. */
 	uint8_t			status:1;
 	uint8_t			err:1;
@@ -138,7 +156,8 @@ struct qdma_rx_queue {
 	uint8_t			en_bypass:1;
 	uint8_t			en_bypass_prefetch:1;
 	uint8_t			dis_overflow_check:1;
-
+	union qdma_ul_st_cmpt_ring cmpt_data[QDMA_MAX_BURST_SIZE];
+	enum rte_pmd_qdma_bypass_desc_len	bypass_desc_sz:7;
 	uint8_t			func_id; /* PCIe function id. */
 	uint32_t		ep_addr;
 
@@ -152,6 +171,19 @@ struct qdma_rx_queue {
 	const struct rte_memzone *rx_mz;
 	/* C2H stream mode, completion descriptor result */
 	const struct rte_memzone *rx_cmpt_mz;
+
+#ifdef QDMA_LATENCY_OPTIMIZED
+	/* pend_pkt_moving_avg: average rate of packets received */
+	unsigned int pend_pkt_moving_avg;
+	/* pend_pkt_avg_thr_hi: higher average threshold */
+	unsigned int pend_pkt_avg_thr_hi;
+	/* pend_pkt_avg_thr_lo: lower average threshold */
+	unsigned int pend_pkt_avg_thr_lo;
+	/* sorted_c2h_cntr_idx: sorted c2h counter index */
+	unsigned char sorted_c2h_cntr_idx;
+	/* c2h_cntr_monitor_cnt: c2h counter stagnant monitor count */
+	unsigned char c2h_cntr_monitor_cnt;
+#endif /* QDMA_LATENCY_OPTIMIZED */
 };
 
 /**
@@ -197,6 +229,8 @@ struct queue_info {
 	uint8_t		immediate_data_state:1;
 	uint8_t		dis_cmpt_ovf_chk:1;
 	uint8_t		en_prefetch:1;
+	enum rte_pmd_qdma_bypass_desc_len rx_bypass_desc_sz:7;
+	enum rte_pmd_qdma_bypass_desc_len tx_bypass_desc_sz:7;
 	uint8_t		timer_count;
 	int8_t		trigger_mode;
 };
@@ -244,6 +278,13 @@ struct qdma_pci_dev {
 	struct queue_info *q_info;
 	uint8_t init_q_range;
 
+	uint32_t g_ring_sz[QDMA_NUM_RING_SIZES];
+	uint32_t g_c2h_cnt_th[QDMA_NUM_C2H_COUNTERS];
+	uint32_t g_c2h_buf_sz[QDMA_NUM_C2H_BUFFER_SIZES];
+	uint32_t g_c2h_timer_cnt[QDMA_NUM_C2H_TIMERS];
+#ifdef QDMA_LATENCY_OPTIMIZED
+	uint32_t sorted_idx_c2h_cnt_th[QDMA_NUM_C2H_COUNTERS];
+#endif /* QDMA_LATENCY_OPTIMIZED */
 	void	**cmpt_queues;
 	/* Pointer to QDMA access layer function pointers */
 	struct qdma_hw_access *hw_access;
@@ -256,10 +297,39 @@ struct qdma_pci_dev {
 };
 
 void qdma_dev_ops_init(struct rte_eth_dev *dev);
+int qdma_pf_csr_read(struct rte_eth_dev *dev);
+
+uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len);
+
+int qdma_init_rx_queue(struct qdma_rx_queue *rxq);
+void qdma_reset_rx_queue(struct qdma_rx_queue *rxq);
+
+void qdma_clr_rx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
+				uint32_t mode);
+void qdma_inv_rx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
+				uint32_t mode);
 int qdma_identify_bars(struct rte_eth_dev *dev);
 int qdma_get_hw_version(struct rte_eth_dev *dev);
 
+int index_of_array(uint32_t *arr, uint32_t n, uint32_t element);
+
 int qdma_check_kvargs(struct rte_devargs *devargs,
 			struct qdma_pci_dev *qdma_dev);
+
+static inline const
+struct rte_memzone *qdma_zone_reserve(struct rte_eth_dev *dev,
+					const char *ring_name,
+					uint32_t queue_id,
+					uint32_t ring_size,
+					int socket_id)
+{
+	char z_name[RTE_MEMZONE_NAMESIZE];
+	snprintf(z_name, sizeof(z_name), "%s%s%d_%u",
+			dev->device->driver->name, ring_name,
+			dev->data->port_id, queue_id);
+	return rte_memzone_reserve_aligned(z_name, (uint64_t)ring_size,
+						socket_id, 0, QDMA_ALIGN);
+}
+
 void qdma_check_errors(void *arg);
 #endif /* ifndef __QDMA_H__ */
diff --git a/drivers/net/qdma/qdma_common.c b/drivers/net/qdma/qdma_common.c
index 4f50be5b06..d39e642008 100644
--- a/drivers/net/qdma/qdma_common.c
+++ b/drivers/net/qdma/qdma_common.c
@@ -15,6 +15,163 @@
 #include <fcntl.h>
 #include <unistd.h>
 
+void qdma_reset_rx_queue(struct qdma_rx_queue *rxq)
+{
+	uint32_t i;
+	uint32_t sz;
+
+	rxq->rx_tail = 0;
+	rxq->q_pidx_info.pidx = 0;
+
+	/* Zero out HW ring memory */
+	if (rxq->st_mode) {  /* ST-mode */
+		sz = rxq->cmpt_desc_len;
+		for (i = 0; i < (sz * rxq->nb_rx_cmpt_desc); i++)
+			((volatile char *)rxq->cmpt_ring)[i] = 0;
+
+		sz = sizeof(struct qdma_ul_st_c2h_desc);
+		for (i = 0; i < (sz * rxq->nb_rx_desc); i++)
+			((volatile char *)rxq->rx_ring)[i] = 0;
+
+	} else {
+		sz = sizeof(struct qdma_ul_mm_desc);
+		for (i = 0; i < (sz * rxq->nb_rx_desc); i++)
+			((volatile char *)rxq->rx_ring)[i] = 0;
+	}
+
+	/* Initialize SW ring entries */
+	for (i = 0; i < rxq->nb_rx_desc; i++)
+		rxq->sw_ring[i] = NULL;
+}
+
+void qdma_inv_rx_queue_ctxts(struct rte_eth_dev *dev,
+			     uint32_t qid, uint32_t mode)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_descq_prefetch_ctxt q_prefetch_ctxt;
+	struct qdma_descq_cmpt_ctxt q_cmpt_ctxt;
+	struct qdma_descq_hw_ctxt q_hw_ctxt;
+	struct qdma_descq_credit_ctxt q_credit_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	hw_access->qdma_sw_ctx_conf(dev, 1, qid, &q_sw_ctxt,
+			QDMA_HW_ACCESS_INVALIDATE);
+	hw_access->qdma_hw_ctx_conf(dev, 1, qid, &q_hw_ctxt,
+			QDMA_HW_ACCESS_INVALIDATE);
+	if (mode) {  /* ST-mode */
+		hw_access->qdma_pfetch_ctx_conf(dev, qid,
+			&q_prefetch_ctxt, QDMA_HW_ACCESS_INVALIDATE);
+		hw_access->qdma_cmpt_ctx_conf(dev, qid,
+			&q_cmpt_ctxt, QDMA_HW_ACCESS_INVALIDATE);
+		hw_access->qdma_credit_ctx_conf(dev, 1, qid,
+			&q_credit_ctxt, QDMA_HW_ACCESS_INVALIDATE);
+	}
+}
+
+/**
+ * Clears the Rx queue contexts.
+ *
+ * @param dev Pointer to Ethernet device structure.
+ * @param qid Rx queue index.
+ * @param mode Queue mode (nonzero for ST-mode).
+ *
+ * @return Nothing.
+ */
+void qdma_clr_rx_queue_ctxts(struct rte_eth_dev *dev,
+			     uint32_t qid, uint32_t mode)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_descq_prefetch_ctxt q_prefetch_ctxt;
+	struct qdma_descq_cmpt_ctxt q_cmpt_ctxt;
+	struct qdma_descq_hw_ctxt q_hw_ctxt;
+	struct qdma_descq_credit_ctxt q_credit_ctxt;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	hw_access->qdma_sw_ctx_conf(dev, 1, qid, &q_sw_ctxt,
+			QDMA_HW_ACCESS_CLEAR);
+	hw_access->qdma_hw_ctx_conf(dev, 1, qid, &q_hw_ctxt,
+			QDMA_HW_ACCESS_CLEAR);
+	if (mode) {  /* ST-mode */
+		hw_access->qdma_pfetch_ctx_conf(dev, qid,
+			&q_prefetch_ctxt, QDMA_HW_ACCESS_CLEAR);
+		hw_access->qdma_cmpt_ctx_conf(dev, qid,
+			&q_cmpt_ctxt, QDMA_HW_ACCESS_CLEAR);
+		hw_access->qdma_credit_ctx_conf(dev, 1, qid,
+			&q_credit_ctxt, QDMA_HW_ACCESS_CLEAR);
+	}
+}
+
+int qdma_init_rx_queue(struct qdma_rx_queue *rxq)
+{
+	struct rte_mbuf *mb;
+	void *obj = NULL;
+	uint64_t phys_addr;
+	uint16_t i;
+	struct qdma_ul_st_c2h_desc *rx_ring_st = NULL;
+
+	/* allocate new buffers for the Rx descriptor ring */
+	if (rxq->st_mode) {  /* ST-mode */
+		rx_ring_st = (struct qdma_ul_st_c2h_desc *)rxq->rx_ring;
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: queue id %d, mbuf_avail_count =%d,"
+				"mbuf_in_use_count = %d",
+				__func__, __LINE__, rxq->queue_id,
+				rte_mempool_avail_count(rxq->mb_pool),
+				rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+		for (i = 0; i < (rxq->nb_rx_desc - 2); i++) {
+			if (rte_mempool_get(rxq->mb_pool, &obj) != 0) {
+				PMD_DRV_LOG(ERR, "qdma-start-rx-queue(): "
+						"rte_mempool_get: failed");
+				goto fail;
+			}
+
+			if (obj != NULL) {
+				mb = obj;
+			} else {
+				PMD_DRV_LOG(ERR, "%s(): %d: qid %d, rte_mempool_get failed",
+				__func__, __LINE__, rxq->queue_id);
+				goto fail;
+			}
+
+			phys_addr = (uint64_t)mb->buf_iova +
+				     RTE_PKTMBUF_HEADROOM;
+
+			mb->data_off = RTE_PKTMBUF_HEADROOM;
+			rxq->sw_ring[i] = mb;
+			rx_ring_st[i].dst_addr = phys_addr;
+		}
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: qid %d, mbuf_avail_count = %d,"
+				"mbuf_in_use_count = %d",
+				__func__, __LINE__, rxq->queue_id,
+				rte_mempool_avail_count(rxq->mb_pool),
+				rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+	}
+
+	/* initialize tail */
+	rxq->rx_tail = 0;
+
+	return 0;
+fail:
+	return -ENOMEM;
+}
+
+/* Utility function to find index of an element in an array */
+int index_of_array(uint32_t *arr, uint32_t n, uint32_t element)
+{
+	int index = 0;
+
+	for (index = 0; (uint32_t)index < n; index++) {
+		if (*(arr + index) == element)
+			return index;
+	}
+	return -1;
+}
+
 static int pfetch_check_handler(__rte_unused const char *key,
 					const char *value,  void *opaque)
 {
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 2dd76e82c3..017dcf39ff 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -26,6 +26,92 @@
 #include "qdma_platform.h"
 #include "qdma_devops.h"
 
+#ifdef QDMA_LATENCY_OPTIMIZED
+static void qdma_sort_c2h_cntr_th_values(struct qdma_pci_dev *qdma_dev)
+{
+	uint8_t i, idx = 0, j = 0;
+	uint8_t c2h_cntr_val = qdma_dev->g_c2h_cnt_th[0];
+	uint8_t least_max = 0;
+	int ref_idx = -1;
+
+get_next_idx:
+	for (i = 0; i < QDMA_NUM_C2H_COUNTERS; i++) {
+		if (ref_idx >= 0 && ref_idx == i)
+			continue;
+		if (qdma_dev->g_c2h_cnt_th[i] < least_max)
+			continue;
+		c2h_cntr_val = qdma_dev->g_c2h_cnt_th[i];
+		idx = i;
+		break;
+	}
+	for (; i < QDMA_NUM_C2H_COUNTERS; i++) {
+		if (ref_idx >= 0 && ref_idx == i)
+			continue;
+		if (qdma_dev->g_c2h_cnt_th[i] < least_max)
+			continue;
+		if (c2h_cntr_val >= qdma_dev->g_c2h_cnt_th[i]) {
+			c2h_cntr_val = qdma_dev->g_c2h_cnt_th[i];
+			idx = i;
+		}
+	}
+	qdma_dev->sorted_idx_c2h_cnt_th[j] = idx;
+	ref_idx = idx;
+	j++;
+	idx = j;
+	least_max = c2h_cntr_val;
+	if (j < QDMA_NUM_C2H_COUNTERS)
+		goto get_next_idx;
+}
+#endif /* QDMA_LATENCY_OPTIMIZED */
+
+int qdma_pf_csr_read(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	ret = hw_access->qdma_global_csr_conf(dev, 0,
+				QDMA_NUM_RING_SIZES, qdma_dev->g_ring_sz,
+		QDMA_CSR_RING_SZ, QDMA_HW_ACCESS_READ);
+	if (ret != QDMA_SUCCESS)
+		PMD_DRV_LOG(ERR, "qdma_global_csr_conf for ring size "
+				  "returned %d", ret);
+	if (qdma_dev->dev_cap.st_en || qdma_dev->dev_cap.mm_cmpt_en) {
+		ret = hw_access->qdma_global_csr_conf(dev, 0,
+				QDMA_NUM_C2H_TIMERS, qdma_dev->g_c2h_timer_cnt,
+		QDMA_CSR_TIMER_CNT, QDMA_HW_ACCESS_READ);
+		if (ret != QDMA_SUCCESS)
+			PMD_DRV_LOG(ERR, "qdma_global_csr_conf for timer count "
+					  "returned %d", ret);
+
+		ret = hw_access->qdma_global_csr_conf(dev, 0,
+				QDMA_NUM_C2H_COUNTERS, qdma_dev->g_c2h_cnt_th,
+		QDMA_CSR_CNT_TH, QDMA_HW_ACCESS_READ);
+		if (ret != QDMA_SUCCESS)
+			PMD_DRV_LOG(ERR, "qdma_global_csr_conf for counter threshold "
+					  "returned %d", ret);
+#ifdef QDMA_LATENCY_OPTIMIZED
+		qdma_sort_c2h_cntr_th_values(qdma_dev);
+#endif /* QDMA_LATENCY_OPTIMIZED */
+	}
+
+	if (qdma_dev->dev_cap.st_en) {
+		ret = hw_access->qdma_global_csr_conf(dev, 0,
+				QDMA_NUM_C2H_BUFFER_SIZES,
+				qdma_dev->g_c2h_buf_sz,
+				QDMA_CSR_BUF_SZ,
+				QDMA_HW_ACCESS_READ);
+		if (ret != QDMA_SUCCESS)
+			PMD_DRV_LOG(ERR, "qdma_global_csr_conf for buffer sizes "
+					  "returned %d", ret);
+	}
+
+	if (ret < 0)
+		return qdma_dev->hw_access->qdma_get_error_code(ret);
+
+	return ret;
+}
+
 static int qdma_pf_fmap_prog(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
@@ -45,6 +131,47 @@ static int qdma_pf_fmap_prog(struct rte_eth_dev *dev)
 	return ret;
 }
 
+uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len size)
+{
+	uint8_t ret;
+	switch (size) {
+	case RTE_PMD_QDMA_BYPASS_DESC_LEN_8B:
+		ret = 0;
+		break;
+	case RTE_PMD_QDMA_BYPASS_DESC_LEN_16B:
+		ret = 1;
+		break;
+	case RTE_PMD_QDMA_BYPASS_DESC_LEN_32B:
+		ret = 2;
+		break;
+	case RTE_PMD_QDMA_BYPASS_DESC_LEN_64B:
+		ret = 3;
+		break;
+	default:
+		/* Suppress compiler warnings */
+		ret = 0;
+	}
+	return ret;
+}
+
+static inline int
+qdma_rxq_default_mbuf_init(struct qdma_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb = { .buf_addr = 0 };
+
+	mb.nb_segs = 1;
+	mb.data_off = RTE_PKTMBUF_HEADROOM;
+	mb.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb, 1);
+
+	/* prevent compiler reordering */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
 /**
  * DPDK callback to configure a RX queue.
  *
@@ -72,14 +199,355 @@ int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			    const struct rte_eth_rxconf *rx_conf,
 			    struct rte_mempool *mb_pool)
 {
-	(void)dev;
-	(void)rx_queue_id;
-	(void)nb_rx_desc;
-	(void)socket_id;
-	(void)rx_conf;
-	(void)mb_pool;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_rx_queue *rxq = NULL;
+	struct qdma_ul_mm_desc *rx_ring_mm;
+	uint32_t sz;
+	uint8_t  *rx_ring_bypass;
+	int err = 0;
+
+	PMD_DRV_LOG(INFO, "Configuring Rx queue id:%d\n", rx_queue_id);
+
+	if (nb_rx_desc == 0) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor ring size %d\n",
+				nb_rx_desc);
+		return -EINVAL;
+	}
+
+	if (!qdma_dev->dev_configured) {
+		PMD_DRV_LOG(ERR,
+			"Device for Rx queue id %d is not configured yet\n",
+			rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (!qdma_dev->is_vf) {
+		err = qdma_dev_increment_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_C2H);
+		if (err != QDMA_SUCCESS)
+			return -EINVAL;
+
+		if (qdma_dev->q_info[rx_queue_id].queue_mode ==
+				RTE_PMD_QDMA_STREAMING_MODE) {
+			err = qdma_dev_increment_active_queue
+						(qdma_dev->dma_device_index,
+						qdma_dev->func_id,
+						QDMA_DEV_Q_TYPE_CMPT);
+			if (err != QDMA_SUCCESS) {
+				qdma_dev_decrement_active_queue
+						(qdma_dev->dma_device_index,
+						qdma_dev->func_id,
+						QDMA_DEV_Q_TYPE_C2H);
+				return -EINVAL;
+			}
+		}
+	}
+	if (!qdma_dev->init_q_range) {
+		if (!qdma_dev->is_vf) {
+			err = qdma_pf_csr_read(dev);
+			if (err < 0)
+				goto rx_setup_err;
+		}
+		qdma_dev->init_q_range = 1;
+	}
+
+	/* allocate rx queue data structure */
+	rxq = rte_zmalloc_socket("QDMA_RxQ", sizeof(struct qdma_rx_queue),
+						RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "Unable to allocate structure rxq of "
+				"size %d\n",
+				(int)(sizeof(struct qdma_rx_queue)));
+		err = -ENOMEM;
+		goto rx_setup_err;
+	}
+
+	rxq->queue_id = rx_queue_id;
+	rxq->port_id = dev->data->port_id;
+	rxq->func_id = qdma_dev->func_id;
+	rxq->mb_pool = mb_pool;
+	rxq->dev = dev;
+	rxq->st_mode = qdma_dev->q_info[rx_queue_id].queue_mode;
+	rxq->nb_rx_desc = (nb_rx_desc + 1);
+	/* For IP releases <= 2018.2:
+	 * double the completion ring size to avoid running out of
+	 * completion entries while the desc. ring still has free entries
+	 */
+	rxq->nb_rx_cmpt_desc = ((nb_rx_desc * 2) + 1);
+	rxq->en_prefetch = qdma_dev->q_info[rx_queue_id].en_prefetch;
+	rxq->cmpt_desc_len = qdma_dev->q_info[rx_queue_id].cmpt_desc_sz;
+	if (rxq->cmpt_desc_len == RTE_PMD_QDMA_CMPT_DESC_LEN_64B &&
+		!qdma_dev->dev_cap.cmpt_desc_64b) {
+		PMD_DRV_LOG(ERR, "PF-%d(DEVFN) 64B completion entry size is "
+			"not supported in this design\n", qdma_dev->func_id);
+		return -ENOTSUP;
+	}
+	rxq->triggermode = qdma_dev->q_info[rx_queue_id].trigger_mode;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->dump_immediate_data =
+			qdma_dev->q_info[rx_queue_id].immediate_data_state;
+	rxq->dis_overflow_check =
+			qdma_dev->q_info[rx_queue_id].dis_cmpt_ovf_chk;
+
+	if (qdma_dev->q_info[rx_queue_id].rx_bypass_mode ==
+				RTE_PMD_QDMA_RX_BYPASS_CACHE ||
+			qdma_dev->q_info[rx_queue_id].rx_bypass_mode ==
+			 RTE_PMD_QDMA_RX_BYPASS_SIMPLE)
+		rxq->en_bypass = 1;
+	if (qdma_dev->q_info[rx_queue_id].rx_bypass_mode ==
+			RTE_PMD_QDMA_RX_BYPASS_SIMPLE)
+		rxq->en_bypass_prefetch = 1;
+
+	if (qdma_dev->ip_type == EQDMA_SOFT_IP &&
+			qdma_dev->vivado_rel >= QDMA_VIVADO_2020_2) {
+		if (qdma_dev->dev_cap.desc_eng_mode ==
+				QDMA_DESC_ENG_BYPASS_ONLY) {
+			PMD_DRV_LOG(ERR,
+				"Bypass only mode design "
+				"is not supported\n");
+			return -ENOTSUP;
+		}
+
+		if (rxq->en_bypass &&
+				qdma_dev->dev_cap.desc_eng_mode ==
+				QDMA_DESC_ENG_INTERNAL_ONLY) {
+			PMD_DRV_LOG(ERR,
+				"Rx qid %d config in bypass "
+				"mode not supported on "
+				"internal only mode design\n",
+				rx_queue_id);
+			return -ENOTSUP;
+		}
+	}
+
+	if (rxq->en_bypass) {
+		rxq->bypass_desc_sz =
+				qdma_dev->q_info[rx_queue_id].rx_bypass_desc_sz;
+		if (rxq->bypass_desc_sz == RTE_PMD_QDMA_BYPASS_DESC_LEN_64B &&
+						!qdma_dev->dev_cap.sw_desc_64b) {
+			PMD_DRV_LOG(ERR, "PF-%d(DEVFN) C2H bypass descriptor "
+				"size of 64B is not supported in this design\n",
+				qdma_dev->func_id);
+			return -ENOTSUP;
+		}
+	}
+	/* Calculate the ring size index, completion ring size index,
+	 * buffer size index and threshold index.
+	 * If a matching index is not found, fall back to a default index
+	 */
+
+	/* Find C2H queue ring size index */
+	rxq->ringszidx = index_of_array(qdma_dev->g_ring_sz,
+					QDMA_NUM_RING_SIZES, rxq->nb_rx_desc);
+	if (rxq->ringszidx < 0) {
+		PMD_DRV_LOG(ERR, "Expected Ring size %d not found\n",
+				rxq->nb_rx_desc);
+		err = -EINVAL;
+		goto rx_setup_err;
+	}
+
+	/* Find completion ring size index */
+	rxq->cmpt_ringszidx = index_of_array(qdma_dev->g_ring_sz,
+						QDMA_NUM_RING_SIZES,
+						rxq->nb_rx_cmpt_desc);
+	if (rxq->cmpt_ringszidx < 0) {
+		PMD_DRV_LOG(ERR, "Expected completion ring size %d not found\n",
+				rxq->nb_rx_cmpt_desc);
+		err = -EINVAL;
+		goto rx_setup_err;
+	}
+
+	/* Find Threshold index */
+	rxq->threshidx = index_of_array(qdma_dev->g_c2h_cnt_th,
+					QDMA_NUM_C2H_COUNTERS,
+					rx_conf->rx_thresh.wthresh);
+	if (rxq->threshidx < 0) {
+		PMD_DRV_LOG(WARNING, "Expected Threshold %d not found,"
+				" using the value %d at index 7\n",
+				rx_conf->rx_thresh.wthresh,
+				qdma_dev->g_c2h_cnt_th[7]);
+		rxq->threshidx = 7;
+	}
+
+#ifdef QDMA_LATENCY_OPTIMIZED
+	uint8_t next_idx;
+
+	/* Initialize sorted_c2h_cntr_idx */
+	rxq->sorted_c2h_cntr_idx = index_of_array
+					(qdma_dev->sorted_idx_c2h_cnt_th,
+					QDMA_NUM_C2H_COUNTERS,
+					qdma_dev->g_c2h_cnt_th[rxq->threshidx]);
+
+	/* Initialize pend_pkt_moving_avg */
+	rxq->pend_pkt_moving_avg = qdma_dev->g_c2h_cnt_th[rxq->threshidx];
+
+	/* Initialize pend_pkt_avg_thr_hi */
+	if (rxq->sorted_c2h_cntr_idx < (QDMA_NUM_C2H_COUNTERS - 1))
+		next_idx = qdma_dev->sorted_idx_c2h_cnt_th
+						[rxq->sorted_c2h_cntr_idx + 1];
+	else
+		next_idx = qdma_dev->sorted_idx_c2h_cnt_th
+				[rxq->sorted_c2h_cntr_idx];
+
+	rxq->pend_pkt_avg_thr_hi = qdma_dev->g_c2h_cnt_th[next_idx];
+
+	/* Initialize pend_pkt_avg_thr_lo */
+	if (rxq->sorted_c2h_cntr_idx > 0)
+		next_idx = qdma_dev->sorted_idx_c2h_cnt_th
+						[rxq->sorted_c2h_cntr_idx - 1];
+	else
+		next_idx = qdma_dev->sorted_idx_c2h_cnt_th
+				[rxq->sorted_c2h_cntr_idx];
+
+	rxq->pend_pkt_avg_thr_lo = qdma_dev->g_c2h_cnt_th[next_idx];
+#endif /* QDMA_LATENCY_OPTIMIZED */
+
+	/* Find Timer index */
+	rxq->timeridx = index_of_array(qdma_dev->g_c2h_timer_cnt,
+				QDMA_NUM_C2H_TIMERS,
+				qdma_dev->q_info[rx_queue_id].timer_count);
+	if (rxq->timeridx < 0) {
+		PMD_DRV_LOG(WARNING, "Expected timer %d not found, "
+				"using the value %d at index 1\n",
+				qdma_dev->q_info[rx_queue_id].timer_count,
+				qdma_dev->g_c2h_timer_cnt[1]);
+		rxq->timeridx = 1;
+	}
+
+	rxq->rx_buff_size = (uint16_t)
+				(rte_pktmbuf_data_room_size(rxq->mb_pool) -
+				RTE_PKTMBUF_HEADROOM);
+	/* Allocate memory for Rx descriptor ring */
+	if (rxq->st_mode) {
+		if (!qdma_dev->dev_cap.st_en) {
+			PMD_DRV_LOG(ERR, "Streaming mode not enabled "
+					"in the hardware\n");
+			err = -EINVAL;
+			goto rx_setup_err;
+		}
+		/* Find Buffer size index */
+		rxq->buffszidx = index_of_array(qdma_dev->g_c2h_buf_sz,
+						QDMA_NUM_C2H_BUFFER_SIZES,
+						rxq->rx_buff_size);
+		if (rxq->buffszidx < 0) {
+			PMD_DRV_LOG(ERR, "Expected buffer size %d not found\n",
+					rxq->rx_buff_size);
+			err = -EINVAL;
+			goto rx_setup_err;
+		}
+
+		if (rxq->en_bypass &&
+		     rxq->bypass_desc_sz != 0)
+			sz = (rxq->nb_rx_desc) * (rxq->bypass_desc_sz);
+		else
+			sz = (rxq->nb_rx_desc) *
+					sizeof(struct qdma_ul_st_c2h_desc);
+
+		rxq->rx_mz = qdma_zone_reserve(dev, "RxHwRn", rx_queue_id,
+						sz, socket_id);
+		if (!rxq->rx_mz) {
+			PMD_DRV_LOG(ERR, "Unable to allocate rxq->rx_mz "
+					"of size %d\n", sz);
+			err = -ENOMEM;
+			goto rx_setup_err;
+		}
+		rxq->rx_ring = rxq->rx_mz->addr;
+		memset(rxq->rx_ring, 0, sz);
+
+		/* Allocate memory for Rx completion(CMPT) descriptor ring */
+		sz = (rxq->nb_rx_cmpt_desc) * rxq->cmpt_desc_len;
+		rxq->rx_cmpt_mz = qdma_zone_reserve(dev, "RxHwCmptRn",
+						    rx_queue_id, sz, socket_id);
+		if (!rxq->rx_cmpt_mz) {
+			PMD_DRV_LOG(ERR, "Unable to allocate rxq->rx_cmpt_mz "
+					"of size %d\n", sz);
+			err = -ENOMEM;
+			goto rx_setup_err;
+		}
+		rxq->cmpt_ring =
+			(union qdma_ul_st_cmpt_ring *)rxq->rx_cmpt_mz->addr;
+
+		/* Write-back status structure */
+		rxq->wb_status = (struct wb_status *)((uint64_t)rxq->cmpt_ring +
+				 (((uint64_t)rxq->nb_rx_cmpt_desc - 1) *
+				  rxq->cmpt_desc_len));
+		memset(rxq->cmpt_ring, 0, sz);
+	} else {
+		if (!qdma_dev->dev_cap.mm_en) {
+			PMD_DRV_LOG(ERR, "Memory mapped mode not enabled "
+					"in the hardware\n");
+			err = -EINVAL;
+			goto rx_setup_err;
+		}
+
+		if (rxq->en_bypass &&
+			rxq->bypass_desc_sz != 0)
+			sz = (rxq->nb_rx_desc) * (rxq->bypass_desc_sz);
+		else
+			sz = (rxq->nb_rx_desc) * sizeof(struct qdma_ul_mm_desc);
+		rxq->rx_mz = qdma_zone_reserve(dev, "RxHwRn",
+						rx_queue_id, sz, socket_id);
+		if (!rxq->rx_mz) {
+			PMD_DRV_LOG(ERR, "Unable to allocate rxq->rx_mz "
+					"of size %d\n", sz);
+			err = -ENOMEM;
+			goto rx_setup_err;
+		}
+		rxq->rx_ring = rxq->rx_mz->addr;
+		rx_ring_mm = (struct qdma_ul_mm_desc *)rxq->rx_mz->addr;
+		memset(rxq->rx_ring, 0, sz);
+
+		rx_ring_bypass = (uint8_t *)rxq->rx_mz->addr;
+		if (rxq->en_bypass &&
+			rxq->bypass_desc_sz != 0)
+			rxq->wb_status = (struct wb_status *)&
+					(rx_ring_bypass[(rxq->nb_rx_desc - 1) *
+							(rxq->bypass_desc_sz)]);
+		else
+			rxq->wb_status = (struct wb_status *)&
+					 (rx_ring_mm[rxq->nb_rx_desc - 1]);
+	}
+
+	/* allocate memory for RX software ring */
+	sz = (rxq->nb_rx_desc) * sizeof(struct rte_mbuf *);
+	rxq->sw_ring = rte_zmalloc_socket("RxSwRn", sz,
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_ring) {
+		PMD_DRV_LOG(ERR, "Unable to allocate rxq->sw_ring of size %d\n",
+									sz);
+		err = -ENOMEM;
+		goto rx_setup_err;
+	}
+
+	qdma_rxq_default_mbuf_init(rxq);
+
+	dev->data->rx_queues[rx_queue_id] = rxq;
 
 	return 0;
+
+rx_setup_err:
+	if (!qdma_dev->is_vf) {
+		qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+						qdma_dev->func_id,
+						QDMA_DEV_Q_TYPE_C2H);
+
+		if (qdma_dev->q_info[rx_queue_id].queue_mode ==
+				RTE_PMD_QDMA_STREAMING_MODE)
+			qdma_dev_decrement_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_CMPT);
+	}
+	if (rxq) {
+		if (rxq->rx_mz)
+			rte_memzone_free(rxq->rx_mz);
+		if (rxq->sw_ring)
+			rte_free(rxq->sw_ring);
+		rte_free(rxq);
+	}
+	return err;
 }
 
 /**
@@ -524,16 +992,193 @@ int qdma_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
 
 int qdma_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
 {
-	(void)dev;
-	(void)qid;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_rx_queue *rxq;
+	uint32_t queue_base =  qdma_dev->queue_base;
+	uint8_t cmpt_desc_fmt;
+	int err, bypass_desc_sz_idx;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_descq_cmpt_ctxt q_cmpt_ctxt;
+	struct qdma_descq_prefetch_ctxt q_prefetch_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+	memset(&q_sw_ctxt, 0, sizeof(struct qdma_descq_sw_ctxt));
+
+	qdma_reset_rx_queue(rxq);
+	qdma_clr_rx_queue_ctxts(dev, (qid + queue_base), rxq->st_mode);
+
+	bypass_desc_sz_idx = qmda_get_desc_sz_idx(rxq->bypass_desc_sz);
+
+	switch (rxq->cmpt_desc_len) {
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_8B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_16B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_16B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_32B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_32B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_64B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_64B;
+		break;
+	default:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	}
+
+	err = qdma_init_rx_queue(rxq);
+	if (err != 0)
+		return err;
+
+	if (rxq->st_mode) {
+		memset(&q_cmpt_ctxt, 0, sizeof(struct qdma_descq_cmpt_ctxt));
+		memset(&q_prefetch_ctxt, 0,
+				sizeof(struct qdma_descq_prefetch_ctxt));
+
+		q_prefetch_ctxt.bypass = (rxq->en_bypass_prefetch) ? 1 : 0;
+		q_prefetch_ctxt.bufsz_idx = rxq->buffszidx;
+		q_prefetch_ctxt.pfch_en = (rxq->en_prefetch) ? 1 : 0;
+		q_prefetch_ctxt.valid = 1;
+
+#ifdef QDMA_LATENCY_OPTIMIZED
+		q_cmpt_ctxt.full_upd = 1;
+#endif /* QDMA_LATENCY_OPTIMIZED */
+		q_cmpt_ctxt.en_stat_desc = 1;
+		q_cmpt_ctxt.trig_mode = rxq->triggermode;
+		q_cmpt_ctxt.fnc_id = rxq->func_id;
+		q_cmpt_ctxt.counter_idx = rxq->threshidx;
+		q_cmpt_ctxt.timer_idx = rxq->timeridx;
+		q_cmpt_ctxt.color = CMPT_DEFAULT_COLOR_BIT;
+		q_cmpt_ctxt.ringsz_idx = rxq->cmpt_ringszidx;
+		q_cmpt_ctxt.bs_addr = (uint64_t)rxq->rx_cmpt_mz->iova;
+		q_cmpt_ctxt.desc_sz = cmpt_desc_fmt;
+		q_cmpt_ctxt.valid = 1;
+		if (qdma_dev->dev_cap.cmpt_ovf_chk_dis)
+			q_cmpt_ctxt.ovf_chk_dis = rxq->dis_overflow_check;
+
+
+		q_sw_ctxt.desc_sz = SW_DESC_CNTXT_C2H_STREAM_DMA;
+		q_sw_ctxt.frcd_en = 1;
+	} else {
+		q_sw_ctxt.desc_sz = SW_DESC_CNTXT_MEMORY_MAP_DMA;
+		q_sw_ctxt.is_mm = 1;
+		q_sw_ctxt.wbi_chk = 1;
+		q_sw_ctxt.wbi_intvl_en = 1;
+	}
 
+	q_sw_ctxt.fnc_id = rxq->func_id;
+	q_sw_ctxt.qen = 1;
+	q_sw_ctxt.rngsz_idx = rxq->ringszidx;
+	q_sw_ctxt.bypass = rxq->en_bypass;
+	q_sw_ctxt.wbk_en = 1;
+	q_sw_ctxt.ring_bs_addr = (uint64_t)rxq->rx_mz->iova;
+
+	if (rxq->en_bypass &&
+		rxq->bypass_desc_sz != 0)
+		q_sw_ctxt.desc_sz = bypass_desc_sz_idx;
+
+	/* Set SW Context */
+	err = hw_access->qdma_sw_ctx_conf(dev, 1, (qid + queue_base),
+			&q_sw_ctxt, QDMA_HW_ACCESS_WRITE);
+	if (err < 0)
+		return qdma_dev->hw_access->qdma_get_error_code(err);
+
+	if (rxq->st_mode) {
+		/* Set Prefetch Context */
+		err = hw_access->qdma_pfetch_ctx_conf(dev, (qid + queue_base),
+				&q_prefetch_ctxt, QDMA_HW_ACCESS_WRITE);
+		if (err < 0)
+			return qdma_dev->hw_access->qdma_get_error_code(err);
+
+		/* Set Completion Context */
+		err = hw_access->qdma_cmpt_ctx_conf(dev, (qid + queue_base),
+				&q_cmpt_ctxt, QDMA_HW_ACCESS_WRITE);
+		if (err < 0)
+			return qdma_dev->hw_access->qdma_get_error_code(err);
+
+		rte_wmb();
+		/* Enable the status descriptor and load the trigger mode,
+		 * threshold index and timer index passed from the user
+		 */
+
+		rxq->cmpt_cidx_info.counter_idx = rxq->threshidx;
+		rxq->cmpt_cidx_info.timer_idx = rxq->timeridx;
+		rxq->cmpt_cidx_info.trig_mode = rxq->triggermode;
+		rxq->cmpt_cidx_info.wrb_en = 1;
+		rxq->cmpt_cidx_info.wrb_cidx = 0;
+		hw_access->qdma_queue_cmpt_cidx_update(dev, qdma_dev->is_vf,
+			qid, &rxq->cmpt_cidx_info);
+
+		rxq->q_pidx_info.pidx = (rxq->nb_rx_desc - 2);
+		hw_access->qdma_queue_pidx_update(dev, qdma_dev->is_vf, qid,
+				1, &rxq->q_pidx_info);
+	}
+
+	dev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+	rxq->status = RTE_ETH_QUEUE_STATE_STARTED;
 	return 0;
 }
 
 int qdma_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
 {
-	(void)dev;
-	(void)qid;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_rx_queue *rxq;
+	uint32_t queue_base =  qdma_dev->queue_base;
+	int i = 0;
+	int cnt = 0;
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+	rxq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	/* Wait for the queue to receive all packets. */
+	if (rxq->st_mode) {  /** ST-mode **/
+		/* For eqdma, the C2H marker takes care of draining the pipeline */
+		if (!(qdma_dev->ip_type == EQDMA_SOFT_IP)) {
+			while (rxq->wb_status->pidx !=
+					rxq->cmpt_cidx_info.wrb_cidx) {
+				usleep(10);
+				if (cnt++ > 10000)
+					break;
+			}
+		}
+	} else { /* MM mode */
+		while (rxq->wb_status->cidx != rxq->q_pidx_info.pidx) {
+			usleep(10);
+			if (cnt++ > 10000)
+				break;
+		}
+	}
+
+	qdma_inv_rx_queue_ctxts(dev, (qid + queue_base), rxq->st_mode);
+
+	if (rxq->st_mode) {  /* ST-mode */
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: queue id %d, "
+		"mbuf_avail_count = %d, mbuf_in_use_count = %d",
+		__func__, __LINE__, rxq->queue_id,
+		rte_mempool_avail_count(rxq->mb_pool),
+		rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+		for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+			rte_pktmbuf_free(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: queue id %d, "
+		"mbuf_avail_count = %d, mbuf_in_use_count = %d",
+			__func__, __LINE__, rxq->queue_id,
+			rte_mempool_avail_count(rxq->mb_pool),
+			rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+	}
+
+	qdma_reset_rx_queue(rxq);
+
+	dev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
 
 	return 0;
 }
@@ -650,9 +1295,22 @@ void
 qdma_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		     struct rte_eth_rxq_info *qinfo)
 {
-	(void)dev;
-	(void)rx_queue_id;
-	(void)qinfo;
+	struct qdma_pci_dev *dma_priv;
+	struct qdma_rx_queue *rxq = NULL;
+
+	if (!qinfo)
+		return;
+
+	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	memset(qinfo, 0, sizeof(struct rte_eth_rxq_info));
+	qinfo->mp = rxq->mb_pool;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+	qinfo->conf.rx_drop_en = 1;
+	qinfo->conf.rx_thresh.wthresh = dma_priv->g_c2h_cnt_th[rxq->threshidx];
+	qinfo->scattered_rx = 1;
+	qinfo->nb_desc = rxq->nb_rx_desc - 1;
 }
 
 /**
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
new file mode 100644
index 0000000000..15f6661cbf
--- /dev/null
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_cycles.h>
+#include "qdma.h"
+#include "qdma_access_common.h"
+
+#include <fcntl.h>
+#include <unistd.h>
+#include "qdma_rxtx.h"
+#include "qdma_devops.h"
+
+#if defined RTE_ARCH_X86_64
+#include <immintrin.h>
+#include <emmintrin.h>
+#define RTE_QDMA_DESCS_PER_LOOP (2)
+#endif /* RTE_ARCH_X86_64 */
+
+/******** User logic dependent functions start **********/
+#ifdef QDMA_RX_VEC_X86_64
+/* Vector implementation to get packet length from two completion entries */
+static void qdma_ul_get_cmpt_pkt_len_v(void *ul_cmpt_entry, __m128i *data)
+{
+	union qdma_ul_st_cmpt_ring *cmpt_entry1, *cmpt_entry2;
+	__m128i pkt_len_shift = _mm_set_epi64x(0, 4);
+
+	cmpt_entry1 = (union qdma_ul_st_cmpt_ring *)(ul_cmpt_entry);
+	cmpt_entry2 = cmpt_entry1 + 1;
+
+	/* Read desc statuses backwards to avoid race condition */
+	/* Load a pkt desc */
+	data[1] = _mm_set_epi64x(0, cmpt_entry2->data);
+	/* Find the packet length; currently the driver needs
+	 * only the packet length from the completion info
+	 */
+	data[1] = _mm_srl_epi32(data[1], pkt_len_shift);
+
+	/* Load a pkt desc */
+	data[0] = _mm_set_epi64x(0, cmpt_entry1->data);
+	/* Find the packet length; currently the driver needs
+	 * only the packet length from the completion info
+	 */
+	data[0] = _mm_srl_epi32(data[0], pkt_len_shift);
+}
+#endif /* QDMA_RX_VEC_X86_64 */
+
+/******** User logic dependent functions end **********/
+uint16_t qdma_get_rx_queue_id(void *queue_hndl)
+{
+	struct qdma_rx_queue *rxq = (struct qdma_rx_queue *)queue_hndl;
+
+	return rxq->queue_id;
+}
+
+void qdma_get_device_info(void *queue_hndl,
+		enum qdma_device_type *device_type,
+		enum qdma_ip_type *ip_type)
+{
+	struct qdma_rx_queue *rxq = (struct qdma_rx_queue *)queue_hndl;
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+
+	*device_type = (enum qdma_device_type)qdma_dev->device_type;
+	*ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+}
+
+uint32_t get_mm_c2h_ep_addr(void *queue_hndl)
+{
+	struct qdma_rx_queue *rxq = (struct qdma_rx_queue *)queue_hndl;
+
+	return rxq->ep_addr;
+}
+
+uint32_t get_mm_buff_size(void *queue_hndl)
+{
+	struct qdma_rx_queue *rxq = (struct qdma_rx_queue *)queue_hndl;
+
+	return rxq->rx_buff_size;
+}
+
+#ifdef QDMA_LATENCY_OPTIMIZED
+static void adjust_c2h_cntr_avgs(struct qdma_rx_queue *rxq)
+{
+	int i;
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+
+	rxq->pend_pkt_moving_avg =
+		qdma_dev->g_c2h_cnt_th[rxq->cmpt_cidx_info.counter_idx];
+
+	if (rxq->sorted_c2h_cntr_idx == (QDMA_GLOBAL_CSR_ARRAY_SZ - 1))
+		i = qdma_dev->sorted_idx_c2h_cnt_th[rxq->sorted_c2h_cntr_idx];
+	else
+		i = qdma_dev->sorted_idx_c2h_cnt_th
+					[rxq->sorted_c2h_cntr_idx + 1];
+
+	rxq->pend_pkt_avg_thr_hi = qdma_dev->g_c2h_cnt_th[i];
+
+	if (rxq->sorted_c2h_cntr_idx > 0)
+		i = qdma_dev->sorted_idx_c2h_cnt_th
+					[rxq->sorted_c2h_cntr_idx - 1];
+	else
+		i = qdma_dev->sorted_idx_c2h_cnt_th[rxq->sorted_c2h_cntr_idx];
+
+	rxq->pend_pkt_avg_thr_lo = qdma_dev->g_c2h_cnt_th[i];
+
+	PMD_DRV_LOG(DEBUG, "q%u: c2h_cntr_idx = %u %u %u",
+		rxq->queue_id,
+		rxq->cmpt_cidx_info.counter_idx,
+		rxq->pend_pkt_avg_thr_lo,
+		rxq->pend_pkt_avg_thr_hi);
+}
+
+static void incr_c2h_cntr_th(struct qdma_rx_queue *rxq)
+{
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+	unsigned char i, c2h_cntr_idx;
+	unsigned char c2h_cntr_val_new;
+	unsigned char c2h_cntr_val_curr;
+
+	if (rxq->sorted_c2h_cntr_idx ==
+			(QDMA_NUM_C2H_COUNTERS - 1))
+		return;
+
+	rxq->c2h_cntr_monitor_cnt = 0;
+	i = rxq->sorted_c2h_cntr_idx;
+	c2h_cntr_idx = qdma_dev->sorted_idx_c2h_cnt_th[i];
+	c2h_cntr_val_curr = qdma_dev->g_c2h_cnt_th[c2h_cntr_idx];
+	i++;
+	c2h_cntr_idx = qdma_dev->sorted_idx_c2h_cnt_th[i];
+	c2h_cntr_val_new = qdma_dev->g_c2h_cnt_th[c2h_cntr_idx];
+
+	/* Choose the closest counter value */
+	if (c2h_cntr_val_new >= rxq->pend_pkt_moving_avg &&
+		(c2h_cntr_val_new - rxq->pend_pkt_moving_avg) >=
+		(rxq->pend_pkt_moving_avg - c2h_cntr_val_curr))
+		return;
+
+	/* Do not allow the C2H counter value to exceed half of the C2H ring size */
+	if (c2h_cntr_val_new < (qdma_dev->g_ring_sz[rxq->ringszidx] >> 1)) {
+		rxq->cmpt_cidx_info.counter_idx = c2h_cntr_idx;
+		rxq->sorted_c2h_cntr_idx = i;
+		adjust_c2h_cntr_avgs(rxq);
+	}
+}
+
+static void decr_c2h_cntr_th(struct qdma_rx_queue *rxq)
+{
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+	unsigned char i, c2h_cntr_idx;
+	unsigned char c2h_cntr_val_new;
+	unsigned char c2h_cntr_val_curr;
+
+	if (!rxq->sorted_c2h_cntr_idx)
+		return;
+	rxq->c2h_cntr_monitor_cnt = 0;
+	i = rxq->sorted_c2h_cntr_idx;
+	c2h_cntr_idx = qdma_dev->sorted_idx_c2h_cnt_th[i];
+	c2h_cntr_val_curr = qdma_dev->g_c2h_cnt_th[c2h_cntr_idx];
+	i--;
+	c2h_cntr_idx = qdma_dev->sorted_idx_c2h_cnt_th[i];
+
+	c2h_cntr_val_new = qdma_dev->g_c2h_cnt_th[c2h_cntr_idx];
+
+	/* Choose the closest counter value */
+	if (c2h_cntr_val_new <= rxq->pend_pkt_moving_avg &&
+		(rxq->pend_pkt_moving_avg - c2h_cntr_val_new) >=
+		(c2h_cntr_val_curr - rxq->pend_pkt_moving_avg))
+		return;
+
+	rxq->cmpt_cidx_info.counter_idx = c2h_cntr_idx;
+
+	rxq->sorted_c2h_cntr_idx = i;
+	adjust_c2h_cntr_avgs(rxq);
+}
+
+#define MAX_C2H_CNTR_STAGNANT_CNT 16
+static void adapt_update_counter(struct qdma_rx_queue *rxq,
+		uint16_t nb_pkts_avail)
+{
+	/* Add available pkt count and average */
+	rxq->pend_pkt_moving_avg += nb_pkts_avail;
+	rxq->pend_pkt_moving_avg >>= 1;
+
+	/* if avg > hi_th, increase the counter
+	 * if avg < lo_th, decrease the counter
+	 */
+	if (rxq->pend_pkt_avg_thr_hi <= rxq->pend_pkt_moving_avg) {
+		incr_c2h_cntr_th(rxq);
+	} else if (rxq->pend_pkt_avg_thr_lo >=
+				rxq->pend_pkt_moving_avg) {
+		decr_c2h_cntr_th(rxq);
+	} else {
+		rxq->c2h_cntr_monitor_cnt++;
+		if (rxq->c2h_cntr_monitor_cnt == MAX_C2H_CNTR_STAGNANT_CNT) {
+			/* step down the counter value to check whether the
+			 * higher counter threshold is actually
+			 * increasing latency
+			 */
+			decr_c2h_cntr_th(rxq);
+			rxq->c2h_cntr_monitor_cnt = 0;
+		} else {
+			return;
+		}
+	}
+}
+#endif /* QDMA_LATENCY_OPTIMIZED */
diff --git a/drivers/net/qdma/qdma_rxtx.h b/drivers/net/qdma/qdma_rxtx.h
new file mode 100644
index 0000000000..5f902df695
--- /dev/null
+++ b/drivers/net/qdma/qdma_rxtx.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef QDMA_DPDK_RXTX_H_
+#define QDMA_DPDK_RXTX_H_
+
+#include "qdma_access_export.h"
+
+/* Supporting functions for user logic pluggability */
+uint16_t qdma_get_rx_queue_id(void *queue_hndl);
+void qdma_get_device_info(void *queue_hndl,
+		enum qdma_device_type *device_type,
+		enum qdma_ip_type *ip_type);
+struct qdma_ul_st_h2c_desc *get_st_h2c_desc(void *queue_hndl);
+struct qdma_ul_mm_desc *get_mm_h2c_desc(void *queue_hndl);
+uint32_t get_mm_c2h_ep_addr(void *queue_hndl);
+uint32_t get_mm_buff_size(void *queue_hndl);
+
+#endif /* QDMA_DPDK_RXTX_H_ */
diff --git a/drivers/net/qdma/qdma_user.c b/drivers/net/qdma/qdma_user.c
new file mode 100644
index 0000000000..312bb86670
--- /dev/null
+++ b/drivers/net/qdma/qdma_user.c
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include "qdma_user.h"
+#include "qdma_access_common.h"
+#include "qdma_log.h"
+
+#include <fcntl.h>
+#include <unistd.h>
+
+/**
+ * Extract the fields of the given completion entry in the completion ring.
+ *
+ * @param ul_cmpt_entry
+ *   Pointer to completion entry to be extracted.
+ * @param cmpt_info
+ *   Pointer to variable into which the completion entry details are extracted.
+ *
+ * @return
+ *   0 on success and -1 on failure.
+ */
+int qdma_ul_extract_st_cmpt_info(void *ul_cmpt_entry, void *cmpt_info)
+{
+	union qdma_ul_st_cmpt_ring *cmpt_data, *cmpt_desc;
+
+	cmpt_desc = (union qdma_ul_st_cmpt_ring *)(ul_cmpt_entry);
+	cmpt_data = (union qdma_ul_st_cmpt_ring *)(cmpt_info);
+
+	if (unlikely(cmpt_desc->err || cmpt_desc->data_frmt))
+		return -1;
+
+	cmpt_data->data = cmpt_desc->data;
+	if (unlikely(!cmpt_desc->desc_used))
+		cmpt_data->length = 0;
+
+	return 0;
+}
+
+/**
+ * Extract the packet length from the given completion entry.
+ *
+ * @param ul_cmpt_entry
+ *   Pointer to completion entry to be extracted.
+ *
+ * @return
+ *   Packet length
+ */
+uint16_t qdma_ul_get_cmpt_pkt_len(void *ul_cmpt_entry)
+{
+	return ((union qdma_ul_st_cmpt_ring *)ul_cmpt_entry)->length;
+}
+
+/**
+ * Processes the immediate data for the given completion ring entry
+ * and stores it in a file.
+ *
+ * @param qhndl
+ *   Pointer to RX queue handle.
+ * @param cmpt_entry
+ *   Pointer to completion entry to be processed.
+ * @param cmpt_desc_len
+ *   Completion descriptor length.
+ *
+ * @return
+ *   None.
+ */
+int qdma_ul_process_immediate_data_st(void *qhndl, void *cmpt_entry,
+			uint16_t cmpt_desc_len)
+{
+	int ofd;
+	char fln[50];
+#ifndef TEST_64B_DESC_BYPASS
+	uint16_t i = 0;
+	enum qdma_device_type dev_type;
+	enum qdma_ip_type ip_type;
+#else
+	int ret = 0;
+#endif
+	uint16_t queue_id = 0;
+
+	queue_id = qdma_get_rx_queue_id(qhndl);
+	snprintf(fln, sizeof(fln), "q_%d_%s", queue_id,
+			"immediate_data.txt");
+	ofd = open(fln, O_RDWR | O_CREAT | O_APPEND |
+			O_SYNC, 0666);
+	if (ofd < 0) {
+		PMD_DRV_LOG(INFO, "recv on qhndl[%d] CMPT, "
+				"unable to create outfile "
+				"to dump immediate data",
+				queue_id);
+		return ofd;
+	}
+#ifdef TEST_64B_DESC_BYPASS
+	ret = write(ofd, cmpt_entry, cmpt_desc_len);
+	if (ret < cmpt_desc_len)
+		PMD_DRV_LOG(DEBUG, "recv on rxq[%d] CMPT, "
+			"immediate data len: %d, "
+			"written to outfile :%d bytes",
+			 queue_id, cmpt_desc_len,
+			 ret);
+#else
+	qdma_get_device_info(qhndl, &dev_type, &ip_type);
+
+	if (ip_type == QDMA_VERSAL_HARD_IP) {
+		/* ignore the first 20 bits of the length field */
+		dprintf(ofd, "%02x",
+			(*((uint8_t *)cmpt_entry + 2) & 0xF0));
+		for (i = 3; i < cmpt_desc_len; i++)
+			dprintf(ofd, "%02x",
+				*((uint8_t *)cmpt_entry + i));
+	} else {
+		dprintf(ofd, "%02x",
+			(*((uint8_t *)cmpt_entry) & 0xF0));
+		for (i = 1; i < cmpt_desc_len; i++)
+			dprintf(ofd, "%02x",
+				*((uint8_t *)cmpt_entry + i));
+	}
+#endif
+
+	close(ofd);
+	return 0;
+}
+
+/**
+ * Updates the MM C2H descriptor.
+ *
+ * @param qhndl
+ *   Pointer to RX queue handle.
+ * @param mb
+ *   Pointer to memory buffer.
+ * @param desc
+ *   Pointer to descriptor entry.
+ *
+ * @return
+ *   None.
+ */
+int qdma_ul_update_mm_c2h_desc(void *qhndl, struct rte_mbuf *mb, void *desc)
+{
+	struct qdma_ul_mm_desc *desc_info = (struct qdma_ul_mm_desc *)desc;
+
+	desc_info->src_addr = get_mm_c2h_ep_addr(qhndl);
+	/* start the mbuf data pointer at the headroom offset as well */
+	mb->data_off = RTE_PKTMBUF_HEADROOM;
+	/* low 32 bits of the physical address must be 4 KB aligned */
+	desc_info->dst_addr = (uint64_t)mb->buf_iova + RTE_PKTMBUF_HEADROOM;
+	desc_info->dv = 1;
+	desc_info->eop = 1;
+	desc_info->sop = 1;
+	desc_info->len = (int)get_mm_buff_size(qhndl);
+
+	return 0;
+}
+
+/**
+ * Processes the completion data from the given completion entry.
+ *
+ * @param cmpt_entry
+ *   Pointer to completion entry to be processed.
+ * @param cmpt_desc_len
+ *   Completion descriptor length.
+ * @param cmpt_buff
+ *   Pointer to the data buffer to which the data will be extracted.
+ *
+ * @return
+ *   0 on success and -1 on failure.
+ */
+int qdma_ul_process_immediate_data(void *cmpt_entry, uint16_t cmpt_desc_len,
+				char *cmpt_buff)
+{
+	uint16_t i = 0;
+	char *cmpt_buff_ptr;
+	struct qdma_ul_cmpt_ring *cmpt_desc =
+			(struct qdma_ul_cmpt_ring *)(cmpt_entry);
+
+	if (unlikely(cmpt_desc->err || cmpt_desc->data_frmt))
+		return -1;
+
+	cmpt_buff_ptr = (char *)cmpt_buff;
+	*(cmpt_buff_ptr) = (*((uint8_t *)cmpt_desc) & 0xF0);
+	for (i = 1; i < (cmpt_desc_len); i++)
+		*(cmpt_buff_ptr + i) = (*((uint8_t *)cmpt_desc + i));
+
+	return 0;
+}
diff --git a/drivers/net/qdma/qdma_user.h b/drivers/net/qdma/qdma_user.h
new file mode 100644
index 0000000000..536aaa7945
--- /dev/null
+++ b/drivers/net/qdma/qdma_user.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+/**
+ * @file
+ * @brief This file contains example design/user logic controlled
+ * data structures and functions.
+ * The driver is specific to an example design; if the example design
+ * changes user controlled parameters, this file needs to be modified
+ * appropriately.
+ * Structures for Completion entry, Descriptor bypass can be added here.
+ */
+
+#ifndef __QDMA_USER_H__
+#define __QDMA_USER_H__
+
+#include "qdma_rxtx.h"
+ /**
+  * C2H Completion entry structure
+  * This structure is specific for the example design.
+  * Processing of this ring happens in qdma_rxtx.c.
+  */
+union qdma_ul_st_cmpt_ring {
+	volatile uint64_t data;
+	struct {
+		/* For 2018.2 IP, this field determines the
+		 * Standard or User format of completion entry
+		 */
+		volatile uint32_t	data_frmt:1;
+
+		/* This field inverts every time PIDX wraps
+		 * the completion ring
+		 */
+		volatile uint32_t	color:1;
+
+		/* Indicates that C2H engine encountered
+		 * a descriptor error
+		 */
+		volatile uint32_t	err:1;
+
+		/* Indicates that the completion packet
+		 * consumes descriptor in C2H ring
+		 */
+		volatile uint32_t	desc_used:1;
+
+		/* Indicates length of the data packet */
+		volatile uint32_t	length:16;
+
+		/* Reserved field */
+		volatile uint32_t	user_rsv:4;
+
+		/* User logic defined data of
+		 * length based on CMPT entry
+		 * length
+		 */
+		volatile uint8_t	user_def[];
+	};
+};
+
+
+ /**
+  * Completion entry structure
+  * This structure is specific for the example design.
+  * Currently this structure is used for the processing
+  * of the MM completion ring in rte_pmd_qdma.c.
+  */
+struct __rte_packed qdma_ul_cmpt_ring
+{
+	volatile uint32_t	data_frmt:1; /* For 2018.2 IP, this field
+					      * determines the Standard or User
+					      * format of completion entry
+					      */
+	volatile uint32_t	color:1;     /* This field inverts every time
+					      * PIDX wraps the completion ring
+					      */
+	volatile uint32_t	err:1;       /* Indicates that C2H engine
+					      * encountered a descriptor
+					      * error
+					      */
+	volatile uint32_t	rsv:1;   /* Reserved */
+	volatile uint8_t	user_def[];    /* User logic defined data of
+						* length based on CMPT entry
+						* length
+						*/
+};
+
+/** ST C2H Descriptor **/
+struct __rte_packed qdma_ul_st_c2h_desc
+{
+	uint64_t	dst_addr;
+};
+
+#define S_H2C_DESC_F_SOP		1
+#define S_H2C_DESC_F_EOP		2
+
+/* pld_len and flags members are part of custom descriptor format needed
+ * by example design for ST loopback and desc bypass
+ */
+
+/** ST H2C Descriptor **/
+struct __rte_packed qdma_ul_st_h2c_desc
+{
+	volatile uint16_t	cdh_flags;
+	volatile uint16_t	pld_len;
+	volatile uint16_t	len;
+	volatile uint16_t	flags;
+	volatile uint64_t	src_addr;
+};
+
+/** MM Descriptor **/
+struct __rte_packed qdma_ul_mm_desc
+{
+	volatile uint64_t	src_addr;
+	volatile uint64_t	len:28;
+	volatile uint64_t	dv:1;
+	volatile uint64_t	sop:1;
+	volatile uint64_t	eop:1;
+	volatile uint64_t	rsvd:33;
+	volatile uint64_t	dst_addr;
+	volatile uint64_t	rsvd2;
+};
+
+/**
+ * Extract the fields of the given completion entry in the completion ring.
+ *
+ * @param ul_cmpt_entry
+ *   Pointer to completion entry to be extracted.
+ * @param cmpt_info
+ *   Pointer to structure into which the completion entry details are filled.
+ *
+ * @return
+ *   0 on success and negative value on error.
+ */
+int qdma_ul_extract_st_cmpt_info(void *ul_cmpt_entry, void *cmpt_info);
+
+/**
+ * Extract the packet length from the given completion entry.
+ *
+ * @param ul_cmpt_entry
+ *   Pointer to completion entry to be extracted.
+ *
+ * @return
+ *   Packet length
+ */
+uint16_t qdma_ul_get_cmpt_pkt_len(void *ul_cmpt_entry);
+
+/**
+ * Processes the immediate data for the given completion ring entry
+ * and stores the immediate data in a file.
+ *
+ * @param qhndl
+ *   Pointer to RX queue handle.
+ * @param cmpt_entry
+ *   Pointer to completion entry to be processed.
+ * @param cmpt_desc_len
+ *   Completion descriptor length.
+ *
+ * @return
+ *   0 on success, negative value on failure.
+ */
+int qdma_ul_process_immediate_data_st(void *qhndl, void *cmpt_entry,
+			uint16_t cmpt_desc_len);
+
+/**
+ * Updates the ST H2C descriptor
+ *
+ * @param qhndl
+ *   Pointer to TX queue handle.
+ * @param q_offloads
+ *   Offloads supported for the queue.
+ * @param mb
+ *   Pointer to memory buffer.
+ *
+ * @return
+ *   None.
+ */
+int qdma_ul_update_st_h2c_desc(void *qhndl, uint64_t q_offloads,
+				struct rte_mbuf *mb);
+
+/**
+ * Updates the MM c2h descriptor.
+ *
+ * @param qhndl
+ *   Pointer to RX queue handle.
+ * @param mb
+ *   Pointer to memory buffer.
+ * @param desc
+ *   Pointer to descriptor entry.
+ *
+ * @return
+ *   0 on success.
+ */
+int qdma_ul_update_mm_c2h_desc(void *qhndl, struct rte_mbuf *mb, void *desc);
+
+/**
+ * Updates the MM H2C descriptor.
+ *
+ * @param qhndl
+ *   Pointer to TX queue handle.
+ * @param mb
+ *   Pointer to memory buffer.
+ *
+ * @return
+ *   0 on success.
+ */
+int qdma_ul_update_mm_h2c_desc(void *qhndl, struct rte_mbuf *mb);
+
+/**
+ * Processes the completion data from the given completion entry.
+ *
+ * @param cmpt_entry
+ *   Pointer to completion entry to be processed.
+ * @param cmpt_desc_len
+ *   Completion descriptor length.
+ * @param cmpt_buff
+ *   Pointer to the data buffer to which the data will be extracted.
+ *
+ * @return
+ *   0 on success and -1 on failure.
+ */
+int qdma_ul_process_immediate_data(void *cmpt_entry, uint16_t cmpt_desc_len,
+			char *cmpt_buff);
+
+#endif /* ifndef __QDMA_USER_H__ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 13/29] net/qdma: add callback support for Rx queue count
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (11 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 12/29] net/qdma: add routine for Rx queue initialization Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 14/29] net/qdma: add routine for Tx queue initialization Aman Kumar
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements callback support to read the Rx
descriptor status and to fetch the Rx queue count.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
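Notes:
As a quick illustration (not part of the patch), these callbacks are
reached through the public ethdev API roughly as below; the helper
name and the port/queue values are placeholders:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void poll_rx_backlog(uint16_t port_id, uint16_t queue_id)
    {
        /* dispatches to qdma_dev_rx_queue_count() */
        int used = rte_eth_rx_queue_count(port_id, queue_id);

        if (used < 0)
            return; /* -EINVAL or -ENOTSUP */

        /* dispatches to qdma_dev_rx_descriptor_status() */
        if (rte_eth_rx_descriptor_status(port_id, queue_id, 0) ==
                RTE_ETH_RX_DESC_DONE)
            printf("q%d: %d used descriptors, head is done\n",
                   queue_id, used);
    }
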
 drivers/net/qdma/qdma_devops.c |   2 +
 drivers/net/qdma/qdma_devops.h |   6 +-
 drivers/net/qdma/qdma_rxtx.c   | 120 +++++++++++++++++++++++++++++++++
 3 files changed, 124 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 017dcf39ff..fefbbda012 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1399,4 +1399,6 @@ void qdma_dev_ops_init(struct rte_eth_dev *dev)
 	dev->dev_ops = &qdma_eth_dev_ops;
 	dev->rx_pkt_burst = &qdma_recv_pkts;
 	dev->tx_pkt_burst = &qdma_xmit_pkts;
+	dev->rx_queue_count = &qdma_dev_rx_queue_count;
+	dev->rx_descriptor_status = &qdma_dev_rx_descriptor_status;
 }
diff --git a/drivers/net/qdma/qdma_devops.h b/drivers/net/qdma/qdma_devops.h
index c0f903f1cf..0014f4b0c9 100644
--- a/drivers/net/qdma/qdma_devops.h
+++ b/drivers/net/qdma/qdma_devops.h
@@ -294,9 +294,7 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors of a rx queue
  *
- * @param dev
- *   Pointer to Ethernet device structure
- * @param rx_queue_id
+ * @param rxq
  *   The RX queue on the Ethernet device for which information will be
  *   retrieved
  *
@@ -305,7 +303,7 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
  * @ingroup dpdk_devops_func
  */
 uint32_t
-qdma_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+qdma_dev_rx_queue_count(void *rxq);
 
 /**
  * DPDK callback to check the status of a Rx descriptor in the queue
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 15f6661cbf..102671e16f 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -206,3 +206,123 @@ static void adapt_update_counter(struct qdma_rx_queue *rxq,
 	}
 }
 #endif /* QDMA_LATENCY_OPTIMIZED */
+
+static uint32_t rx_queue_count(void *rx_queue)
+{
+	struct qdma_rx_queue *rxq = rx_queue;
+	struct wb_status *wb_status;
+	uint16_t pkt_length;
+	uint16_t nb_pkts_avail = 0;
+	uint16_t rx_cmpt_tail = 0;
+	uint16_t cmpt_pidx;
+	uint32_t nb_desc_used = 0, count = 0;
+	union qdma_ul_st_cmpt_ring *user_cmpt_entry;
+	union qdma_ul_st_cmpt_ring cmpt_data;
+
+	wb_status = rxq->wb_status;
+	rx_cmpt_tail = rxq->cmpt_cidx_info.wrb_cidx;
+	cmpt_pidx = wb_status->pidx;
+
+	if (rx_cmpt_tail < cmpt_pidx)
+		nb_pkts_avail = cmpt_pidx - rx_cmpt_tail;
+	else if (rx_cmpt_tail > cmpt_pidx)
+		nb_pkts_avail = rxq->nb_rx_cmpt_desc - 1 - rx_cmpt_tail +
+				cmpt_pidx;
+
+	if (nb_pkts_avail == 0)
+		return 0;
+
+	while (count < nb_pkts_avail) {
+		user_cmpt_entry =
+		(union qdma_ul_st_cmpt_ring *)((uint64_t)rxq->cmpt_ring +
+		((uint64_t)rx_cmpt_tail * rxq->cmpt_desc_len));
+
+		if (qdma_ul_extract_st_cmpt_info(user_cmpt_entry,
+				&cmpt_data)) {
+			break;
+		}
+
+		pkt_length = qdma_ul_get_cmpt_pkt_len(&cmpt_data);
+		if (unlikely(!pkt_length)) {
+			count++;
+			continue;
+		}
+
+		nb_desc_used += ((pkt_length / rxq->rx_buff_size) + 1);
+		rx_cmpt_tail++;
+		if (unlikely(rx_cmpt_tail >= (rxq->nb_rx_cmpt_desc - 1)))
+			rx_cmpt_tail -= (rxq->nb_rx_cmpt_desc - 1);
+		count++;
+	}
+	PMD_DRV_LOG(DEBUG, "%s: nb_desc_used = %d",
+			__func__, nb_desc_used);
+	return nb_desc_used;
+}
+
+/**
+ * DPDK callback to get the number of used descriptors of an Rx queue.
+ *
+ * @param rxq
+ *   Pointer to the Rx queue data structure for which the number of
+ *   used descriptors will be retrieved. This is the queue pointer
+ *   that the PMD stored in dev->data->rx_queues when the queue
+ *   was set up.
+ *
+ * @return
+ *   The number of used descriptors in the specific queue.
+ */
+uint32_t
+qdma_dev_rx_queue_count(void *rxq)
+{
+	return rx_queue_count(rxq);
+}
+/**
+ * DPDK callback to check the status of a Rx descriptor in the queue.
+ *
+ * @param rx_queue
+ *   Pointer to Rx queue specific data structure.
+ * @param offset
+ *   The offset of the descriptor starting from tail (0 is the next
+ *   packet to be received by the driver).
+ *
+ * @return
+ *  - (RTE_ETH_RX_DESC_AVAIL): Descriptor is available for the hardware to
+ *    receive a packet.
+ *  - (RTE_ETH_RX_DESC_DONE): Descriptor is done, it is filled by hw, but
+ *    not yet processed by the driver (i.e. in the receive queue).
+ *  - (RTE_ETH_RX_DESC_UNAVAIL): Descriptor is unavailable, either held by
+ *    the driver and not yet returned to hw, or reserved by the hw.
+ *  - (-EINVAL) bad descriptor offset.
+ */
+int
+qdma_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct qdma_rx_queue *rxq = rx_queue;
+	uint32_t desc_used_count;
+	uint16_t rx_tail, c2h_pidx, pending_desc;
+
+	if (unlikely(offset >= (rxq->nb_rx_desc - 1)))
+		return -EINVAL;
+
+	/* One descriptor is reserved so that pidx is not the same as tail */
+	if (offset == (rxq->nb_rx_desc - 2))
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc_used_count = rx_queue_count(rxq);
+	if (offset < desc_used_count)
+		return RTE_ETH_RX_DESC_DONE;
+
+	/* If Tail is not same as PIDX, descriptors are held by the driver */
+	rx_tail = rxq->rx_tail;
+	c2h_pidx = rxq->q_pidx_info.pidx;
+
+	pending_desc = rx_tail - c2h_pidx - 1;
+	if (rx_tail < (c2h_pidx + 1))
+		pending_desc = rxq->nb_rx_desc - 2 + rx_tail -
+				c2h_pidx;
+
+	if (offset < (desc_used_count + pending_desc))
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 14/29] net/qdma: add routine for Tx queue initialization
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (12 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 13/29] net/qdma: add callback support for Rx queue count Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 15/29] net/qdma: add queue cleanup PMD ops Aman Kumar
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch defines routines to handle Tx queue
related ops and adds support for the
rte_eth_dev_tx_queue* APIs in this PMD.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
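Notes:
A short usage sketch against the standard ethdev API (illustrative
only; port_id is assumed to be a configured port, and the ring size
of 1024 and queue 0 are example values). As implemented in this
patch, the requested size (plus one status/writeback entry) must
match one of the global ring sizes read from the CSRs, otherwise
setup fails:

    /* deferred start: set the queue up now, start it explicitly */
    struct rte_eth_txconf txconf = { .tx_deferred_start = 1 };
    int ret;

    ret = rte_eth_tx_queue_setup(port_id, 0, 1024,
                rte_eth_dev_socket_id(port_id), &txconf);
    if (ret == 0)
        ret = rte_eth_dev_start(port_id); /* queue 0 stays stopped */
    if (ret == 0)
        /* dispatches to qdma_dev_tx_queue_start() */
        ret = rte_eth_dev_tx_queue_start(port_id, 0);
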
 drivers/net/qdma/qdma.h        |   8 +
 drivers/net/qdma/qdma_common.c |  74 +++++++++
 drivers/net/qdma/qdma_devops.c | 270 +++++++++++++++++++++++++++++++--
 3 files changed, 343 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 5992473b33..8515ebe60e 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -42,6 +42,7 @@
 #define MIN_RX_PIDX_UPDATE_THRESHOLD (1)
 #define MIN_TX_PIDX_UPDATE_THRESHOLD (1)
 #define DEFAULT_MM_CMPT_CNT_THRESHOLD	(2)
+#define QDMA_TXQ_PIDX_UPDATE_INTERVAL	(1000) /* 1000 usec */
 
 #define WB_TIMEOUT		(100000)
 #define RESET_TIMEOUT		(60000)
@@ -198,6 +199,7 @@ struct qdma_tx_queue {
 	uint16_t			tx_desc_pend;
 	uint16_t			nb_tx_desc; /* No of TX descriptors. */
 	rte_spinlock_t			pidx_update_lock;
+	struct qdma_q_pidx_reg_info	q_pidx_info;
 	uint64_t			offloads; /* Tx offloads */
 
 	uint8_t				st_mode:1;/* dma-mode: MM or ST */
@@ -297,17 +299,23 @@ struct qdma_pci_dev {
 };
 
 void qdma_dev_ops_init(struct rte_eth_dev *dev);
+void qdma_txq_pidx_update(void *arg);
 int qdma_pf_csr_read(struct rte_eth_dev *dev);
 
 uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len);
 
 int qdma_init_rx_queue(struct qdma_rx_queue *rxq);
+void qdma_reset_tx_queue(struct qdma_tx_queue *txq);
 void qdma_reset_rx_queue(struct qdma_rx_queue *rxq);
 
 void qdma_clr_rx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
 				uint32_t mode);
 void qdma_inv_rx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
 				uint32_t mode);
+void qdma_clr_tx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
+				uint32_t mode);
+void qdma_inv_tx_queue_ctxts(struct rte_eth_dev *dev, uint32_t qid,
+				uint32_t mode);
 int qdma_identify_bars(struct rte_eth_dev *dev);
 int qdma_get_hw_version(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/qdma/qdma_common.c b/drivers/net/qdma/qdma_common.c
index d39e642008..2650438e47 100644
--- a/drivers/net/qdma/qdma_common.c
+++ b/drivers/net/qdma/qdma_common.c
@@ -160,6 +160,80 @@ int qdma_init_rx_queue(struct qdma_rx_queue *rxq)
 	return -ENOMEM;
 }
 
+/*
+ * Tx queue reset
+ */
+void qdma_reset_tx_queue(struct qdma_tx_queue *txq)
+{
+	uint32_t i;
+	uint32_t sz;
+
+	txq->tx_fl_tail = 0;
+	if (txq->st_mode) {  /* ST-mode */
+		sz = sizeof(struct qdma_ul_st_h2c_desc);
+		/* Zero out HW ring memory */
+		for (i = 0; i < (sz * (txq->nb_tx_desc)); i++)
+			((volatile char *)txq->tx_ring)[i] = 0;
+	} else {
+		sz = sizeof(struct qdma_ul_mm_desc);
+		/* Zero out HW ring memory */
+		for (i = 0; i < (sz * (txq->nb_tx_desc)); i++)
+			((volatile char *)txq->tx_ring)[i] = 0;
+	}
+
+	/* Initialize SW ring entries */
+	for (i = 0; i < txq->nb_tx_desc; i++)
+		txq->sw_ring[i] = NULL;
+}
+
+void qdma_inv_tx_queue_ctxts(struct rte_eth_dev *dev,
+			     uint32_t qid, uint32_t mode)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_descq_hw_ctxt q_hw_ctxt;
+	struct qdma_descq_credit_ctxt q_credit_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	hw_access->qdma_sw_ctx_conf(dev, 0, qid, &q_sw_ctxt,
+			QDMA_HW_ACCESS_INVALIDATE);
+	hw_access->qdma_hw_ctx_conf(dev, 0, qid, &q_hw_ctxt,
+			QDMA_HW_ACCESS_INVALIDATE);
+
+	if (mode) {  /* ST-mode */
+		hw_access->qdma_credit_ctx_conf(dev, 0, qid,
+			&q_credit_ctxt, QDMA_HW_ACCESS_INVALIDATE);
+	}
+}
+
+/**
+ * Clear Tx queue contexts
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   Nothing.
+ */
+void qdma_clr_tx_queue_ctxts(struct rte_eth_dev *dev,
+			     uint32_t qid, uint32_t mode)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_descq_credit_ctxt q_credit_ctxt;
+	struct qdma_descq_hw_ctxt q_hw_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	hw_access->qdma_sw_ctx_conf(dev, 0, qid, &q_sw_ctxt,
+			QDMA_HW_ACCESS_CLEAR);
+	hw_access->qdma_hw_ctx_conf(dev, 0, qid, &q_hw_ctxt,
+			QDMA_HW_ACCESS_CLEAR);
+	if (mode) {  /* ST-mode */
+		hw_access->qdma_credit_ctx_conf(dev, 0, qid,
+			&q_credit_ctxt, QDMA_HW_ACCESS_CLEAR);
+	}
+}
+
 /* Utility function to find index of an element in an array */
 int index_of_array(uint32_t *arr, uint32_t n, uint32_t element)
 {
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index fefbbda012..e411c0f1be 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -573,13 +573,196 @@ int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			    uint16_t nb_tx_desc, unsigned int socket_id,
 			    const struct rte_eth_txconf *tx_conf)
 {
-	(void)dev;
-	(void)tx_queue_id;
-	(void)nb_tx_desc;
-	(void)socket_id;
-	(void)tx_conf;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq = NULL;
+	struct qdma_ul_mm_desc *tx_ring_mm;
+	struct qdma_ul_st_h2c_desc *tx_ring_st;
+	uint32_t sz;
+	uint8_t  *tx_ring_bypass;
+	int err = 0;
+
+	PMD_DRV_LOG(INFO, "Configuring Tx queue id:%d with %d desc\n",
+		    tx_queue_id, nb_tx_desc);
+
+	if (!qdma_dev->is_vf) {
+		err = qdma_dev_increment_active_queue
+				(qdma_dev->dma_device_index,
+				qdma_dev->func_id,
+				QDMA_DEV_Q_TYPE_H2C);
+		if (err != QDMA_SUCCESS)
+			return -EINVAL;
+	}
+	if (!qdma_dev->init_q_range) {
+		if (!qdma_dev->is_vf) {
+			err = qdma_pf_csr_read(dev);
+			if (err < 0) {
+				PMD_DRV_LOG(ERR, "CSR read failed\n");
+				goto tx_setup_err;
+			}
+		}
+		qdma_dev->init_q_range = 1;
+	}
+	/* allocate tx queue data structure */
+	txq = rte_zmalloc_socket("QDMA_TxQ", sizeof(struct qdma_tx_queue),
+						RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Memory allocation failed for "
+				"Tx queue SW structure\n");
+		err = -ENOMEM;
+		goto tx_setup_err;
+	}
+
+	txq->st_mode = qdma_dev->q_info[tx_queue_id].queue_mode;
+	txq->en_bypass = (qdma_dev->q_info[tx_queue_id].tx_bypass_mode) ? 1 : 0;
+	txq->bypass_desc_sz = qdma_dev->q_info[tx_queue_id].tx_bypass_desc_sz;
+
+	txq->nb_tx_desc = (nb_tx_desc + 1);
+	txq->queue_id = tx_queue_id;
+	txq->dev = dev;
+	txq->port_id = dev->data->port_id;
+	txq->func_id = qdma_dev->func_id;
+	txq->num_queues = dev->data->nb_tx_queues;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->ringszidx = index_of_array(qdma_dev->g_ring_sz,
+					QDMA_NUM_RING_SIZES, txq->nb_tx_desc);
+	if (txq->ringszidx < 0) {
+		PMD_DRV_LOG(ERR, "Expected Ring size %d not found\n",
+				txq->nb_tx_desc);
+		err = -EINVAL;
+		goto tx_setup_err;
+	}
+
+	if (qdma_dev->ip_type == EQDMA_SOFT_IP &&
+			qdma_dev->vivado_rel >= QDMA_VIVADO_2020_2) {
+		if (qdma_dev->dev_cap.desc_eng_mode ==
+				QDMA_DESC_ENG_BYPASS_ONLY) {
+			PMD_DRV_LOG(ERR,
+				"Bypass only mode design "
+				"is not supported\n");
+			return -ENOTSUP;
+		}
+
+		if (txq->en_bypass &&
+				qdma_dev->dev_cap.desc_eng_mode ==
+				QDMA_DESC_ENG_INTERNAL_ONLY) {
+			PMD_DRV_LOG(ERR,
+				"Tx qid %d config in bypass "
+				"mode not supported on "
+				"internal only mode design\n",
+				tx_queue_id);
+			return -ENOTSUP;
+		}
+	}
+
+	/* Allocate memory for TX descriptor ring */
+	if (txq->st_mode) {
+		if (!qdma_dev->dev_cap.st_en) {
+			PMD_DRV_LOG(ERR, "Streaming mode not enabled "
+					"in the hardware\n");
+			err = -EINVAL;
+			goto tx_setup_err;
+		}
+
+		if (txq->en_bypass &&
+			txq->bypass_desc_sz != 0)
+			sz = (txq->nb_tx_desc) * (txq->bypass_desc_sz);
+		else
+			sz = (txq->nb_tx_desc) *
+					sizeof(struct qdma_ul_st_h2c_desc);
+		txq->tx_mz = qdma_zone_reserve(dev, "TxHwRn", tx_queue_id, sz,
+						socket_id);
+		if (!txq->tx_mz) {
+			PMD_DRV_LOG(ERR, "Couldn't reserve memory for "
+					"ST H2C ring of size %d\n", sz);
+			err = -ENOMEM;
+			goto tx_setup_err;
+		}
+
+		txq->tx_ring = txq->tx_mz->addr;
+		tx_ring_st = (struct qdma_ul_st_h2c_desc *)txq->tx_ring;
+
+		tx_ring_bypass = (uint8_t *)txq->tx_ring;
+		/* Write-back status structure */
+		if (txq->en_bypass &&
+			txq->bypass_desc_sz != 0)
+			txq->wb_status = (struct wb_status *)&
+					tx_ring_bypass[(txq->nb_tx_desc - 1) *
+					(txq->bypass_desc_sz)];
+		else
+			txq->wb_status = (struct wb_status *)&
+					tx_ring_st[txq->nb_tx_desc - 1];
+	} else {
+		if (!qdma_dev->dev_cap.mm_en) {
+			PMD_DRV_LOG(ERR, "Memory mapped mode not "
+					"enabled in the hardware\n");
+			err = -EINVAL;
+			goto tx_setup_err;
+		}
+
+		if (txq->en_bypass &&
+			txq->bypass_desc_sz != 0)
+			sz = (txq->nb_tx_desc) * (txq->bypass_desc_sz);
+		else
+			sz = (txq->nb_tx_desc) * sizeof(struct qdma_ul_mm_desc);
+		txq->tx_mz = qdma_zone_reserve(dev, "TxHwRn", tx_queue_id,
+						sz, socket_id);
+		if (!txq->tx_mz) {
+			PMD_DRV_LOG(ERR, "Couldn't reserve memory for "
+					"MM H2C ring of size %d\n", sz);
+			err = -ENOMEM;
+			goto tx_setup_err;
+		}
+
+		txq->tx_ring = txq->tx_mz->addr;
+		tx_ring_mm = (struct qdma_ul_mm_desc *)txq->tx_ring;
+
+		/* Write-back status structure */
+
+		tx_ring_bypass = (uint8_t *)txq->tx_ring;
+		if (txq->en_bypass &&
+			txq->bypass_desc_sz != 0)
+			txq->wb_status = (struct wb_status *)&
+				tx_ring_bypass[(txq->nb_tx_desc - 1) *
+				(txq->bypass_desc_sz)];
+		else
+			txq->wb_status = (struct wb_status *)&
+				tx_ring_mm[txq->nb_tx_desc - 1];
+	}
+
+	PMD_DRV_LOG(INFO, "Tx ring phys addr: 0x%lX, Tx Ring virt addr: 0x%lX",
+	    (uint64_t)txq->tx_mz->iova, (uint64_t)txq->tx_ring);
+
+	/* Allocate memory for TX software ring */
+	sz = txq->nb_tx_desc * sizeof(struct rte_mbuf *);
+	txq->sw_ring = rte_zmalloc_socket("TxSwRn", sz,
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_DRV_LOG(ERR, "Memory allocation failed for "
+				 "Tx queue SW ring\n");
+		err = -ENOMEM;
+		goto tx_setup_err;
+	}
+
+	rte_spinlock_init(&txq->pidx_update_lock);
+	dev->data->tx_queues[tx_queue_id] = txq;
 
 	return 0;
+
+tx_setup_err:
+	PMD_DRV_LOG(ERR, "Tx queue setup failed");
+	if (!qdma_dev->is_vf)
+		qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+						qdma_dev->func_id,
+						QDMA_DEV_Q_TYPE_H2C);
+	if (txq) {
+		if (txq->tx_mz)
+			rte_memzone_free(txq->tx_mz);
+		if (txq->sw_ring)
+			rte_free(txq->sw_ring);
+		rte_free(txq);
+	}
+	return err;
 }
 
 void qdma_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
@@ -983,9 +1166,54 @@ int qdma_dev_configure(struct rte_eth_dev *dev)
 
 int qdma_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
 {
-	(void)dev;
-	(void)qid;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq;
+	uint32_t queue_base =  qdma_dev->queue_base;
+	int err, bypass_desc_sz_idx;
+	struct qdma_descq_sw_ctxt q_sw_ctxt;
+	struct qdma_hw_access *hw_access = qdma_dev->hw_access;
+
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
 
+	memset(&q_sw_ctxt, 0, sizeof(struct qdma_descq_sw_ctxt));
+
+	bypass_desc_sz_idx = qmda_get_desc_sz_idx(txq->bypass_desc_sz);
+
+	qdma_reset_tx_queue(txq);
+	qdma_clr_tx_queue_ctxts(dev, (qid + queue_base), txq->st_mode);
+
+	if (txq->st_mode) {
+		q_sw_ctxt.desc_sz = SW_DESC_CNTXT_H2C_STREAM_DMA;
+	} else {
+		q_sw_ctxt.desc_sz = SW_DESC_CNTXT_MEMORY_MAP_DMA;
+		q_sw_ctxt.is_mm = 1;
+	}
+	q_sw_ctxt.wbi_chk = 1;
+	q_sw_ctxt.wbi_intvl_en = 1;
+	q_sw_ctxt.fnc_id = txq->func_id;
+	q_sw_ctxt.qen = 1;
+	q_sw_ctxt.rngsz_idx = txq->ringszidx;
+	q_sw_ctxt.bypass = txq->en_bypass;
+	q_sw_ctxt.wbk_en = 1;
+	q_sw_ctxt.ring_bs_addr = (uint64_t)txq->tx_mz->iova;
+
+	if (txq->en_bypass &&
+		txq->bypass_desc_sz != 0)
+		q_sw_ctxt.desc_sz = bypass_desc_sz_idx;
+
+	/* Set SW Context */
+	err = hw_access->qdma_sw_ctx_conf(dev, 0,
+			(qid + queue_base), &q_sw_ctxt,
+			QDMA_HW_ACCESS_WRITE);
+	if (err < 0)
+		return qdma_dev->hw_access->qdma_get_error_code(err);
+
+	txq->q_pidx_info.pidx = 0;
+	hw_access->qdma_queue_pidx_update(dev, qdma_dev->is_vf,
+		qid, 0, &txq->q_pidx_info);
+
+	dev->data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+	txq->status = RTE_ETH_QUEUE_STATE_STARTED;
 	return 0;
 }
 
@@ -1185,8 +1413,32 @@ int qdma_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
 
 int qdma_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
 {
-	(void)dev;
-	(void)qid;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t queue_base =  qdma_dev->queue_base;
+	struct qdma_tx_queue *txq;
+	int cnt = 0;
+	uint16_t count;
+
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+
+	txq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+	/* Wait for TXQ to send out all packets. */
+	while (txq->wb_status->cidx != txq->q_pidx_info.pidx) {
+		usleep(10);
+		if (cnt++ > 10000)
+			break;
+	}
+
+	qdma_inv_tx_queue_ctxts(dev, (qid + queue_base), txq->st_mode);
+
+	/* Relinquish pending mbufs */
+	for (count = 0; count < txq->nb_tx_desc - 1; count++) {
+		rte_pktmbuf_free(txq->sw_ring[count]);
+		txq->sw_ring[count] = NULL;
+	}
+	qdma_reset_tx_queue(txq);
+
+	dev->data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
 
 	return 0;
 }
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 15/29] net/qdma: add queue cleanup PMD ops
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (13 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 14/29] net/qdma: add routine for Tx queue initialization Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 16/29] net/qdma: add start and stop apis Aman Kumar
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch defines the Rx and Tx queue cleanup
(queue release) routines.
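
For context, a minimal application-side teardown sketch (not part of
this patch; port_id is assumed to be an already configured qdma port)
that reaches these release ops:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* rte_eth_dev_close() invokes the qdma_dev_rx_queue_release()
     * and qdma_dev_tx_queue_release() ops below for each configured
     * queue, freeing the sw_ring/memzones and decrementing the
     * active queue counts.
     */
    static void port_teardown(uint16_t port_id)
    {
            int ret = rte_eth_dev_stop(port_id);

            if (ret != 0)
                    printf("port %u stop failed: %d\n", port_id, ret);
            if (rte_eth_dev_close(port_id) != 0)
                    printf("port %u close failed\n", port_id);
    }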

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_devops.c | 83 ++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index e411c0f1be..5329bd3cd4 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -765,16 +765,91 @@ int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 	return err;
 }
 
+#if (MIN_TX_PIDX_UPDATE_THRESHOLD > 1)
+void qdma_txq_pidx_update(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq;
+	uint32_t qid;
+
+	for (qid = 0; qid < dev->data->nb_tx_queues; qid++) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+		if (txq->tx_desc_pend) {
+			rte_spinlock_lock(&txq->pidx_update_lock);
+			if (txq->tx_desc_pend) {
+				qdma_dev->hw_access->qdma_queue_pidx_update(dev,
+					qdma_dev->is_vf,
+					qid, 0, &txq->q_pidx_info);
+
+				txq->tx_desc_pend = 0;
+			}
+			rte_spinlock_unlock(&txq->pidx_update_lock);
+		}
+	}
+	rte_eal_alarm_set(QDMA_TXQ_PIDX_UPDATE_INTERVAL,
+			qdma_txq_pidx_update, (void *)arg);
+}
+#endif
+
 void qdma_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
 {
-	(void)dev;
-	(void)q_id;
+	struct qdma_tx_queue *txq;
+	struct qdma_pci_dev *qdma_dev;
+
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[q_id];
+	if (txq != NULL) {
+		PMD_DRV_LOG(INFO, "Remove H2C queue: %d", txq->queue_id);
+		qdma_dev = txq->dev->data->dev_private;
+
+		if (!qdma_dev->is_vf)
+			qdma_dev_decrement_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_H2C);
+		if (txq->sw_ring)
+			rte_free(txq->sw_ring);
+		if (txq->tx_mz)
+			rte_memzone_free(txq->tx_mz);
+		rte_free(txq);
+		PMD_DRV_LOG(INFO, "H2C queue %d removed", txq->queue_id);
+	}
 }
 
 void qdma_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
 {
-	(void)dev;
-	(void)q_id;
+	struct qdma_rx_queue *rxq;
+	struct qdma_pci_dev *qdma_dev = NULL;
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[q_id];
+	if (rxq != NULL) {
+		PMD_DRV_LOG(INFO, "Remove C2H queue: %d", rxq->queue_id);
+		qdma_dev = rxq->dev->data->dev_private;
+
+		if (!qdma_dev->is_vf) {
+			qdma_dev_decrement_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_C2H);
+
+			if (rxq->st_mode)
+				qdma_dev_decrement_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_CMPT);
+		}
+
+		if (rxq->sw_ring)
+			rte_free(rxq->sw_ring);
+		if (rxq->st_mode) { /* if ST-mode */
+			if (rxq->rx_cmpt_mz)
+				rte_memzone_free(rxq->rx_cmpt_mz);
+		}
+		if (rxq->rx_mz)
+			rte_memzone_free(rxq->rx_mz);
+		rte_free(rxq);
+		PMD_DRV_LOG(INFO, "C2H queue %d removed", rxq->queue_id);
+	}
 }
 
 /**
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 16/29] net/qdma: add start and stop apis
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (14 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 15/29] net/qdma: add queue cleanup PMD ops Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 17/29] net/qdma: add Tx burst API Aman Kumar
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements the dev_start and
dev_stop APIs.
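
As a usage sketch (illustrative only; the queue index, ring size and
zeroed txconf defaults are assumptions), deferred queues interact with
these APIs as follows:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* A Tx queue marked tx_deferred_start is skipped by
     * qdma_dev_start() and must be started explicitly.
     */
    struct rte_eth_txconf txconf = { .tx_deferred_start = 1 };

    rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(), &txconf);
    rte_eth_dev_start(port_id);             /* non-deferred queues only */
    rte_eth_dev_tx_queue_start(port_id, 0); /* -> qdma_dev_tx_queue_start() */
    /* ... datapath ... */
    rte_eth_dev_stop(port_id);              /* stops all H2C/C2H queues */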

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_devops.c | 53 ++++++++++++++++++++++++++++++++--
 1 file changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 5329bd3cd4..28de783207 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -865,8 +865,40 @@ void qdma_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
  */
 int qdma_dev_start(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	struct qdma_tx_queue *txq;
+	struct qdma_rx_queue *rxq;
+	uint32_t qid;
+	int err;
+
+	PMD_DRV_LOG(INFO, "qdma-dev-start: Starting\n");
+
+	/* prepare descriptor rings for operation */
+	for (qid = 0; qid < dev->data->nb_tx_queues; qid++) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+
+		/* Deferred Queues should not start with dev_start */
+		if (!txq->tx_deferred_start) {
+			err = qdma_dev_tx_queue_start(dev, qid);
+			if (err != 0)
+				return err;
+		}
+	}
 
+	for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+		/* Deferred Queues should not start with dev_start */
+		if (!rxq->rx_deferred_start) {
+			err = qdma_dev_rx_queue_start(dev, qid);
+			if (err != 0)
+				return err;
+		}
+	}
+
+#if (MIN_TX_PIDX_UPDATE_THRESHOLD > 1)
+	rte_eal_alarm_set(QDMA_TXQ_PIDX_UPDATE_INTERVAL,
+			qdma_txq_pidx_update, (void *)dev);
+#endif
 	return 0;
 }
 
@@ -922,7 +954,24 @@ int qdma_dev_infos_get(struct rte_eth_dev *dev,
  */
 int qdma_dev_stop(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	uint32_t qid;
+#ifdef RTE_LIBRTE_QDMA_DEBUG_DRIVER
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	PMD_DRV_LOG(INFO, "PF-%d(DEVFN) Stop H2C & C2H queues",
+			qdma_dev->func_id);
+#endif
+
+	/* reset driver's internal queue structures to default values */
+	for (qid = 0; qid < dev->data->nb_tx_queues; qid++)
+		qdma_dev_tx_queue_stop(dev, qid);
+	for (qid = 0; qid < dev->data->nb_rx_queues; qid++)
+		qdma_dev_rx_queue_stop(dev, qid);
+
+#if (MIN_TX_PIDX_UPDATE_THRESHOLD > 1)
+	/* Cancel pending PIDX updates */
+	rte_eal_alarm_cancel(qdma_txq_pidx_update, (void *)dev);
+#endif
 
 	return 0;
 }
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 17/29] net/qdma: add Tx burst API
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (15 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 16/29] net/qdma: add start and stop apis Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 18/29] net/qdma: add Tx queue reclaim routine Aman Kumar
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add the Tx data path burst API for the device.
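
A short application-side sketch of the burst path this patch wires up
(BURST_SZ and the pkts array are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SZ 32

    /* qdma_xmit_pkts() may accept fewer packets than requested
     * (e.g. when the H2C ring is nearly full), so unsent mbufs
     * must be freed or retried by the caller.
     */
    uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, BURST_SZ);
    uint16_t i;

    for (i = nb_tx; i < BURST_SZ; i++)
            rte_pktmbuf_free(pkts[i]);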

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build   |   2 +
 drivers/net/qdma/qdma_devops.c |  10 -
 drivers/net/qdma/qdma_rxtx.c   | 422 +++++++++++++++++++++++++++++++++
 drivers/net/qdma/qdma_rxtx.h   |  10 +
 drivers/net/qdma/qdma_user.c   |  75 ++++++
 5 files changed, 509 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index e2da7f25ec..8c86412b83 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -19,6 +19,8 @@ includes += include_directories('qdma_access/qdma_s80_hard_access')
 
 headers += files('rte_pmd_qdma.h')
 
+deps += ['mempool_ring']
+
 sources = files(
         'qdma_common.c',
         'qdma_devops.c',
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 28de783207..10d7d67b87 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1760,16 +1760,6 @@ uint16_t qdma_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return 0;
 }
 
-uint16_t qdma_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts)
-{
-	(void)tx_queue;
-	(void)tx_pkts;
-	(void)nb_pkts;
-
-	return 0;
-}
-
 void qdma_dev_ops_init(struct rte_eth_dev *dev)
 {
 	dev->dev_ops = &qdma_eth_dev_ops;
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 102671e16f..1605c9973c 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -47,7 +47,166 @@ static void qdma_ul_get_cmpt_pkt_len_v(void *ul_cmpt_entry, __m128i *data)
 }
 #endif /* QDMA_RX_VEC_X86_64 */
 
+#ifdef QDMA_TX_VEC_X86_64
+/* Vector implementation to update H2C descriptor */
+static int qdma_ul_update_st_h2c_desc_v(void *qhndl, uint64_t q_offloads,
+				struct rte_mbuf *mb)
+{
+	(void)q_offloads;
+	int nsegs = mb->nb_segs;
+	uint16_t flags = S_H2C_DESC_F_SOP | S_H2C_DESC_F_EOP;
+	uint16_t id;
+	struct qdma_ul_st_h2c_desc *tx_ring_st;
+	struct qdma_tx_queue *txq = (struct qdma_tx_queue *)qhndl;
+
+	tx_ring_st = (struct qdma_ul_st_h2c_desc *)txq->tx_ring;
+	id = txq->q_pidx_info.pidx;
+
+	if (nsegs == 1) {
+		__m128i descriptor;
+		uint16_t datalen = mb->data_len;
+
+		descriptor = _mm_set_epi64x(mb->buf_iova + mb->data_off,
+				(uint64_t)datalen << 16 |
+				(uint64_t)datalen << 32 |
+				(uint64_t)flags << 48);
+		_mm_store_si128((__m128i *)&tx_ring_st[id], descriptor);
+
+		id++;
+		if (unlikely(id >= (txq->nb_tx_desc - 1)))
+			id -= (txq->nb_tx_desc - 1);
+	} else {
+		int pkt_segs = nsegs;
+		while (nsegs && mb) {
+			__m128i descriptor;
+			uint16_t datalen = mb->data_len;
+
+			flags = 0;
+			if (nsegs == pkt_segs)
+				flags |= S_H2C_DESC_F_SOP;
+			if (nsegs == 1)
+				flags |= S_H2C_DESC_F_EOP;
+
+			descriptor = _mm_set_epi64x(mb->buf_iova + mb->data_off,
+					(uint64_t)datalen << 16 |
+					(uint64_t)datalen << 32 |
+					(uint64_t)flags << 48);
+			_mm_store_si128((__m128i *)&tx_ring_st[id], descriptor);
+
+			nsegs--;
+			mb = mb->next;
+			id++;
+			if (unlikely(id >= (txq->nb_tx_desc - 1)))
+				id -= (txq->nb_tx_desc - 1);
+		}
+	}
+
+	txq->q_pidx_info.pidx = id;
+
+	return 0;
+}
+#endif /* QDMA_TX_VEC_X86_64 */
+
 /******** User logic dependent functions end **********/
+static int reclaim_tx_mbuf(struct qdma_tx_queue *txq,
+			uint16_t cidx, uint16_t free_cnt)
+{
+	int fl_desc = 0;
+	uint16_t count;
+	int id;
+
+	id = txq->tx_fl_tail;
+	fl_desc = (int)cidx - id;
+	if (fl_desc < 0)
+		fl_desc += (txq->nb_tx_desc - 1);
+
+	if (free_cnt && fl_desc > free_cnt)
+		fl_desc = free_cnt;
+
+	if ((id + fl_desc) < (txq->nb_tx_desc - 1)) {
+		for (count = 0; count < ((uint16_t)fl_desc & 0xFFFF);
+				count++) {
+			rte_pktmbuf_free(txq->sw_ring[id]);
+			txq->sw_ring[id++] = NULL;
+		}
+	} else {
+		fl_desc -= (txq->nb_tx_desc - 1 - id);
+		for (; id < (txq->nb_tx_desc - 1); id++) {
+			rte_pktmbuf_free(txq->sw_ring[id]);
+			txq->sw_ring[id] = NULL;
+		}
+
+		id -= (txq->nb_tx_desc - 1);
+		for (count = 0; count < ((uint16_t)fl_desc & 0xFFFF);
+				count++) {
+			rte_pktmbuf_free(txq->sw_ring[id]);
+			txq->sw_ring[id++] = NULL;
+		}
+	}
+	txq->tx_fl_tail = id;
+
+	return fl_desc;
+}
+
+#ifdef TEST_64B_DESC_BYPASS
+static uint16_t qdma_xmit_64B_desc_bypass(struct qdma_tx_queue *txq,
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	uint16_t count, id;
+	uint8_t *tx_ring_st_bypass = NULL;
+	int ofd = -1, ret = 0;
+	char fln[50];
+	struct qdma_pci_dev *qdma_dev = txq->dev->data->dev_private;
+
+	id = txq->q_pidx_info.pidx;
+
+	for (count = 0; count < nb_pkts; count++) {
+		tx_ring_st_bypass = (uint8_t *)txq->tx_ring;
+		memset(&tx_ring_st_bypass[id * (txq->bypass_desc_sz)],
+				((id  % 255) + 1), txq->bypass_desc_sz);
+
+		snprintf(fln, sizeof(fln), "q_%u_%s", txq->queue_id,
+				"h2c_desc_data.txt");
+		ofd = open(fln, O_RDWR | O_CREAT | O_APPEND | O_SYNC,
+				0666);
+		if (ofd < 0) {
+			PMD_DRV_LOG(INFO, " txq[%d] unable to create "
+					"outfile to dump descriptor"
+					" data", txq->queue_id);
+			return 0;
+		}
+		ret = write(ofd, &(tx_ring_st_bypass[id *
+					(txq->bypass_desc_sz)]),
+					txq->bypass_desc_sz);
+		if (ret < txq->bypass_desc_sz)
+			PMD_DRV_LOG(DEBUG, "Txq[%d] descriptor data "
+					"len: %d, written to inputfile"
+					" :%d bytes", txq->queue_id,
+					txq->bypass_desc_sz, ret);
+		close(ofd);
+
+		rte_pktmbuf_free(tx_pkts[count]);
+
+		id++;
+		if (unlikely(id >= (txq->nb_tx_desc - 1)))
+			id -= (txq->nb_tx_desc - 1);
+	}
+
+	/* Make sure writes to the H2C descriptors are synchronized
+	 * before updating PIDX
+	 */
+	rte_wmb();
+
+	txq->q_pidx_info.pidx = id;
+	qdma_dev->hw_access->qdma_queue_pidx_update(txq->dev, qdma_dev->is_vf,
+		txq->queue_id, 0, &txq->q_pidx_info);
+
+	PMD_DRV_LOG(DEBUG, " xmit completed with count:%d\n", count);
+
+	return count;
+}
+#endif
+
 uint16_t qdma_get_rx_queue_id(void *queue_hndl)
 {
 	struct qdma_rx_queue *rxq = (struct qdma_rx_queue *)queue_hndl;
@@ -80,6 +239,50 @@ uint32_t get_mm_buff_size(void *queue_hndl)
 	return rxq->rx_buff_size;
 }
 
+struct qdma_ul_st_h2c_desc *get_st_h2c_desc(void *queue_hndl)
+{
+	volatile uint16_t id;
+	struct qdma_ul_st_h2c_desc *tx_ring_st;
+	struct qdma_ul_st_h2c_desc *desc;
+	struct qdma_tx_queue *txq = (struct qdma_tx_queue *)queue_hndl;
+
+	id = txq->q_pidx_info.pidx;
+	tx_ring_st = (struct qdma_ul_st_h2c_desc *)txq->tx_ring;
+	desc = (struct qdma_ul_st_h2c_desc *)&tx_ring_st[id];
+
+	id++;
+	if (unlikely(id >= (txq->nb_tx_desc - 1)))
+		id -= (txq->nb_tx_desc - 1);
+
+	txq->q_pidx_info.pidx = id;
+
+	return desc;
+}
+
+struct qdma_ul_mm_desc *get_mm_h2c_desc(void *queue_hndl)
+{
+	struct qdma_ul_mm_desc *desc;
+	struct qdma_tx_queue *txq = (struct qdma_tx_queue *)queue_hndl;
+	struct qdma_ul_mm_desc *tx_ring =
+					(struct qdma_ul_mm_desc *)txq->tx_ring;
+	uint32_t id;
+
+	id = txq->q_pidx_info.pidx;
+	desc =  (struct qdma_ul_mm_desc *)&tx_ring[id];
+
+	id = (id + 1) % (txq->nb_tx_desc - 1);
+	txq->q_pidx_info.pidx = id;
+
+	return desc;
+}
+
+uint64_t get_mm_h2c_ep_addr(void *queue_hndl)
+{
+	struct qdma_tx_queue *txq = (struct qdma_tx_queue *)queue_hndl;
+
+	return txq->ep_addr;
+}
+
 #ifdef QDMA_LATENCY_OPTIMIZED
 static void adjust_c2h_cntr_avgs(struct qdma_rx_queue *rxq)
 {
@@ -276,6 +479,7 @@ qdma_dev_rx_queue_count(void *rxq)
 {
 	return rx_queue_count(rxq);
 }
+
 /**
  * DPDK callback to check the status of a Rx descriptor in the queue.
  *
@@ -326,3 +530,221 @@ qdma_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 
 	return RTE_ETH_RX_DESC_AVAIL;
 }
+
+/* Transmit API for Streaming mode */
+uint16_t qdma_xmit_pkts_st(struct qdma_tx_queue *txq, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_mbuf *mb;
+	uint64_t pkt_len = 0;
+	int avail, in_use, ret, nsegs;
+	uint16_t cidx = 0;
+	uint16_t count = 0, id;
+	struct qdma_pci_dev *qdma_dev = txq->dev->data->dev_private;
+#ifdef TEST_64B_DESC_BYPASS
+	int bypass_desc_sz_idx = qmda_get_desc_sz_idx(txq->bypass_desc_sz);
+
+	if (unlikely(txq->en_bypass &&
+			bypass_desc_sz_idx == SW_DESC_CNTXT_64B_BYPASS_DMA)) {
+		return qdma_xmit_64B_desc_bypass(txq, tx_pkts, nb_pkts);
+	}
+#endif
+
+	id = txq->q_pidx_info.pidx;
+	cidx = txq->wb_status->cidx;
+	PMD_DRV_LOG(DEBUG, "Xmit start on tx queue-id:%d, tail index:%d\n",
+			txq->queue_id, id);
+
+	/* Free transmitted mbufs back to pool */
+	reclaim_tx_mbuf(txq, cidx, 0);
+
+	in_use = (int)id - cidx;
+	if (in_use < 0)
+		in_use += (txq->nb_tx_desc - 1);
+
+	/* Make 1 less available, otherwise if we allow all descriptors
+	 * to be filled, when nb_pkts = nb_tx_desc - 1, pidx will be same
+	 * as old pidx and HW will treat this as no new descriptors were added.
+	 * Hence, DMA won't happen with new descriptors.
+	 */
+	avail = txq->nb_tx_desc - 2 - in_use;
+	if (!avail) {
+		PMD_DRV_LOG(DEBUG, "Tx queue full, in_use = %d", in_use);
+		return 0;
+	}
+
+	for (count = 0; count < nb_pkts; count++) {
+		mb = tx_pkts[count];
+		nsegs = mb->nb_segs;
+		if (nsegs > avail) {
+			/* Number of segments in current mbuf are greater
+			/* The number of segments in the current mbuf is
+			 * greater than the number of descriptors available;
+			 * hence update PIDX and return
+			break;
+		}
+		avail -= nsegs;
+		id = txq->q_pidx_info.pidx;
+		txq->sw_ring[id] = mb;
+		pkt_len += rte_pktmbuf_pkt_len(mb);
+
+#ifdef QDMA_TX_VEC_X86_64
+		ret = qdma_ul_update_st_h2c_desc_v(txq, txq->offloads, mb);
+#else
+		ret = qdma_ul_update_st_h2c_desc(txq, txq->offloads, mb);
+#endif /* QDMA_TX_VEC_X86_64 */
+		if (ret < 0)
+			break;
+	}
+
+	txq->stats.pkts += count;
+	txq->stats.bytes += pkt_len;
+
+	/* Make sure writes to the H2C descriptors are synchronized
+	 * before updating PIDX
+	 */
+	rte_wmb();
+
+#if (MIN_TX_PIDX_UPDATE_THRESHOLD > 1)
+	rte_spinlock_lock(&txq->pidx_update_lock);
+#endif
+	txq->tx_desc_pend += count;
+
+	/* Send the PIDX update only if the pending descriptor count is
+	 * above the threshold; this avoids frequent hardware transactions
+	 */
+	if (txq->tx_desc_pend >= MIN_TX_PIDX_UPDATE_THRESHOLD) {
+		qdma_dev->hw_access->qdma_queue_pidx_update(txq->dev,
+			qdma_dev->is_vf,
+			txq->queue_id, 0, &txq->q_pidx_info);
+
+		txq->tx_desc_pend = 0;
+	}
+#if (MIN_TX_PIDX_UPDATE_THRESHOLD > 1)
+	rte_spinlock_unlock(&txq->pidx_update_lock);
+#endif
+	PMD_DRV_LOG(DEBUG, " xmit completed with count:%d\n", count);
+
+	return count;
+}
+
+/* Transmit API for Memory mapped mode */
+uint16_t qdma_xmit_pkts_mm(struct qdma_tx_queue *txq, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_mbuf *mb;
+	uint32_t count, id;
+	uint64_t len = 0;
+	int avail, in_use;
+	struct qdma_pci_dev *qdma_dev = txq->dev->data->dev_private;
+	uint16_t cidx = 0;
+	int nsegs = 0;
+
+#ifdef TEST_64B_DESC_BYPASS
+	int bypass_desc_sz_idx = qmda_get_desc_sz_idx(txq->bypass_desc_sz);
+#endif
+
+	id = txq->q_pidx_info.pidx;
+	PMD_DRV_LOG(DEBUG, "Xmit start on tx queue-id:%d, tail index:%d\n",
+			txq->queue_id, id);
+
+#ifdef TEST_64B_DESC_BYPASS
+	if (unlikely(txq->en_bypass &&
+			bypass_desc_sz_idx == SW_DESC_CNTXT_64B_BYPASS_DMA)) {
+		PMD_DRV_LOG(DEBUG, "For MM mode, example design doesn't "
+				"support 64B bypass testing\n");
+		return 0;
+	}
+#endif
+	cidx = txq->wb_status->cidx;
+	/* Free transmitted mbufs back to pool */
+	reclaim_tx_mbuf(txq, cidx, 0);
+	in_use = (int)id - cidx;
+	if (in_use < 0)
+		in_use += (txq->nb_tx_desc - 1);
+
+	/* Make 1 less available, otherwise if we allow all descriptors to be
+	 * filled, when nb_pkts = nb_tx_desc - 1, pidx will be same as old pidx
+	 * and HW will treat this as no new descriptors were added.
+	 * Hence, DMA won't happen with new descriptors.
+	 */
+	avail = txq->nb_tx_desc - 2 - in_use;
+	if (!avail) {
+		PMD_DRV_LOG(ERR, "Tx queue full, in_use = %d", in_use);
+		return 0;
+	}
+
+	if (nb_pkts > avail)
+		nb_pkts = avail;
+
+	/* Set the xmit descriptors and control bits */
+	for (count = 0; count < nb_pkts; count++) {
+		mb = tx_pkts[count];
+		txq->sw_ring[id] = mb;
+		nsegs = mb->nb_segs;
+		if (nsegs > avail) {
+			/* The number of segments in the current mbuf is
+			 * greater than the number of descriptors available;
+			 * hence update PIDX and return
+			 */
+			break;
+		}
+
+		txq->ep_addr = mb->dynfield1[1];
+		txq->ep_addr = (txq->ep_addr << 32) | mb->dynfield1[0];
+
+		while (nsegs && mb) {
+			/* Update the descriptor control fields */
+			qdma_ul_update_mm_h2c_desc(txq, mb);
+
+			len = rte_pktmbuf_data_len(mb);
+			txq->ep_addr = txq->ep_addr + len;
+			id = txq->q_pidx_info.pidx;
+			mb = mb->next;
+		}
+	}
+
+	/* Make sure writes to the H2C descriptors are synchronized before
+	 * updating PIDX
+	 */
+	rte_wmb();
+
+	/* update pidx pointer */
+	if (count > 0) {
+		qdma_dev->hw_access->qdma_queue_pidx_update(txq->dev,
+			qdma_dev->is_vf,
+			txq->queue_id, 0, &txq->q_pidx_info);
+	}
+
+	PMD_DRV_LOG(DEBUG, " xmit completed with count:%d", count);
+	return count;
+}
+/**
+ * DPDK callback for transmitting packets in burst.
+ *
+ * @param tx_queue
+ *   Generic pointer to TX queue structure.
+ * @param[in] tx_pkts
+ *   Packets to transmit.
+ * @param nb_pkts
+ *   Number of packets in array.
+ *
+ * @return
+ *   Number of packets successfully transmitted (<= nb_pkts).
+ */
+uint16_t qdma_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct qdma_tx_queue *txq = tx_queue;
+	uint16_t count;
+
+	if (txq->status != RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	if (txq->st_mode)
+		count =	qdma_xmit_pkts_st(txq, tx_pkts, nb_pkts);
+	else
+		count =	qdma_xmit_pkts_mm(txq, tx_pkts, nb_pkts);
+
+	return count;
+}
diff --git a/drivers/net/qdma/qdma_rxtx.h b/drivers/net/qdma/qdma_rxtx.h
index 5f902df695..397740abc0 100644
--- a/drivers/net/qdma/qdma_rxtx.h
+++ b/drivers/net/qdma/qdma_rxtx.h
@@ -7,6 +7,9 @@
 
 #include "qdma_access_export.h"
 
+/* forward declaration */
+struct qdma_tx_queue;
+
 /* Supporting functions for user logic pluggability */
 uint16_t qdma_get_rx_queue_id(void *queue_hndl);
 void qdma_get_device_info(void *queue_hndl,
@@ -15,6 +18,13 @@ void qdma_get_device_info(void *queue_hndl,
 struct qdma_ul_st_h2c_desc *get_st_h2c_desc(void *queue_hndl);
 struct qdma_ul_mm_desc *get_mm_h2c_desc(void *queue_hndl);
 uint32_t get_mm_c2h_ep_addr(void *queue_hndl);
+uint64_t get_mm_h2c_ep_addr(void *queue_hndl);
 uint32_t get_mm_buff_size(void *queue_hndl);
+uint16_t qdma_xmit_pkts_st(struct qdma_tx_queue *txq,
+			   struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts);
+uint16_t qdma_xmit_pkts_mm(struct qdma_tx_queue *txq,
+			   struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts);
 
 #endif /* QDMA_DPDK_RXTX_H_ */
diff --git a/drivers/net/qdma/qdma_user.c b/drivers/net/qdma/qdma_user.c
index 312bb86670..82f6750616 100644
--- a/drivers/net/qdma/qdma_user.c
+++ b/drivers/net/qdma/qdma_user.c
@@ -125,6 +125,55 @@ int qdma_ul_process_immediate_data_st(void *qhndl, void *cmpt_entry,
 	return 0;
 }
 
+/**
+ * Updates the ST H2C descriptor.
+ *
+ * @param qhndl
+ *   Pointer to TX queue handle.
+ * @param q_offloads
+ *   Offloads supported for the queue.
+ * @param mb
+ *   Pointer to memory buffer.
+ *
+ * @return
+ *   0 on success.
+ */
+int qdma_ul_update_st_h2c_desc(void *qhndl, uint64_t q_offloads,
+				struct rte_mbuf *mb)
+{
+	(void)q_offloads;
+	struct qdma_ul_st_h2c_desc *desc_info;
+	int nsegs = mb->nb_segs;
+	int pkt_segs = nsegs;
+
+	if (nsegs == 1) {
+		desc_info = get_st_h2c_desc(qhndl);
+		desc_info->len = rte_pktmbuf_data_len(mb);
+		desc_info->pld_len = desc_info->len;
+		desc_info->src_addr = mb->buf_iova + mb->data_off;
+		desc_info->flags = (S_H2C_DESC_F_SOP | S_H2C_DESC_F_EOP);
+		desc_info->cdh_flags = 0;
+	} else {
+		while (nsegs && mb) {
+			desc_info = get_st_h2c_desc(qhndl);
+
+			desc_info->len = rte_pktmbuf_data_len(mb);
+			desc_info->pld_len = desc_info->len;
+			desc_info->src_addr = mb->buf_iova + mb->data_off;
+			desc_info->flags = 0;
+			if (nsegs == pkt_segs)
+				desc_info->flags |= S_H2C_DESC_F_SOP;
+			if (nsegs == 1)
+				desc_info->flags |= S_H2C_DESC_F_EOP;
+			desc_info->cdh_flags = 0;
+
+			nsegs--;
+			mb = mb->next;
+		}
+	}
+	return 0;
+}
+
 /**
  * updates the MM c2h descriptor.
  *
@@ -155,6 +204,32 @@ int qdma_ul_update_mm_c2h_desc(void *qhndl, struct rte_mbuf *mb, void *desc)
 	return 0;
 }
 
+/**
+ * Updates the MM H2C descriptor.
+ *
+ * @param qhndl
+ *   Pointer to TX queue handle.
+ * @param mb
+ *   Pointer to memory buffer.
+ *
+ * @return
+ *   0 on success.
+ */
+int qdma_ul_update_mm_h2c_desc(void *qhndl, struct rte_mbuf *mb)
+{
+	struct qdma_ul_mm_desc *desc_info;
+
+	desc_info = (struct qdma_ul_mm_desc *)get_mm_h2c_desc(qhndl);
+	desc_info->src_addr = mb->buf_iova + mb->data_off;
+	desc_info->dst_addr = get_mm_h2c_ep_addr(qhndl);
+	desc_info->dv = 1;
+	desc_info->eop = 1;
+	desc_info->sop = 1;
+	desc_info->len = rte_pktmbuf_data_len(mb);
+
+	return 0;
+}
+
 /**
  * Processes the completion data from the given completion entry.
  *
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 18/29] net/qdma: add Tx queue reclaim routine
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (16 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 17/29] net/qdma: add Tx burst API Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 19/29] net/qdma: add callback function for Tx desc status Aman Kumar
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch defines the dev->tx_done_cleanup
callback for the net_qdma PMD.
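
The matching ethdev call, as a sketch (the count of 64 is
illustrative); note that the routine below returns -EINVAL when
free_cnt >= nb_tx_desc - 1:

    /* Ask the PMD to reclaim up to 64 transmitted mbufs on Tx queue 0 */
    int freed = rte_eth_tx_done_cleanup(port_id, 0, 64);

    if (freed < 0)
            printf("tx_done_cleanup failed: %d\n", freed);
    else
            printf("%d packets reclaimed\n", freed);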

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_devops.c |  8 --------
 drivers/net/qdma/qdma_rxtx.c   | 28 ++++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 10d7d67b87..12391790f0 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1717,14 +1717,6 @@ qdma_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 	qinfo->nb_desc = txq->nb_tx_desc - 1;
 }
 
-int qdma_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
-{
-	(void)tx_queue;
-	(void)free_cnt;
-
-	return 0;
-}
-
 static struct eth_dev_ops qdma_eth_dev_ops = {
 	.dev_configure            = qdma_dev_configure,
 	.dev_infos_get            = qdma_dev_infos_get,
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 1605c9973c..6842203ada 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -531,6 +531,34 @@ qdma_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
+/**
+ * DPDK callback to request the driver to free mbufs
+ * currently cached by the driver.
+ *
+ * @param tx_queue
+ *   Pointer to Tx queue specific data structure.
+ * @param free_cnt
+ *   Maximum number of packets to free. Use 0 to indicate all possible packets
+ *   should be freed. Note that a packet may be using multiple mbufs.
+ *
+ * @return
+ *   Failure: < 0
+ *   Success: >= 0
+ *     0-n: Number of packets freed. More packets may still remain in ring that
+ *     are in use.
+ */
+int
+qdma_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+	struct qdma_tx_queue *txq = tx_queue;
+
+	if ((uint16_t)free_cnt >= (txq->nb_tx_desc - 1))
+		return -EINVAL;
+
+	/* Free transmitted mbufs back to pool */
+	return reclaim_tx_mbuf(txq, txq->wb_status->cidx, free_cnt);
+}
+
 /* Transmit API for Streaming mode */
 uint16_t qdma_xmit_pkts_st(struct qdma_tx_queue *txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts)
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 19/29] net/qdma: add callback function for Tx desc status
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (17 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 18/29] net/qdma: add Tx queue reclaim routine Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 20/29] net/qdma: add Rx burst API Aman Kumar
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements the dev->tx_descriptor_status
callback for the net_qdma PMD.
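
An illustrative polling sketch using the new callback (offset 0 is
the slot the next packet will use):

    int status = rte_eth_tx_descriptor_status(port_id, 0, 0);

    switch (status) {
    case RTE_ETH_TX_DESC_DONE:
            /* hardware is done; the slot can be reused */
            break;
    case RTE_ETH_TX_DESC_FULL:
            /* still owned by hardware */
            break;
    case RTE_ETH_TX_DESC_UNAVAIL:
            /* the reserved descriptor */
            break;
    }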

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_devops.c |  1 +
 drivers/net/qdma/qdma_rxtx.c   | 47 ++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 12391790f0..dfa41a9aa7 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1759,4 +1759,5 @@ void qdma_dev_ops_init(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = &qdma_xmit_pkts;
 	dev->rx_queue_count = &qdma_dev_rx_queue_count;
 	dev->rx_descriptor_status = &qdma_dev_rx_descriptor_status;
+	dev->tx_descriptor_status = &qdma_dev_tx_descriptor_status;
 }
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 6842203ada..3abc72717f 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -559,6 +559,53 @@ qdma_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
 	return reclaim_tx_mbuf(txq, txq->wb_status->cidx, free_cnt);
 }
 
+/**
+ * DPDK callback to check the status of a Tx descriptor in the queue.
+ *
+ * @param tx_queue
+ *   Pointer to Tx queue specific data structure.
+ * @param offset
+ *   The offset of the descriptor starting from tail (0 is the place where
+ *   the next packet will be sent).
+ *
+ * @return
+ *  - (RTE_ETH_TX_DESC_FULL) Descriptor is being processed by the hw, i.e.
+ *    in the transmit queue.
+ *  - (RTE_ETH_TX_DESC_DONE) Hardware is done with this descriptor, it can
+ *    be reused by the driver.
+ *  - (RTE_ETH_TX_DESC_UNAVAIL): Descriptor is unavailable, reserved by the
+ *    driver or the hardware.
+ *  - (-EINVAL) bad descriptor offset.
+ */
+int
+qdma_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct qdma_tx_queue *txq = tx_queue;
+	uint16_t id;
+	int avail, in_use;
+	uint16_t cidx = 0;
+
+	if (unlikely(offset >= (txq->nb_tx_desc - 1)))
+		return -EINVAL;
+
+	/* One descriptor is reserved so that pidx is not same as old pidx */
+	if (offset == (txq->nb_tx_desc - 2))
+		return RTE_ETH_TX_DESC_UNAVAIL;
+
+	id = txq->q_pidx_info.pidx;
+	cidx = txq->wb_status->cidx;
+
+	in_use = (int)id - cidx;
+	if (in_use < 0)
+		in_use += (txq->nb_tx_desc - 1);
+	avail = txq->nb_tx_desc - 2 - in_use;
+
+	if (offset < avail)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 /* Transmit API for Streaming mode */
 uint16_t qdma_xmit_pkts_st(struct qdma_tx_queue *txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts)
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 20/29] net/qdma: add Rx burst API
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (18 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 19/29] net/qdma: add callback function for Tx desc status Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 21/29] net/qdma: add mailbox communication library Aman Kumar
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add Rx data path burst API support for the device.
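
A minimal receive-loop sketch (BURST_SZ and the processing step are
placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SZ 32

    struct rte_mbuf *pkts[BURST_SZ];
    /* qdma_recv_pkts() internally caps the burst at
     * QDMA_MAX_BURST_SIZE and at the completions available.
     */
    uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SZ);
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
            /* ... process pkts[i] ... */
            rte_pktmbuf_free(pkts[i]);
    }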

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_devops.c |  10 -
 drivers/net/qdma/qdma_rxtx.c   | 709 +++++++++++++++++++++++++++++++++
 drivers/net/qdma/qdma_rxtx.h   |   8 +-
 3 files changed, 716 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index dfa41a9aa7..7f525773d0 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1742,16 +1742,6 @@ static struct eth_dev_ops qdma_eth_dev_ops = {
 	.txq_info_get             = qdma_dev_txq_info_get,
 };
 
-uint16_t qdma_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			uint16_t nb_pkts)
-{
-	(void)rx_queue;
-	(void)rx_pkts;
-	(void)nb_pkts;
-
-	return 0;
-}
-
 void qdma_dev_ops_init(struct rte_eth_dev *dev)
 {
 	dev->dev_ops = &qdma_eth_dev_ops;
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 3abc72717f..7652f35dd2 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -20,6 +20,20 @@
 #endif /* RTE_ARCH_X86_64 */
 
 /******** User logic dependent functions start **********/
+static int qdma_ul_extract_st_cmpt_info_v(void *ul_cmpt_entry, void *cmpt_info)
+{
+	union qdma_ul_st_cmpt_ring *cmpt_data, *cmpt_desc;
+
+	cmpt_desc = (union qdma_ul_st_cmpt_ring *)(ul_cmpt_entry);
+	cmpt_data = (union qdma_ul_st_cmpt_ring *)(cmpt_info);
+
+	cmpt_data->data = cmpt_desc->data;
+	if (unlikely(!cmpt_desc->desc_used))
+		cmpt_data->length = 0;
+
+	return 0;
+}
+
 #ifdef QDMA_RX_VEC_X86_64
 /* Vector implementation to get packet length from two completion entries */
 static void qdma_ul_get_cmpt_pkt_len_v(void *ul_cmpt_entry, __m128i *data)
@@ -410,6 +424,107 @@ static void adapt_update_counter(struct qdma_rx_queue *rxq,
 }
 #endif /* QDMA_LATENCY_OPTIMIZED */
 
+/* Process completion ring */
+static int process_cmpt_ring(struct qdma_rx_queue *rxq,
+		uint16_t num_cmpt_entries)
+{
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+	union qdma_ul_st_cmpt_ring *user_cmpt_entry;
+	uint32_t count = 0;
+	int ret = 0;
+	uint16_t rx_cmpt_tail = rxq->cmpt_cidx_info.wrb_cidx;
+
+	if (likely(!rxq->dump_immediate_data)) {
+		if ((rx_cmpt_tail + num_cmpt_entries) <
+			(rxq->nb_rx_cmpt_desc - 1)) {
+			for (count = 0; count < num_cmpt_entries; count++) {
+				user_cmpt_entry =
+				(union qdma_ul_st_cmpt_ring *)
+				((uint64_t)rxq->cmpt_ring +
+				((uint64_t)rx_cmpt_tail * rxq->cmpt_desc_len));
+
+				ret = qdma_ul_extract_st_cmpt_info_v
+						(user_cmpt_entry,
+						&rxq->cmpt_data[count]);
+				if (ret != 0) {
+					PMD_DRV_LOG(ERR, "Error detected on CMPT ring "
+						"at index %d, queue_id = %d\n",
+						rx_cmpt_tail, rxq->queue_id);
+					rxq->err = 1;
+					return -1;
+				}
+				rx_cmpt_tail++;
+			}
+		} else {
+			while (count < num_cmpt_entries) {
+				user_cmpt_entry =
+				(union qdma_ul_st_cmpt_ring *)
+				((uint64_t)rxq->cmpt_ring +
+				((uint64_t)rx_cmpt_tail * rxq->cmpt_desc_len));
+
+				ret = qdma_ul_extract_st_cmpt_info_v
+						(user_cmpt_entry,
+						&rxq->cmpt_data[count]);
+				if (ret != 0) {
+					PMD_DRV_LOG(ERR, "Error detected on CMPT ring "
+						"at index %d, queue_id = %d\n",
+						rx_cmpt_tail, rxq->queue_id);
+					rxq->err = 1;
+					return -1;
+				}
+
+				rx_cmpt_tail++;
+				if (unlikely(rx_cmpt_tail >=
+					(rxq->nb_rx_cmpt_desc - 1)))
+					rx_cmpt_tail -=
+						(rxq->nb_rx_cmpt_desc - 1);
+				count++;
+			}
+		}
+	} else {
+		while (count < num_cmpt_entries) {
+			user_cmpt_entry =
+			(union qdma_ul_st_cmpt_ring *)
+			((uint64_t)rxq->cmpt_ring +
+			((uint64_t)rx_cmpt_tail * rxq->cmpt_desc_len));
+
+			ret = qdma_ul_extract_st_cmpt_info
+					(user_cmpt_entry,
+					&rxq->cmpt_data[count]);
+			if (ret != 0) {
+				PMD_DRV_LOG(ERR, "Error detected on CMPT ring "
+					"at CMPT index %d, queue_id = %d\n",
+					rx_cmpt_tail, rxq->queue_id);
+				rxq->err = 1;
+				return -1;
+			}
+
+			ret = qdma_ul_process_immediate_data_st((void *)rxq,
+					user_cmpt_entry, rxq->cmpt_desc_len);
+			if (ret < 0) {
+				PMD_DRV_LOG(ERR, "Error processing immediate data "
+					"at CMPT index = %d, queue_id = %d\n",
+					rx_cmpt_tail, rxq->queue_id);
+				return -1;
+			}
+
+			rx_cmpt_tail++;
+			if (unlikely(rx_cmpt_tail >=
+				(rxq->nb_rx_cmpt_desc - 1)))
+				rx_cmpt_tail -= (rxq->nb_rx_cmpt_desc - 1);
+			count++;
+		}
+	}
+
+	/* Update the CPMT CIDX */
+	rxq->cmpt_cidx_info.wrb_cidx = rx_cmpt_tail;
+	qdma_dev->hw_access->qdma_queue_cmpt_cidx_update(rxq->dev,
+		qdma_dev->is_vf,
+		rxq->queue_id, &rxq->cmpt_cidx_info);
+
+	return 0;
+}
+
 static uint32_t rx_queue_count(void *rx_queue)
 {
 	struct qdma_rx_queue *rxq = rx_queue;
@@ -531,6 +646,600 @@ qdma_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
+/* Update mbuf for a segmented packet */
+static struct rte_mbuf *prepare_segmented_packet(struct qdma_rx_queue *rxq,
+		uint16_t pkt_length, uint16_t *tail)
+{
+	struct rte_mbuf *mb;
+	struct rte_mbuf *first_seg = NULL;
+	struct rte_mbuf *last_seg = NULL;
+	uint16_t id = *tail;
+	uint16_t length;
+	uint16_t rx_buff_size = rxq->rx_buff_size;
+
+	do {
+		mb = rxq->sw_ring[id];
+		rxq->sw_ring[id++] = NULL;
+		length = pkt_length;
+
+		if (unlikely(id >= (rxq->nb_rx_desc - 1)))
+			id -= (rxq->nb_rx_desc - 1);
+		if (pkt_length > rx_buff_size) {
+			rte_pktmbuf_data_len(mb) = rx_buff_size;
+			pkt_length -= rx_buff_size;
+		} else {
+			rte_pktmbuf_data_len(mb) = pkt_length;
+			pkt_length = 0;
+		}
+		rte_mbuf_refcnt_set(mb, 1);
+
+		if (first_seg == NULL) {
+			first_seg = mb;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = length;
+			first_seg->packet_type = 0;
+			first_seg->ol_flags = 0;
+			first_seg->port = rxq->port_id;
+			first_seg->vlan_tci = 0;
+			first_seg->hash.rss = 0;
+		} else {
+			first_seg->nb_segs++;
+			if (last_seg != NULL)
+				last_seg->next = mb;
+		}
+
+		last_seg = mb;
+		mb->next = NULL;
+	} while (pkt_length);
+
+	*tail = id;
+	return first_seg;
+}
+
+/* Prepare mbuf for one packet */
+static inline
+struct rte_mbuf *prepare_single_packet(struct qdma_rx_queue *rxq,
+		uint16_t cmpt_idx)
+{
+	struct rte_mbuf *mb = NULL;
+	uint16_t id = rxq->rx_tail;
+	uint16_t pkt_length;
+
+	pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[cmpt_idx]);
+
+	if (pkt_length) {
+		if (likely(pkt_length <= rxq->rx_buff_size)) {
+			mb = rxq->sw_ring[id];
+			rxq->sw_ring[id++] = NULL;
+
+			if (unlikely(id >= (rxq->nb_rx_desc - 1)))
+				id -= (rxq->nb_rx_desc - 1);
+
+			rte_mbuf_refcnt_set(mb, 1);
+			mb->nb_segs = 1;
+			mb->port = rxq->port_id;
+			mb->ol_flags = 0;
+			mb->packet_type = 0;
+			mb->pkt_len = pkt_length;
+			mb->data_len = pkt_length;
+		} else {
+			mb = prepare_segmented_packet(rxq, pkt_length, &id);
+		}
+
+		rxq->rx_tail = id;
+	}
+	return mb;
+}
+
+#ifdef QDMA_RX_VEC_X86_64
+/* Vector implementation to prepare mbufs for packets.
+ * Update this API if HW provides more information to be populated in mbuf.
+ */
+static uint16_t prepare_packets_v(struct qdma_rx_queue *rxq,
+			struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf *mb;
+	uint16_t count = 0, count_pkts = 0;
+	uint16_t n_pkts = nb_pkts & -2;
+	uint16_t id = rxq->rx_tail;
+	struct rte_mbuf **sw_ring = rxq->sw_ring;
+	uint16_t rx_buff_size = rxq->rx_buff_size;
+	/* mask to shuffle from desc. to mbuf */
+	__m128i shuf_msk = _mm_set_epi8
+			(0xFF, 0xFF, 0xFF, 0xFF,  /* skip 32bits rss */
+			0xFF, 0xFF,      /* skip low 16 bits vlan_macip */
+			1, 0,      /* octet 0~1, 16 bits data_len */
+			0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+			1, 0,      /* octet 0~1, low 16 bits pkt_len */
+			0xFF, 0xFF,  /* skip 32 bit pkt_type */
+			0xFF, 0xFF
+			);
+	__m128i mbuf_init, pktlen, zero_data;
+
+	mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	pktlen = _mm_setzero_si128();
+	zero_data = _mm_setzero_si128();
+
+	/* compile-time check */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+
+	for (count = 0; count < n_pkts;
+		count += RTE_QDMA_DESCS_PER_LOOP) {
+		__m128i pkt_len[RTE_QDMA_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2;
+		__m128i mbp1;
+		uint16_t pktlen1, pktlen2;
+
+		qdma_ul_get_cmpt_pkt_len_v
+			(&rxq->cmpt_data[count], pkt_len);
+
+		pktlen1 = _mm_extract_epi16(pkt_len[0], 0);
+		pktlen2 = _mm_extract_epi16(pkt_len[1], 0);
+
+		/* Check if packets are segmented across descriptors */
+		if ((pktlen1 && pktlen1 <= rx_buff_size) &&
+			(pktlen2 && pktlen2 <= rx_buff_size) &&
+			((id + RTE_QDMA_DESCS_PER_LOOP) <
+				(rxq->nb_rx_desc - 1))) {
+			/* Load 2 (64 bit) mbuf pointers */
+			mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[id]);
+
+			/* Copy two 64-bit mbuf pointers into rx_pkts */
+			_mm_storeu_si128((__m128i *)&rx_pkts[count_pkts], mbp1);
+			_mm_storeu_si128((__m128i *)&sw_ring[id], zero_data);
+
+			/* Pkt 1,2 convert format from desc to pktmbuf */
+			/* We only have packet length to copy */
+			pkt_mb2 = _mm_shuffle_epi8(pkt_len[1], shuf_msk);
+			pkt_mb1 = _mm_shuffle_epi8(pkt_len[0], shuf_msk);
+
+			/* Write the rearm data and the olflags in one write */
+			_mm_store_si128
+			((__m128i *)&rx_pkts[count_pkts]->rearm_data, mbuf_init);
+			_mm_store_si128
+			((__m128i *)&rx_pkts[count_pkts + 1]->rearm_data,
+			mbuf_init);
+
+			/* Write packet length */
+			_mm_storeu_si128
+			((void *)&rx_pkts[count_pkts]->rx_descriptor_fields1,
+			pkt_mb1);
+			_mm_storeu_si128
+			((void *)&rx_pkts[count_pkts + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+
+			/* Accumulate packet length counter */
+			pktlen = _mm_add_epi32(pktlen, pkt_len[0]);
+			pktlen = _mm_add_epi32(pktlen, pkt_len[1]);
+
+			count_pkts += RTE_QDMA_DESCS_PER_LOOP;
+			id += RTE_QDMA_DESCS_PER_LOOP;
+		} else {
+			/* Handle packets segmented
+			 * across multiple descriptors
+			 * or ring wrap
+			 */
+			if (pktlen1) {
+				mb = prepare_segmented_packet(rxq,
+					pktlen1, &id);
+				rx_pkts[count_pkts++] = mb;
+				pktlen = _mm_add_epi32(pktlen, pkt_len[0]);
+			}
+
+			if (pktlen2) {
+				mb = prepare_segmented_packet(rxq,
+					pktlen2, &id);
+				rx_pkts[count_pkts++] = mb;
+				pktlen = _mm_add_epi32(pktlen, pkt_len[1]);
+			}
+		}
+	}
+
+	rxq->stats.pkts += count_pkts;
+	rxq->stats.bytes += _mm_extract_epi64(pktlen, 0);
+	rxq->rx_tail = id;
+
+	/* Handle single packet, if any pending */
+	if (nb_pkts & 1) {
+		mb = prepare_single_packet(rxq, count);
+		if (mb)
+			rx_pkts[count_pkts++] = mb;
+	}
+
+	return count_pkts;
+}
+#endif /* QDMA_RX_VEC_X86_64 */
+
+/* Prepare mbufs with packet information */
+static uint16_t prepare_packets(struct qdma_rx_queue *rxq,
+			struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint16_t count_pkts = 0;
+
+#ifdef QDMA_RX_VEC_X86_64
+	count_pkts = prepare_packets_v(rxq, rx_pkts, nb_pkts);
+#else /* QDMA_RX_VEC_X86_64 */
+	struct rte_mbuf *mb;
+	uint16_t pkt_length;
+	uint16_t count = 0;
+	while (count < nb_pkts) {
+		pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[count]);
+		if (pkt_length) {
+			mb = prepare_segmented_packet(rxq,
+					pkt_length, &rxq->rx_tail);
+			rx_pkts[count_pkts++] = mb;
+		}
+		count++;
+	}
+#endif /* QDMA_RX_VEC_X86_64 */
+
+	return count_pkts;
+}
+
+/* Populate C2H ring with new buffers */
+static int rearm_c2h_ring(struct qdma_rx_queue *rxq, uint16_t num_desc)
+{
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+	struct rte_mbuf *mb;
+	struct qdma_ul_st_c2h_desc *rx_ring_st =
+			(struct qdma_ul_st_c2h_desc *)rxq->rx_ring;
+	uint16_t mbuf_index = 0;
+	uint16_t id;
+	int rearm_descs;
+
+	id = rxq->q_pidx_info.pidx;
+
+	/* Split the C2H ring update into two parts.
+	 * First handle till end of ring and then
+	 * handle from beginning of ring, if ring wraps
+	 */
+	if ((id + num_desc) < (rxq->nb_rx_desc - 1))
+		rearm_descs = num_desc;
+	else
+		rearm_descs = (rxq->nb_rx_desc - 1) - id;
+
+	/* allocate new buffer */
+	if (rte_mempool_get_bulk(rxq->mb_pool, (void *)&rxq->sw_ring[id],
+					rearm_descs) != 0){
+		PMD_DRV_LOG(ERR, "%s(): %d: No MBUFS, queue id = %d,"
+		"mbuf_avail_count = %d,"
+		" mbuf_in_use_count = %d, num_desc_req = %d\n",
+		__func__, __LINE__, rxq->queue_id,
+		rte_mempool_avail_count(rxq->mb_pool),
+		rte_mempool_in_use_count(rxq->mb_pool), rearm_descs);
+		return -1;
+	}
+
+#ifdef QDMA_RX_VEC_X86_64
+	int rearm_cnt = rearm_descs & -2;
+	__m128i head_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+
+	for (mbuf_index = 0; mbuf_index < ((uint16_t)rearm_cnt  & 0xFFFF);
+			mbuf_index += RTE_QDMA_DESCS_PER_LOOP,
+			id += RTE_QDMA_DESCS_PER_LOOP) {
+		__m128i vaddr0, vaddr1;
+		__m128i dma_addr;
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+
+		/* Load two mbufs data addresses */
+		vaddr0 = _mm_loadu_si128
+				((__m128i *)&rxq->sw_ring[id]->buf_addr);
+		vaddr1 = _mm_loadu_si128
+				((__m128i *)&rxq->sw_ring[id + 1]->buf_addr);
+
+		/* Extract physical addresses of two mbufs */
+		dma_addr = _mm_unpackhi_epi64(vaddr0, vaddr1);
+
+		/* Add headroom to dma_addr */
+		dma_addr = _mm_add_epi64(dma_addr, head_room);
+
+		/* Write C2H desc with physical dma_addr */
+		_mm_storeu_si128((__m128i *)&rx_ring_st[id], dma_addr);
+	}
+
+	if (rearm_descs & 1) {
+		mb = rxq->sw_ring[id];
+
+		/* rearm descriptor */
+		rx_ring_st[id].dst_addr =
+				(uint64_t)mb->buf_iova +
+					RTE_PKTMBUF_HEADROOM;
+		id++;
+	}
+#else /* QDMA_RX_VEC_X86_64 */
+	for (mbuf_index = 0; mbuf_index < rearm_descs;
+			mbuf_index++, id++) {
+		mb = rxq->sw_ring[id];
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* rearm descriptor */
+		rx_ring_st[id].dst_addr =
+				(uint64_t)mb->buf_iova +
+					RTE_PKTMBUF_HEADROOM;
+	}
+#endif /* QDMA_RX_VEC_X86_64 */
+
+	if (unlikely(id >= (rxq->nb_rx_desc - 1)))
+		id -= (rxq->nb_rx_desc - 1);
+
+	/* Handle from beginning of ring, if ring wrapped */
+	rearm_descs = num_desc - rearm_descs;
+	if (unlikely(rearm_descs)) {
+		/* allocate new buffer */
+		if (rte_mempool_get_bulk(rxq->mb_pool,
+			(void *)&rxq->sw_ring[id], rearm_descs) != 0) {
+			PMD_DRV_LOG(ERR, "%s(): %d: No MBUFS, queue id = %d,"
+			"mbuf_avail_count = %d,"
+			" mbuf_in_use_count = %d, num_desc_req = %d\n",
+			__func__, __LINE__, rxq->queue_id,
+			rte_mempool_avail_count(rxq->mb_pool),
+			rte_mempool_in_use_count(rxq->mb_pool), rearm_descs);
+
+			rxq->q_pidx_info.pidx = id;
+			qdma_dev->hw_access->qdma_queue_pidx_update(rxq->dev,
+				qdma_dev->is_vf,
+				rxq->queue_id, 1, &rxq->q_pidx_info);
+
+			return -1;
+		}
+
+		for (mbuf_index = 0;
+				mbuf_index < ((uint16_t)rearm_descs & 0xFFFF);
+				mbuf_index++, id++) {
+			mb = rxq->sw_ring[id];
+			mb->data_off = RTE_PKTMBUF_HEADROOM;
+
+			/* rearm descriptor */
+			rx_ring_st[id].dst_addr =
+					(uint64_t)mb->buf_iova +
+						RTE_PKTMBUF_HEADROOM;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "%s(): %d: PIDX Update: queue id = %d, "
+				"num_desc = %d",
+				__func__, __LINE__, rxq->queue_id,
+				num_desc);
+
+	/* Make sure writes to the C2H descriptors are
+	 * synchronized before updating PIDX
+	 */
+	rte_wmb();
+
+	rxq->q_pidx_info.pidx = id;
+	qdma_dev->hw_access->qdma_queue_pidx_update(rxq->dev,
+		qdma_dev->is_vf,
+		rxq->queue_id, 1, &rxq->q_pidx_info);
+
+	return 0;
+}
+
+/* Receive API for Streaming mode */
+uint16_t qdma_recv_pkts_st(struct qdma_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts)
+{
+	uint16_t count_pkts;
+	struct wb_status *wb_status;
+	uint16_t nb_pkts_avail = 0;
+	uint16_t rx_cmpt_tail = 0;
+	uint16_t cmpt_pidx, c2h_pidx;
+	uint16_t pending_desc;
+#ifdef TEST_64B_DESC_BYPASS
+	int bypass_desc_sz_idx = qmda_get_desc_sz_idx(rxq->bypass_desc_sz);
+#endif
+
+	if (unlikely(rxq->err))
+		return 0;
+
+	PMD_DRV_LOG(DEBUG, "recv start on rx queue-id :%d, on "
+			"tail index:%d number of pkts %d",
+			rxq->queue_id, rxq->rx_tail, nb_pkts);
+	wb_status = rxq->wb_status;
+	rx_cmpt_tail = rxq->cmpt_cidx_info.wrb_cidx;
+
+#ifdef TEST_64B_DESC_BYPASS
+	if (unlikely(rxq->en_bypass &&
+			bypass_desc_sz_idx == SW_DESC_CNTXT_64B_BYPASS_DMA)) {
+		PMD_DRV_LOG(DEBUG, "For RX ST-mode, example"
+				" design doesn't support 64-byte descriptors\n");
+		return 0;
+	}
+#endif
+	cmpt_pidx = wb_status->pidx;
+
+	if (rx_cmpt_tail < cmpt_pidx)
+		nb_pkts_avail = cmpt_pidx - rx_cmpt_tail;
+	else if (rx_cmpt_tail > cmpt_pidx)
+		nb_pkts_avail = rxq->nb_rx_cmpt_desc - 1 - rx_cmpt_tail +
+				cmpt_pidx;
+
+	if (nb_pkts_avail == 0) {
+		PMD_DRV_LOG(DEBUG, "%s(): %d: nb_pkts_avail = 0\n",
+				__func__, __LINE__);
+		return 0;
+	}
+
+	if (nb_pkts > QDMA_MAX_BURST_SIZE)
+		nb_pkts = QDMA_MAX_BURST_SIZE;
+
+	if (nb_pkts > nb_pkts_avail)
+		nb_pkts = nb_pkts_avail;
+
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+	PMD_DRV_LOG(DEBUG, "%s(): %d: queue id = %d, mbuf_avail_count = %d, "
+			"mbuf_in_use_count = %d",
+		__func__, __LINE__, rxq->queue_id,
+		rte_mempool_avail_count(rxq->mb_pool),
+		rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+	/* Make sure reads to CMPT ring are synchronized before
+	 * accessing the ring
+	 */
+	rte_rmb();
+#ifdef QDMA_LATENCY_OPTIMIZED
+	adapt_update_counter(rxq, nb_pkts_avail);
+#endif /* QDMA_LATENCY_OPTIMIZED */
+	if (process_cmpt_ring(rxq, nb_pkts) != 0)
+		return 0;
+
+	if (rxq->status != RTE_ETH_QUEUE_STATE_STARTED) {
+		PMD_DRV_LOG(DEBUG, "%s(): %d: rxq->status = %d\n",
+				__func__, __LINE__, rxq->status);
+		return 0;
+	}
+
+	count_pkts = prepare_packets(rxq, rx_pkts, nb_pkts);
+
+	c2h_pidx = rxq->q_pidx_info.pidx;
+	pending_desc = rxq->rx_tail - c2h_pidx - 1;
+	if (rxq->rx_tail < (c2h_pidx + 1))
+		pending_desc = rxq->nb_rx_desc - 2 + rxq->rx_tail -
+				c2h_pidx;
+
+	/* Batch the PIDX updates; this minimizes overhead on
+	 * the descriptor engine
+	 */
+	if (pending_desc >= MIN_RX_PIDX_UPDATE_THRESHOLD)
+		rearm_c2h_ring(rxq, pending_desc);
+
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+	PMD_DRV_LOG(DEBUG, "%s(): %d: queue id = %d, mbuf_avail_count = %d,"
+			" mbuf_in_use_count = %d, count_pkts = %d",
+		__func__, __LINE__, rxq->queue_id,
+		rte_mempool_avail_count(rxq->mb_pool),
+		rte_mempool_in_use_count(rxq->mb_pool), count_pkts);
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+
+	PMD_DRV_LOG(DEBUG, " Recv complete with hw cidx :%d",
+				rxq->wb_status->cidx);
+	PMD_DRV_LOG(DEBUG, " Recv complete with hw pidx :%d\n",
+				rxq->wb_status->pidx);
+
+	return count_pkts;
+}
+
+/* Receive API for Memory mapped mode */
+uint16_t qdma_recv_pkts_mm(struct qdma_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_mbuf *mb;
+	uint32_t count, id;
+	struct qdma_ul_mm_desc *desc;
+	uint32_t len;
+	struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
+#ifdef TEST_64B_DESC_BYPASS
+	int bypass_desc_sz_idx = qmda_get_desc_sz_idx(rxq->bypass_desc_sz);
+#endif
+
+	if (rxq->status != RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	id = rxq->q_pidx_info.pidx; /* Descriptor index */
+
+	PMD_DRV_LOG(DEBUG, "recv start on rx queue-id :%d, on tail index:%d\n",
+			rxq->queue_id, id);
+
+#ifdef TEST_64B_DESC_BYPASS
+	if (unlikely(rxq->en_bypass &&
+			bypass_desc_sz_idx == SW_DESC_CNTXT_64B_BYPASS_DMA)) {
+		PMD_DRV_LOG(DEBUG, "For MM mode, example design doesn't "
+				"support 64-byte descriptors\n");
+		return 0;
+	}
+#endif
+	/* Make 1 less available, otherwise if we allow all descriptors
+	 * to be filled, when nb_pkts = nb_rx_desc - 1, pidx will be same
+	 * as old pidx and HW will treat this as no new descriptors were added.
+	 * Hence, DMA won't happen with new descriptors.
+	 */
+	if (nb_pkts > rxq->nb_rx_desc - 2)
+		nb_pkts = rxq->nb_rx_desc - 2;
+
+	for (count = 0; count < nb_pkts; count++) {
+		/* allocate new buffer */
+		if (rte_mempool_get(rxq->mb_pool, (void *)&mb) != 0) {
+			PMD_DRV_LOG(ERR, "%s(): %d: No MBUFS, queue id = %d,"
+			"mbuf_avail_count = %d,"
+			" mbuf_in_use_count = %d\n",
+			__func__, __LINE__, rxq->queue_id,
+			rte_mempool_avail_count(rxq->mb_pool),
+			rte_mempool_in_use_count(rxq->mb_pool));
+			return 0;
+		}
+
+		desc = (struct qdma_ul_mm_desc *)rxq->rx_ring;
+		desc += id;
+		qdma_ul_update_mm_c2h_desc(rxq, mb, desc);
+
+		len = (int)rxq->rx_buff_size;
+		rte_pktmbuf_pkt_len(mb) = len;
+
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->packet_type = 0;
+		mb->ol_flags = 0;
+		mb->next = 0;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		mb->vlan_tci = 0;
+		mb->hash.rss = 0;
+
+		rx_pkts[count] = mb;
+
+		rxq->ep_addr = (rxq->ep_addr + len) % DMA_BRAM_SIZE;
+		id = (id + 1) % (rxq->nb_rx_desc - 1);
+	}
+
+	/* Make sure writes to the C2H descriptors are synchronized
+	 * before updating PIDX
+	 */
+	rte_wmb();
+
+	/* update pidx pointer for MM-mode */
+	if (count > 0) {
+		rxq->q_pidx_info.pidx = id;
+		qdma_dev->hw_access->qdma_queue_pidx_update(rxq->dev,
+			qdma_dev->is_vf,
+			rxq->queue_id, 1, &rxq->q_pidx_info);
+	}
+
+	return count;
+}
+/**
+ * DPDK callback for receiving packets in burst.
+ *
+ * @param rx_queue
+ *   Generic pointer to Rx queue structure.
+ * @param[out] rx_pkts
+ *   Array to store received packets.
+ * @param nb_pkts
+ *   Maximum number of packets in array.
+ *
+ * @return
+ *   Number of packets successfully received (<= nb_pkts).
+ */
+uint16_t qdma_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct qdma_rx_queue *rxq = rx_queue;
+	uint32_t count;
+
+	if (rxq->st_mode)
+		count = qdma_recv_pkts_st(rxq, rx_pkts, nb_pkts);
+	else
+		count = qdma_recv_pkts_mm(rxq, rx_pkts, nb_pkts);
+
+	return count;
+}
+
 /**
  * DPDK callback to request the driver to free mbufs
  * currently cached by the driver.
diff --git a/drivers/net/qdma/qdma_rxtx.h b/drivers/net/qdma/qdma_rxtx.h
index 397740abc0..b940788973 100644
--- a/drivers/net/qdma/qdma_rxtx.h
+++ b/drivers/net/qdma/qdma_rxtx.h
@@ -9,6 +9,7 @@
 
 /* forward declaration */
 struct qdma_tx_queue;
+struct qdma_rx_queue;
 
 /* Supporting functions for user logic pluggability */
 uint16_t qdma_get_rx_queue_id(void *queue_hndl);
@@ -26,5 +27,10 @@ uint16_t qdma_xmit_pkts_st(struct qdma_tx_queue *txq,
 uint16_t qdma_xmit_pkts_mm(struct qdma_tx_queue *txq,
 			   struct rte_mbuf **tx_pkts,
 			   uint16_t nb_pkts);
-
+uint16_t qdma_recv_pkts_st(struct qdma_rx_queue *rxq,
+			   struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t qdma_recv_pkts_mm(struct qdma_rx_queue *rxq,
+			   struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
 #endif /* QDMA_DPDK_RXTX_H_ */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 21/29] net/qdma: add mailbox communication library
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (19 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 20/29] net/qdma: add Rx burst API Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 22/29] net/qdma: mbox API adaptation in Rx/Tx init Aman Kumar
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements a mailbox communication
mechanism to enable communication between
virtual functions and the physical function.
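
A rough sketch of the message lifecycle these helpers provide (the
request encoding is a placeholder, and ownership of the message on the
success path follows the usage in the diff below; assumed to run inside
a helper returning int):

    struct qdma_mbox_msg *msg = qdma_mbox_msg_alloc();
    int rv;

    if (msg == NULL)
            return -ENOMEM;
    /* ... encode the request into msg->raw_data ... */
    rv = qdma_mbox_msg_send(dev, msg, 0); /* 0: do not wait for response */
    if (rv < 0)
            qdma_mbox_msg_free(msg);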

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build |   1 +
 drivers/net/qdma/qdma.h      |   2 +
 drivers/net/qdma/qdma_mbox.c | 400 +++++++++++++++++++++++++++++++++++
 drivers/net/qdma/qdma_mbox.h |  47 ++++
 4 files changed, 450 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_mbox.c
 create mode 100644 drivers/net/qdma/qdma_mbox.h

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index 8c86412b83..dd2478be6c 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -25,6 +25,7 @@ sources = files(
         'qdma_common.c',
         'qdma_devops.c',
         'qdma_ethdev.c',
+        'qdma_mbox.c',
         'qdma_user.c',
         'qdma_rxtx.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 8515ebe60e..8fb64c21b0 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -19,6 +19,7 @@
 #include "qdma_user.h"
 #include "qdma_resource_mgmt.h"
 #include "qdma_access_common.h"
+#include "qdma_mbox.h"
 #include "rte_pmd_qdma.h"
 #include "qdma_log.h"
 
@@ -278,6 +279,7 @@ struct qdma_pci_dev {
 	uint32_t ip_type:4;
 
 	struct queue_info *q_info;
+	struct qdma_dev_mbox mbox;
 	uint8_t init_q_range;
 
 	uint32_t g_ring_sz[QDMA_NUM_RING_SIZES];
diff --git a/drivers/net/qdma/qdma_mbox.c b/drivers/net/qdma/qdma_mbox.c
new file mode 100644
index 0000000000..eec2ecaf6a
--- /dev/null
+++ b/drivers/net/qdma/qdma_mbox.c
@@ -0,0 +1,400 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_alarm.h>
+#include <rte_eal.h>
+
+#include "qdma.h"
+#include "qdma_mbox.h"
+
+/*
+ * Get index from VF info array of PF device for a given VF function id.
+ */
+static int qdma_get_internal_vf_index(struct rte_eth_dev *dev, uint8_t devfn)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint16_t  i;
+
+	for (i = 0; i < pci_dev->max_vfs; i++) {
+		if (qdma_dev->vfinfo[i].func_id == devfn)
+			return i;
+	}
+
+	return QDMA_FUNC_ID_INVALID;
+}
+
+static void qdma_mbox_process_msg_from_vf(void *arg)
+{
+	struct qdma_mbox_msg *mbox_msg_rsp = NULL;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint16_t vf_func_id;
+	uint16_t vf_index;
+	int i, rv;
+
+	if (!qdma_dev)
+		return;
+
+	mbox_msg_rsp = qdma_mbox_msg_alloc();
+	if (mbox_msg_rsp == NULL)
+		return;
+
+	rv = qdma_mbox_pf_rcv_msg_handler(dev,
+					  qdma_dev->dma_device_index,
+					  qdma_dev->func_id,
+					  qdma_dev->mbox.rx_data,
+					  mbox_msg_rsp->raw_data);
+	if (rv != QDMA_MBOX_VF_OFFLINE &&
+			rv != QDMA_MBOX_VF_RESET &&
+			rv != QDMA_MBOX_PF_RESET_DONE &&
+			rv != QDMA_MBOX_PF_BYE)
+		qdma_mbox_msg_send(dev, mbox_msg_rsp, 0);
+	else
+		qdma_mbox_msg_free(mbox_msg_rsp);
+
+	if (rv == QDMA_MBOX_VF_ONLINE) {
+		vf_func_id = qdma_mbox_vf_func_id_get(qdma_dev->mbox.rx_data,
+			qdma_dev->is_vf);
+		/* Mapping internal VF function id to a valid VF function id */
+		for (i = 0; i < pci_dev->max_vfs; i++) {
+			if (qdma_dev->vfinfo[i].func_id ==
+					QDMA_FUNC_ID_INVALID) {
+				qdma_dev->vfinfo[i].func_id =
+					vf_func_id;
+				break;
+			}
+		}
+
+		if (i == pci_dev->max_vfs) {
+			PMD_DRV_LOG(INFO,
+			"PF-%d failed to create function id mapping for VF func_id %d",
+				    qdma_dev->func_id, vf_func_id);
+			return;
+		}
+
+		qdma_dev->vf_online_count++;
+	} else if (rv == QDMA_MBOX_VF_OFFLINE) {
+		if (!qdma_dev->reset_in_progress) {
+			vf_func_id =
+				qdma_mbox_vf_func_id_get(qdma_dev->mbox.rx_data,
+					qdma_dev->is_vf);
+			vf_index = qdma_get_internal_vf_index(dev, vf_func_id);
+			if (vf_index != QDMA_FUNC_ID_INVALID)
+				qdma_dev->vfinfo[vf_index].func_id =
+					QDMA_FUNC_ID_INVALID;
+		}
+		qdma_dev->vf_online_count--;
+	}
+}
+
+static void *qdma_reset_task(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	if (!qdma_dev)
+		return NULL;
+
+	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL);
+
+	return NULL;
+}
+
+static void *qdma_remove_task(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	if (!qdma_dev)
+		return NULL;
+
+	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RMV,
+					      NULL);
+
+	return NULL;
+}
+
+static void qdma_mbox_process_msg_from_pf(void *arg)
+{
+	struct qdma_mbox_msg *mbox_msg_rsp = NULL;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	pthread_t thread;
+	pthread_attr_t tattr;
+	int rv;
+
+	if (!qdma_dev)
+		return;
+
+	mbox_msg_rsp = qdma_mbox_msg_alloc();
+	if (!mbox_msg_rsp)
+		return;
+
+	rv = qdma_mbox_vf_rcv_msg_handler(qdma_dev->mbox.rx_data,
+					  mbox_msg_rsp->raw_data);
+	if (rv) {
+		qdma_mbox_msg_send(dev, mbox_msg_rsp, 0);
+	} else {
+		qdma_mbox_msg_free(mbox_msg_rsp);
+		return;
+	}
+
+	if (rv == QDMA_MBOX_VF_RESET) {
+		qdma_dev->reset_state = RESET_STATE_RECV_PF_RESET_REQ;
+
+		rv = pthread_attr_init(&tattr);
+		if (rv)
+			PMD_DRV_LOG(ERR,
+				"Failed pthread_attr_init for PF reset\n");
+
+		rv = pthread_attr_setdetachstate(&tattr,
+					PTHREAD_CREATE_DETACHED);
+		if (rv)
+			PMD_DRV_LOG(ERR,
+				"Failed pthread_attr_setdetachstate for PF reset\n");
+
+		if (pthread_create(&thread, &tattr,
+				qdma_reset_task, (void *)dev)) {
+			PMD_DRV_LOG(ERR, "Could not create qdma reset"
+					" starter thread\n");
+		}
+	} else if (rv == QDMA_MBOX_PF_RESET_DONE) {
+		qdma_dev->reset_state = RESET_STATE_RECV_PF_RESET_DONE;
+	} else if (rv == QDMA_MBOX_PF_BYE) {
+		rv = pthread_attr_init(&tattr);
+		if (rv)
+			PMD_DRV_LOG(ERR,
+				"Failed pthread_attr_init for PF shutdown\n");
+
+		rv = pthread_attr_setdetachstate(&tattr,
+					PTHREAD_CREATE_DETACHED);
+		if (rv)
+			PMD_DRV_LOG(ERR,
+				"Failed pthread_attr_setdetachstate for PF shutdown\n");
+
+		if (pthread_create(&thread, &tattr,
+				qdma_remove_task, (void *)dev)) {
+			PMD_DRV_LOG(ERR,
+				"Could not create qdma remove"
+				" starter thread\n");
+		}
+	}
+}
+
+static void qdma_mbox_process_rsp_from_pf(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_list_head *entry = NULL;
+	struct qdma_list_head *tmp = NULL;
+
+	if (!qdma_dev)
+		return;
+
+	if (qdma_dev->is_vf && qdma_dev->func_id == 0) {
+		qdma_dev->func_id =
+			qdma_mbox_vf_func_id_get(qdma_dev->mbox.rx_data,
+					qdma_dev->is_vf);
+		PMD_DRV_LOG(INFO, "VF function ID: %d", qdma_dev->func_id);
+	}
+
+	rte_spinlock_lock(&qdma_dev->mbox.list_lock);
+	qdma_list_for_each_safe(entry, tmp,
+				&qdma_dev->mbox.rx_pend_list) {
+		struct qdma_mbox_msg *msg = QDMA_LIST_GET_DATA(entry);
+
+		if (qdma_mbox_is_msg_response(msg->raw_data,
+					      qdma_dev->mbox.rx_data)) {
+			/* copy response message back to tx buffer */
+			memcpy(msg->raw_data, qdma_dev->mbox.rx_data,
+			       MBOX_MSG_REG_MAX * sizeof(uint32_t));
+			msg->rsp_rcvd = 1;
+			qdma_list_del(entry);
+			break;
+		}
+	}
+	rte_spinlock_unlock(&qdma_dev->mbox.list_lock);
+}
+
+static void qdma_mbox_rcv_task(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	int rv;
+
+	if (!qdma_dev)
+		return;
+
+	do {
+		memset(qdma_dev->mbox.rx_data, 0,
+		       MBOX_MSG_REG_MAX * sizeof(uint32_t));
+		rv = qdma_mbox_rcv(dev, qdma_dev->is_vf,
+				   qdma_dev->mbox.rx_data);
+		if (rv < 0)
+			break;
+		if (qdma_dev->is_vf) {
+			qdma_mbox_process_msg_from_pf(arg);
+			qdma_mbox_process_rsp_from_pf(arg);
+		} else {
+			qdma_mbox_process_msg_from_vf(arg);
+		}
+
+	} while (1);
+
+	if (!qdma_dev->dev_cap.mailbox_intr)
+		rte_eal_alarm_set(MBOX_POLL_FRQ, qdma_mbox_rcv_task, arg);
+}
+
+static void qdma_mbox_send_task(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_list_head *entry = NULL;
+	struct qdma_list_head *tmp = NULL;
+	int rv;
+
+	rte_spinlock_lock(&qdma_dev->mbox.list_lock);
+	qdma_list_for_each_safe(entry, tmp, &qdma_dev->mbox.tx_todo_list) {
+		struct qdma_mbox_msg *msg = QDMA_LIST_GET_DATA(entry);
+
+		rv = qdma_mbox_send(dev, qdma_dev->is_vf, msg->raw_data);
+		if (rv < 0) {
+			msg->retry_cnt--;
+			if (!msg->retry_cnt) {
+				qdma_list_del(entry);
+				if (msg->rsp_wait == QDMA_MBOX_RSP_NO_WAIT)
+					qdma_mbox_msg_free(msg);
+			}
+		} else {
+			qdma_list_del(entry);
+			if (msg->rsp_wait == QDMA_MBOX_RSP_WAIT)
+				qdma_list_add_tail(entry,
+					   &qdma_dev->mbox.rx_pend_list);
+			else
+				qdma_mbox_msg_free(msg);
+		}
+	}
+	if (!qdma_list_is_empty(&qdma_dev->mbox.tx_todo_list))
+		rte_eal_alarm_set(MBOX_POLL_FRQ, qdma_mbox_send_task, arg);
+	rte_spinlock_unlock(&qdma_dev->mbox.list_lock);
+}
+
+int qdma_mbox_msg_send(struct rte_eth_dev *dev, struct qdma_mbox_msg *msg,
+		       unsigned int timeout_ms)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	if (!msg)
+		return -EINVAL;
+
+	msg->retry_cnt = timeout_ms ? ((timeout_ms / MBOX_POLL_FRQ) + 1) :
+			MBOX_SEND_RETRY_COUNT;
+	msg->rsp_wait = (!timeout_ms) ? QDMA_MBOX_RSP_NO_WAIT :
+			QDMA_MBOX_RSP_WAIT;
+	QDMA_LIST_SET_DATA(&msg->node, msg);
+
+	/* rsp_wait is set before queueing: the send task may run as
+	 * soon as the message is visible on tx_todo_list.
+	 */
+	rte_spinlock_lock(&qdma_dev->mbox.list_lock);
+	qdma_list_add_tail(&msg->node, &qdma_dev->mbox.tx_todo_list);
+	rte_spinlock_unlock(&qdma_dev->mbox.list_lock);
+
+	rte_eal_alarm_set(MBOX_POLL_FRQ, qdma_mbox_send_task, dev);
+
+	if (!timeout_ms)
+		return 0;
+
+	/* if code reached here, caller should free the buffer */
+	while (msg->retry_cnt && !msg->rsp_rcvd)
+		rte_delay_ms(1);
+
+	if (!msg->rsp_rcvd)
+		return  -EPIPE;
+
+	return 0;
+}
+
+void *qdma_mbox_msg_alloc(void)
+{
+	return rte_zmalloc(NULL, sizeof(struct qdma_mbox_msg), 0);
+}
+
+void qdma_mbox_msg_free(void *buffer)
+{
+	rte_free(buffer);
+}
+
+int qdma_mbox_init(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint32_t raw_data[MBOX_MSG_REG_MAX] = {0};
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+
+	if (!qdma_dev->is_vf) {
+		int i;
+
+		for (i = 0; i < pci_dev->max_vfs; i++)
+			qdma_mbox_rcv(dev, 0, raw_data);
+	} else {
+		qdma_mbox_rcv(dev, 1, raw_data);
+	}
+
+	qdma_mbox_hw_init(dev, qdma_dev->is_vf);
+	qdma_list_init_head(&qdma_dev->mbox.tx_todo_list);
+	qdma_list_init_head(&qdma_dev->mbox.rx_pend_list);
+	rte_spinlock_init(&qdma_dev->mbox.list_lock);
+
+	if (qdma_dev->dev_cap.mailbox_intr) {
+		/* Register interrupt call back handler */
+		rte_intr_callback_register(intr_handle,
+					qdma_mbox_rcv_task, dev);
+
+		/* enable uio/vfio intr/eventfd mapping */
+		rte_intr_enable(intr_handle);
+
+		/* enable qdma mailbox interrupt */
+		qdma_mbox_enable_interrupts((void *)dev, qdma_dev->is_vf);
+	} else {
+		rte_eal_alarm_set(MBOX_POLL_FRQ, qdma_mbox_rcv_task,
+				  (void *)dev);
+	}
+
+	return 0;
+}
+
+void qdma_mbox_uninit(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+
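+	/* Drain the mailbox before teardown: messages still on
+	 * tx_todo_list are owned by the alarm-driven send task, so
+	 * wait for the list to empty rather than cancelling with
+	 * messages in flight.
+	 */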
+	do {
+		rte_spinlock_lock(&qdma_dev->mbox.list_lock);
+		if (!qdma_list_is_empty(&qdma_dev->mbox.tx_todo_list)) {
+			rte_spinlock_unlock(&qdma_dev->mbox.list_lock);
+			rte_delay_ms(100);
+			continue;
+		}
+		rte_spinlock_unlock(&qdma_dev->mbox.list_lock);
+		break;
+	} while (1);
+
+	rte_eal_alarm_cancel(qdma_mbox_send_task, (void *)dev);
+	if (qdma_dev->dev_cap.mailbox_intr) {
+		/* Disable the mailbox interrupt */
+		qdma_mbox_disable_interrupts((void *)dev, qdma_dev->is_vf);
+
+		/* Disable uio intr before callback unregister */
+		rte_intr_disable(intr_handle);
+
+		rte_intr_callback_unregister(intr_handle,
+				qdma_mbox_rcv_task, dev);
+	} else {
+		rte_eal_alarm_cancel(qdma_mbox_rcv_task, (void *)dev);
+	}
+}
diff --git a/drivers/net/qdma/qdma_mbox.h b/drivers/net/qdma/qdma_mbox.h
new file mode 100644
index 0000000000..99d6149cec
--- /dev/null
+++ b/drivers/net/qdma/qdma_mbox.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#ifndef QDMA_DPDK_MBOX_H_
+#define QDMA_DPDK_MBOX_H_
+
+#include "qdma_list.h"
+#include "qdma_mbox_protocol.h"
+#include <rte_ethdev.h>
+
+#define MBOX_POLL_FRQ 1000
+#define MBOX_OP_RSP_TIMEOUT (10000 * MBOX_POLL_FRQ) /* 10 sec */
+#define MBOX_SEND_RETRY_COUNT (MBOX_OP_RSP_TIMEOUT / MBOX_POLL_FRQ)
+
+enum qdma_mbox_rsp_state {
+	QDMA_MBOX_RSP_NO_WAIT,
+	QDMA_MBOX_RSP_WAIT
+};
+
+struct qdma_dev_mbox {
+	struct qdma_list_head tx_todo_list;
+	struct qdma_list_head rx_pend_list;
+	rte_spinlock_t list_lock;
+	uint32_t rx_data[MBOX_MSG_REG_MAX];
+};
+
+struct qdma_mbox_msg {
+	uint8_t rsp_rcvd;
+	uint32_t retry_cnt;
+	enum qdma_mbox_rsp_state rsp_wait;
+	uint32_t raw_data[MBOX_MSG_REG_MAX];
+	struct qdma_list_head node;
+};
+
+int qdma_mbox_init(struct rte_eth_dev *dev);
+void qdma_mbox_uninit(struct rte_eth_dev *dev);
+void *qdma_mbox_msg_alloc(void);
+void qdma_mbox_msg_free(void *buffer);
+int qdma_mbox_msg_send(struct rte_eth_dev *dev, struct qdma_mbox_msg *msg,
+		       unsigned int timeout_ms);
+int qdma_dev_notify_qadd(struct rte_eth_dev *dev, uint32_t qidx_hw,
+						enum qdma_dev_q_type q_type);
+int qdma_dev_notify_qdel(struct rte_eth_dev *dev, uint32_t qidx_hw,
+						enum qdma_dev_q_type q_type);
+
+#endif /* QDMA_DPDK_MBOX_H_ */
-- 
2.36.1



* [RFC PATCH 22/29] net/qdma: mbox API adaptation in Rx/Tx init
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (20 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 21/29] net/qdma: add mailbox communication library Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 23/29] net/qdma: add support for VF interfaces Aman Kumar
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

Add mailbox initialization and handling to enable
queue setup for virtual functions.
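
A VF cannot touch the PF resource manager directly, so every queue
add/delete is relayed over the mailbox. A condensed sketch of the
Rx-side flow implemented below (err, dev and qid_hw as in
qdma_dev_rx_queue_setup(); error handling shortened):

    /* ST-mode Rx needs both a C2H queue and a CMPT queue; if the
     * second notification fails, the first one is rolled back.
     */
    err = qdma_dev_notify_qadd(dev, qid_hw, QDMA_DEV_Q_TYPE_C2H);
    if (err < 0)
        return -EINVAL;

    err = qdma_dev_notify_qadd(dev, qid_hw, QDMA_DEV_Q_TYPE_CMPT);
    if (err < 0) {
        qdma_dev_notify_qdel(dev, qid_hw, QDMA_DEV_Q_TYPE_C2H);
        return -EINVAL;
    }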

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h        |   1 +
 drivers/net/qdma/qdma_devops.c | 196 ++++++++++++++++++++++++++++++---
 drivers/net/qdma/qdma_ethdev.c |  47 ++++++++
 3 files changed, 226 insertions(+), 18 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 8fb64c21b0..20a1b72dd1 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -303,6 +303,7 @@ struct qdma_pci_dev {
 void qdma_dev_ops_init(struct rte_eth_dev *dev);
 void qdma_txq_pidx_update(void *arg);
 int qdma_pf_csr_read(struct rte_eth_dev *dev);
+int qdma_vf_csr_read(struct rte_eth_dev *dev);
 
 uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len);
 
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index 7f525773d0..e6803dd86f 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -22,6 +22,8 @@
 
 #include "qdma.h"
 #include "qdma_access_common.h"
+#include "qdma_mbox_protocol.h"
+#include "qdma_mbox.h"
 #include "qdma_reg_dump.h"
 #include "qdma_platform.h"
 #include "qdma_devops.h"
@@ -64,6 +66,39 @@ static void qdma_sort_c2h_cntr_th_values(struct qdma_pci_dev *qdma_dev)
 }
 #endif /* QDMA_LATENCY_OPTIMIZED */
 
+int qdma_vf_csr_read(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	int rv, i;
+	struct qdma_csr_info csr_info;
+
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_csr_read(qdma_dev->func_id, m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0)
+		goto free_msg;
+
+	rv = qdma_mbox_vf_csr_get(m->raw_data, &csr_info);
+	if (rv < 0)
+		goto free_msg;
+	for (i = 0; i < QDMA_NUM_RING_SIZES; i++) {
+		qdma_dev->g_ring_sz[i] = (uint32_t)csr_info.ringsz[i];
+		qdma_dev->g_c2h_buf_sz[i] = (uint32_t)csr_info.bufsz[i];
+		qdma_dev->g_c2h_timer_cnt[i] = (uint32_t)csr_info.timer_cnt[i];
+		qdma_dev->g_c2h_cnt_th[i] = (uint32_t)csr_info.cnt_thres[i];
+	}
+
+#ifdef QDMA_LATENCY_OPTIMIZED
+	/* sort once, after all counter thresholds are populated */
+	qdma_sort_c2h_cntr_th_values(qdma_dev);
+#endif /* QDMA_LATENCY_OPTIMIZED */
+
+free_msg:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
 int qdma_pf_csr_read(struct rte_eth_dev *dev)
 {
 	int ret = 0;
@@ -131,6 +166,44 @@ static int qdma_pf_fmap_prog(struct rte_eth_dev *dev)
 	return ret;
 }
 
+int qdma_dev_notify_qadd(struct rte_eth_dev *dev, uint32_t qidx_hw,
+						enum qdma_dev_q_type q_type)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m;
+	int rv = 0;
+
+	m = qdma_mbox_msg_alloc();
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_notify_qadd(qdma_dev->func_id, qidx_hw,
+						q_type, m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+int qdma_dev_notify_qdel(struct rte_eth_dev *dev, uint32_t qidx_hw,
+						enum qdma_dev_q_type q_type)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m;
+	int rv = 0;
+
+	m = qdma_mbox_msg_alloc();
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_notify_qdel(qdma_dev->func_id, qidx_hw,
+					q_type, m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
 uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len size)
 {
 	uint8_t ret;
@@ -243,9 +316,33 @@ int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 				return -EINVAL;
 			}
 		}
+	} else {
+		err = qdma_dev_notify_qadd(dev, rx_queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_C2H);
+		if (err < 0)
+			return -EINVAL;
+
+		if (qdma_dev->q_info[rx_queue_id].queue_mode ==
+					RTE_PMD_QDMA_STREAMING_MODE) {
+			err = qdma_dev_notify_qadd(dev, rx_queue_id +
+					qdma_dev->queue_base,
+					QDMA_DEV_Q_TYPE_CMPT);
+			if (err < 0) {
+				qdma_dev_notify_qdel(dev, rx_queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_C2H);
+				return -EINVAL;
+			}
+		}
 	}
+
 	if (!qdma_dev->init_q_range) {
-		if (!qdma_dev->is_vf) {
+		if (qdma_dev->is_vf) {
+			err = qdma_vf_csr_read(dev);
+			if (err < 0)
+				goto rx_setup_err;
+		} else {
 			err = qdma_pf_csr_read(dev);
 			if (err < 0)
 				goto rx_setup_err;
@@ -534,18 +631,27 @@ int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 						QDMA_DEV_Q_TYPE_C2H);
 
 		if (qdma_dev->q_info[rx_queue_id].queue_mode ==
 				RTE_PMD_QDMA_STREAMING_MODE)
 			qdma_dev_decrement_active_queue
 					(qdma_dev->dma_device_index,
 					qdma_dev->func_id,
 					QDMA_DEV_Q_TYPE_CMPT);
-	}
+	} else {
+		qdma_dev_notify_qdel(dev, rx_queue_id +
+				qdma_dev->queue_base, QDMA_DEV_Q_TYPE_C2H);
+
+		if (qdma_dev->q_info[rx_queue_id].queue_mode ==
+				RTE_PMD_QDMA_STREAMING_MODE)
+			qdma_dev_notify_qdel(dev, rx_queue_id +
+					qdma_dev->queue_base,
+					QDMA_DEV_Q_TYPE_CMPT);
+	}
 	if (rxq) {
 		if (rxq->rx_mz)
 			rte_memzone_free(rxq->rx_mz);
 		if (rxq->sw_ring)
 			rte_free(rxq->sw_ring);
 		rte_free(rxq);
 	}
 	return err;
 }
@@ -591,9 +697,21 @@ int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 				QDMA_DEV_Q_TYPE_H2C);
 		if (err != QDMA_SUCCESS)
 			return -EINVAL;
+	} else {
+		err = qdma_dev_notify_qadd(dev, tx_queue_id +
+					qdma_dev->queue_base,
+					QDMA_DEV_Q_TYPE_H2C);
+		if (err < 0)
+			return -EINVAL;
 	}
 	if (!qdma_dev->init_q_range) {
-		if (!qdma_dev->is_vf) {
+		if (qdma_dev->is_vf) {
+			err = qdma_vf_csr_read(dev);
+			if (err < 0) {
+				PMD_DRV_LOG(ERR, "CSR read failed\n");
+				goto tx_setup_err;
+			}
+		} else {
 			err = qdma_pf_csr_read(dev);
 			if (err < 0) {
 				PMD_DRV_LOG(ERR, "CSR read failed\n");
@@ -751,16 +869,28 @@ int qdma_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 
 tx_setup_err:
 	PMD_DRV_LOG(ERR, " Tx queue setup failed");
-	if (!qdma_dev->is_vf)
+	if (!qdma_dev->is_vf) {
 		qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
 						qdma_dev->func_id,
 						QDMA_DEV_Q_TYPE_H2C);
+	} else {
+		qdma_dev_notify_qdel(dev, tx_queue_id +
+				qdma_dev->queue_base, QDMA_DEV_Q_TYPE_H2C);
+	}
 	if (txq) {
 		if (txq->tx_mz)
 			rte_memzone_free(txq->tx_mz);
 		if (txq->sw_ring)
 			rte_free(txq->sw_ring);
 		rte_free(txq);
 	}
 	return err;
 }
@@ -802,11 +932,16 @@ void qdma_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
 		PMD_DRV_LOG(INFO, "Remove H2C queue: %d", txq->queue_id);
 		qdma_dev = txq->dev->data->dev_private;
 
-		if (!qdma_dev->is_vf)
+		if (!qdma_dev->is_vf) {
 			qdma_dev_decrement_active_queue
 					(qdma_dev->dma_device_index,
 					qdma_dev->func_id,
 					QDMA_DEV_Q_TYPE_H2C);
+		} else {
+			qdma_dev_notify_qdel(txq->dev, txq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_H2C);
+		}
 		if (txq->sw_ring)
 			rte_free(txq->sw_ring);
 		if (txq->tx_mz)
@@ -837,6 +972,15 @@ void qdma_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_id)
 					(qdma_dev->dma_device_index,
 					qdma_dev->func_id,
 					QDMA_DEV_Q_TYPE_CMPT);
+		} else {
+			qdma_dev_notify_qdel(rxq->dev, rxq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_C2H);
+
+			if (rxq->st_mode)
+				qdma_dev_notify_qdel(rxq->dev, rxq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_CMPT);
 		}
 
 		if (rxq->sw_ring)
@@ -1111,6 +1255,7 @@ int qdma_dev_reset(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct qdma_mbox_msg *m = NULL;
 	uint32_t vf_device_count = 0;
 	uint32_t i = 0;
 	int ret = 0;
@@ -1141,6 +1286,21 @@ int qdma_dev_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < pci_dev->max_vfs; i++) {
 		if (qdma_dev->vfinfo[i].func_id == QDMA_FUNC_ID_INVALID)
 			continue;
+
+		m = qdma_mbox_msg_alloc();
+		if (!m)
+			return -ENOMEM;
+		qdma_mbox_compose_pf_reset_done_message(m->raw_data, qdma_dev->func_id,
+						qdma_dev->vfinfo[i].func_id);
+		ret = qdma_mbox_msg_send(dev, m, 0);
+		if (ret < 0)
+			PMD_DRV_LOG(ERR, "Sending reset failed from PF:%d to VF:%d\n",
+					qdma_dev->func_id, qdma_dev->vfinfo[i].func_id);
+
+		/* Mark VFs with invalid function id mapping,
+		 * and this gets updated when VF comes online again
+		 */
+		qdma_dev->vfinfo[i].func_id = QDMA_FUNC_ID_INVALID;
 	}
 
 	/* Start waiting for a maximum of 60 secs to get all its VFs
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index cc1e8eee71..466a9e9284 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -25,6 +25,7 @@
 #include "qdma_version.h"
 #include "qdma_access_common.h"
 #include "qdma_access_export.h"
+#include "qdma_mbox.h"
 #include "qdma_devops.h"
 
 /* Poll for QDMA errors every 1 second */
@@ -546,6 +547,8 @@ int qdma_eth_dev_init(struct rte_eth_dev *dev)
 	}
 
 	pcie_perf_enable(pci_dev);
+	if (dma_priv->dev_cap.mailbox_en && pci_dev->max_vfs)
+		qdma_mbox_init(dev);
 
 	if (!dma_priv->reset_in_progress) {
 		num_vfs = pci_dev->max_vfs;
@@ -581,13 +584,57 @@ int qdma_eth_dev_init(struct rte_eth_dev *dev)
 int qdma_eth_dev_uninit(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct qdma_mbox_msg *m = NULL;
+	int i, rv;
 
 	/* only uninitialize in the primary process */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
+	if (qdma_dev->vf_online_count) {
+		for (i = 0; i < pci_dev->max_vfs; i++) {
+			if (qdma_dev->vfinfo[i].func_id == QDMA_FUNC_ID_INVALID)
+				continue;
+			m = qdma_mbox_msg_alloc();
+			if (!m)
+				return -ENOMEM;
 
+			if (!qdma_dev->reset_in_progress)
+				qdma_mbox_compose_pf_offline(m->raw_data,
+						qdma_dev->func_id,
+						qdma_dev->vfinfo[i].func_id);
+			else
+				qdma_mbox_compose_vf_reset_message(m->raw_data,
+						qdma_dev->func_id,
+						qdma_dev->vfinfo[i].func_id);
+			rv = qdma_mbox_msg_send(dev, m, 0);
+			if (rv < 0)
+				PMD_DRV_LOG(ERR, "Send bye failed from PF:%d to VF:%d\n",
+					qdma_dev->func_id,
+					qdma_dev->vfinfo[i].func_id);
+		}
+		PMD_DRV_LOG(INFO, "%s: Wait till all VFs shutdown for PF-%d(DEVFN)\n",
+							__func__, qdma_dev->func_id);
+		i = 0;
+		while (i < SHUTDOWN_TIMEOUT) {
+			if (!qdma_dev->vf_online_count) {
+				PMD_DRV_LOG(INFO, "%s: VFs shutdown completed for PF-%d(DEVFN)\n",
+						__func__, qdma_dev->func_id);
+				break;
+			}
+			rte_delay_ms(1);
+			i++;
+		}
+
+		if (i >= SHUTDOWN_TIMEOUT) {
+			PMD_DRV_LOG(ERR, "%s: Failed VFs shutdown for PF-%d(DEVFN)\n",
+					__func__, qdma_dev->func_id);
+		}
+	}
 	if (qdma_dev->dev_configured)
 		qdma_dev_close(dev);
+	if (qdma_dev->dev_cap.mailbox_en && pci_dev->max_vfs)
+		qdma_mbox_uninit(dev);
 
 	/* cancel pending polls */
 	if (qdma_dev->is_master)
-- 
2.36.1



* [RFC PATCH 23/29] net/qdma: add support for VF interfaces
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (21 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 22/29] net/qdma: mbox API adaptation in Rx/Tx init Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 24/29] net/qdma: add Rx/Tx queue setup routine for VF devices Aman Kumar
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch registers the supported virtual functions
and adds initialization/deinit routines for them.
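
From the application's point of view the VF probes as a regular ethdev
port. A hedged sketch of telling the two qdma drivers apart through the
generic ethdev info API (driver names as registered at the bottom of
this patch):

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>

    static void print_qdma_ports(void)
    {
        struct rte_eth_dev_info info;
        uint16_t port;

        RTE_ETH_FOREACH_DEV(port) {
            if (rte_eth_dev_info_get(port, &info) != 0)
                continue;
            if (strcmp(info.driver_name, "net_qdma") == 0)
                printf("port %u: qdma PF\n", port);
            else if (strcmp(info.driver_name, "net_qdma_vf") == 0)
                printf("port %u: qdma VF\n", port);
        }
    }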

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build      |   1 +
 drivers/net/qdma/qdma.h           |   9 +
 drivers/net/qdma/qdma_ethdev.c    |  22 +++
 drivers/net/qdma/qdma_vf_ethdev.c | 319 ++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_vf_ethdev.c

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index dd2478be6c..c453d556b6 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'qdma_mbox.c',
         'qdma_user.c',
         'qdma_rxtx.c',
+        'qdma_vf_ethdev.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
         'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index 20a1b72dd1..d9239f34a7 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -25,6 +25,7 @@
 
 #define QDMA_NUM_BARS          (6)
 #define DEFAULT_PF_CONFIG_BAR  (0)
+#define DEFAULT_VF_CONFIG_BAR  (0)
 #define BAR_ID_INVALID         (-1)
 
 #define QDMA_FUNC_ID_INVALID    0xFFFF
@@ -306,6 +307,10 @@ int qdma_pf_csr_read(struct rte_eth_dev *dev);
 int qdma_vf_csr_read(struct rte_eth_dev *dev);
 
 uint8_t qmda_get_desc_sz_idx(enum rte_pmd_qdma_bypass_desc_len);
+int qdma_vf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qid);
+int qdma_vf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid);
+int qdma_vf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid);
+int qdma_vf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid);
 
 int qdma_init_rx_queue(struct qdma_rx_queue *rxq);
 void qdma_reset_tx_queue(struct qdma_tx_queue *txq);
@@ -342,5 +347,9 @@ struct rte_memzone *qdma_zone_reserve(struct rte_eth_dev *dev,
 						socket_id, 0, QDMA_ALIGN);
 }
 
+bool is_qdma_supported(struct rte_eth_dev *dev);
+bool is_vf_device_supported(struct rte_eth_dev *dev);
+bool is_pf_device_supported(struct rte_eth_dev *dev);
+
 void qdma_check_errors(void *arg);
 #endif /* ifndef __QDMA_H__ */
diff --git a/drivers/net/qdma/qdma_ethdev.c b/drivers/net/qdma/qdma_ethdev.c
index 466a9e9284..a33d5efc5a 100644
--- a/drivers/net/qdma/qdma_ethdev.c
+++ b/drivers/net/qdma/qdma_ethdev.c
@@ -695,6 +695,28 @@ static struct rte_pci_driver rte_qdma_pmd = {
 	.remove = eth_qdma_pci_remove,
 };
 
+bool
+is_pf_device_supported(struct rte_eth_dev *dev)
+{
+	if (strcmp(dev->device->driver->name, rte_qdma_pmd.driver.name))
+		return false;
+
+	return true;
+}
+
+bool is_qdma_supported(struct rte_eth_dev *dev)
+{
+	bool is_pf, is_vf;
+
+	is_pf = is_pf_device_supported(dev);
+	is_vf = is_vf_device_supported(dev);
+
+	if (!is_pf && !is_vf)
+		return false;
+
+	return true;
+}
+
 RTE_PMD_REGISTER_PCI(net_qdma, rte_qdma_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_qdma, qdma_pci_id_tbl);
 RTE_LOG_REGISTER_DEFAULT(qdma_logtype_pmd, NOTICE);
diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
new file mode 100644
index 0000000000..ca3d21b688
--- /dev/null
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -0,0 +1,319 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <sys/mman.h>
+#include <sys/fcntl.h>
+#include <rte_memzone.h>
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_cycles.h>
+#include <rte_alarm.h>
+#include <unistd.h>
+#include <string.h>
+#include <linux/pci.h>
+
+#include "qdma.h"
+#include "qdma_version.h"
+#include "qdma_access_common.h"
+#include "qdma_mbox_protocol.h"
+#include "qdma_mbox.h"
+#include "qdma_devops.h"
+
+static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev);
+static int eth_qdma_vf_dev_uninit(struct rte_eth_dev *dev);
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static struct rte_pci_id qdma_vf_pci_id_tbl[] = {
+#define RTE_PCI_DEV_ID_DECL(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
+#ifndef PCI_VENDOR_ID_VVDN
+#define PCI_VENDOR_ID_VVDN 0x1f44
+#endif
+
+	/** Gen 3 VF */
+	RTE_PCI_DEV_ID_DECL(PCI_VENDOR_ID_VVDN, 0x0281)	/* VF on PF 0 */
+
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int qdma_ethdev_online(struct rte_eth_dev *dev)
+{
+	int rv = 0;
+	int qbase = -1;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_online(qdma_dev->func_id, 0, &qbase, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0)
+		PMD_DRV_LOG(ERR, "%x, send hello failed %d.\n",
+			    qdma_dev->func_id, rv);
+
+	rv = qdma_mbox_vf_dev_info_get(m->raw_data,
+				&qdma_dev->dev_cap,
+				&qdma_dev->dma_device_index);
+
+	if (rv < 0)
+		PMD_DRV_LOG(ERR, "%x, failed to get dev info %d.\n",
+				qdma_dev->func_id, rv);
+
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_ethdev_offline(struct rte_eth_dev *dev)
+{
+	int rv;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_offline(qdma_dev->func_id, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, 0);
+	if (rv < 0)
+		PMD_DRV_LOG(ERR, "%x, send bye failed %d.\n",
+			    qdma_dev->func_id, rv);
+
+	return rv;
+}
+
+/**
+ * DPDK callback to register a PCI device.
+ *
+ * This function creates an Ethernet device for each port of a given
+ * PCI device.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *dma_priv;
+	uint8_t *baseaddr;
+	int i, idx;
+	static bool once = true;
+	struct rte_pci_device *pci_dev;
+
+	/* sanity checks */
+	if (dev == NULL)
+		return -EINVAL;
+	if (dev->data == NULL)
+		return -EINVAL;
+	if (dev->data->dev_private == NULL)
+		return -EINVAL;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* for secondary processes, we don't initialise any further as primary
+	 * has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	if (once) {
+		RTE_LOG(INFO, PMD, "QDMA PMD VERSION: %s\n", QDMA_PMD_VERSION);
+		once = false;
+	}
+
+	/* allocate space for a single Ethernet MAC address */
+	dev->data->mac_addrs = rte_zmalloc("qdma_vf",
+			RTE_ETHER_ADDR_LEN * 1, 0);
+	if (dev->data->mac_addrs == NULL)
+		return -ENOMEM;
+
+	/* Assign a dummy Ethernet MAC address.
+	 * A real NIC device would read this from hardware.
+	 */
+	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
+		dev->data->mac_addrs[0].addr_bytes[i] = 0x15 + i;
+
+	/* Init system & device */
+	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
+	dma_priv->func_id = 0;
+	dma_priv->is_vf = 1;
+	dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_TIMER;
+
+	dma_priv->en_desc_prefetch = 0;
+	dma_priv->cmpt_desc_len = DEFAULT_QDMA_CMPT_DESC_LEN;
+	dma_priv->c2h_bypass_mode = RTE_PMD_QDMA_RX_BYPASS_NONE;
+	dma_priv->h2c_bypass_mode = 0;
+
+	dma_priv->config_bar_idx = DEFAULT_VF_CONFIG_BAR;
+	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
+	dma_priv->user_bar_idx = BAR_ID_INVALID;
+
+	if (qdma_check_kvargs(dev->device->devargs, dma_priv)) {
+		PMD_DRV_LOG(INFO, "devargs failed\n");
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	/* Store BAR address and length of Config BAR */
+	baseaddr = (uint8_t *)
+			pci_dev->mem_resource[dma_priv->config_bar_idx].addr;
+	dma_priv->bar_addr[dma_priv->config_bar_idx] = baseaddr;
+
+	/* Assigning QDMA access layer function pointers based on the HW design */
+	dma_priv->hw_access = rte_zmalloc("vf_hwaccess",
+			sizeof(struct qdma_hw_access), 0);
+	if (dma_priv->hw_access == NULL) {
+		rte_free(dev->data->mac_addrs);
+		return -ENOMEM;
+	}
+	idx = qdma_hw_access_init(dev, dma_priv->is_vf, dma_priv->hw_access);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	idx = qdma_get_hw_version(dev);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	idx = qdma_identify_bars(dev);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	/* Store BAR address and length of AXI Master Lite BAR(user bar) */
+	if (dma_priv->user_bar_idx >= 0) {
+		baseaddr = (uint8_t *)
+			     pci_dev->mem_resource[dma_priv->user_bar_idx].addr;
+		dma_priv->bar_addr[dma_priv->user_bar_idx] = baseaddr;
+	}
+
+	if (dma_priv->ip_type == QDMA_VERSAL_HARD_IP)
+		dma_priv->dev_cap.mailbox_intr = 0;
+	else
+		dma_priv->dev_cap.mailbox_intr = 1;
+
+	qdma_mbox_init(dev);
+	idx = qdma_ethdev_online(dev);
+	if (idx < 0) {
+		rte_free(dma_priv->hw_access);
+		rte_free(dev->data->mac_addrs);
+		return -EINVAL;
+	}
+
+	if (dma_priv->dev_cap.cmpt_trig_count_timer) {
+		/* Setting default Mode to
+		 * RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT
+		 */
+		dma_priv->trigger_mode =
+				RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT;
+	} else {
+		/* Setting default Mode to RTE_PMD_QDMA_TRIG_MODE_USER_TIMER */
+		dma_priv->trigger_mode = RTE_PMD_QDMA_TRIG_MODE_USER_TIMER;
+	}
+	if (dma_priv->trigger_mode == RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT)
+		dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_COUNT_TIMER;
+
+	dma_priv->reset_state = RESET_STATE_IDLE;
+
+	PMD_DRV_LOG(INFO, "VF-%d(DEVFN) QDMA device driver probe:",
+				dma_priv->func_id);
+
+	return 0;
+}
+
+/**
+ * DPDK callback to deregister PCI device.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+static int eth_qdma_vf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	/* only uninitialize in the primary process */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	qdma_ethdev_offline(dev);
+
+	if (qdma_dev->reset_state != RESET_STATE_RECV_PF_RESET_REQ)
+		qdma_mbox_uninit(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	dev->data->nb_rx_queues = 0;
+	dev->data->nb_tx_queues = 0;
+
+	if (dev->data->mac_addrs != NULL) {
+		rte_free(dev->data->mac_addrs);
+		dev->data->mac_addrs = NULL;
+	}
+
+	if (qdma_dev->q_info != NULL) {
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_qdma_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+					struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+						sizeof(struct qdma_pci_dev),
+						eth_qdma_vf_dev_init);
+}
+
+/* Detach a ethdev interface */
+static int eth_qdma_vf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, eth_qdma_vf_dev_uninit);
+}
+
+static struct rte_pci_driver rte_qdma_vf_pmd = {
+	.id_table = qdma_vf_pci_id_tbl,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = eth_qdma_vf_pci_probe,
+	.remove = eth_qdma_vf_pci_remove,
+};
+
+bool
+is_vf_device_supported(struct rte_eth_dev *dev)
+{
+	if (strcmp(dev->device->driver->name, rte_qdma_vf_pmd.driver.name))
+		return false;
+
+	return true;
+}
+
+RTE_PMD_REGISTER_PCI(net_qdma_vf, rte_qdma_vf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_qdma_vf, qdma_vf_pci_id_tbl);
-- 
2.36.1



* [RFC PATCH 24/29] net/qdma: add Rx/Tx queue setup routine for VF devices
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (22 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 23/29] net/qdma: add support for VF interfaces Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 25/29] net/qdma: add basic PMD ops for VF Aman Kumar
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

The Rx/Tx queue initialization routine for VFs is the same
as that of the physical function. Define a separate ops
structure for VF devices.
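
Because the setup callbacks are shared with the PF, a VF port is
configured from the application with generic ethdev calls only. A
minimal sketch (queue and pool sizes are arbitrary):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static int setup_vf_port(uint16_t port, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf = {0};
        int ret;

        ret = rte_eth_dev_configure(port, 1, 1, &conf);
        if (ret < 0)
            return ret;

        ret = rte_eth_rx_queue_setup(port, 0, 1024,
                rte_eth_dev_socket_id(port), NULL, mp);
        if (ret < 0)
            return ret;

        return rte_eth_tx_queue_setup(port, 0, 1024,
                rte_eth_dev_socket_id(port), NULL);
    }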

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_vf_ethdev.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
index ca3d21b688..28d34560c1 100644
--- a/drivers/net/qdma/qdma_vf_ethdev.c
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -94,6 +94,13 @@ static int qdma_ethdev_offline(struct rte_eth_dev *dev)
 	return rv;
 }
 
+static struct eth_dev_ops qdma_vf_eth_dev_ops = {
+	.rx_queue_setup       = qdma_dev_rx_queue_setup,
+	.tx_queue_setup       = qdma_dev_tx_queue_setup,
+	.rx_queue_release     = qdma_dev_rx_queue_release,
+	.tx_queue_release     = qdma_dev_tx_queue_release,
+};
+
 /**
  * DPDK callback to register a PCI device.
  *
@@ -129,8 +136,10 @@ static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev)
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work.
 	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		dev->dev_ops = &qdma_vf_eth_dev_ops;
 		return 0;
+	}
 
 	if (once) {
 		RTE_LOG(INFO, PMD, "QDMA PMD VERSION: %s\n", QDMA_PMD_VERSION);
@@ -160,6 +169,8 @@ static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev)
 	dma_priv->c2h_bypass_mode = RTE_PMD_QDMA_RX_BYPASS_NONE;
 	dma_priv->h2c_bypass_mode = 0;
 
+	dev->dev_ops = &qdma_vf_eth_dev_ops;
+
 	dma_priv->config_bar_idx = DEFAULT_VF_CONFIG_BAR;
 	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
 	dma_priv->user_bar_idx = BAR_ID_INVALID;
-- 
2.36.1



* [RFC PATCH 25/29] net/qdma: add basic PMD ops for VF
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (23 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 24/29] net/qdma: add Rx/Tx queue setup routine for VF devices Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 26/29] net/qdma: add datapath burst API " Aman Kumar
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch adds dev_configure and queue start/stop
ops for VF devices.
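
With these ops wired up, per-queue start/stop flows through the generic
ethdev API, including deferred start. An illustrative fragment, reusing
the port and mempool from the previous patch's example:

    struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };

    /* queue 0 is skipped by rte_eth_dev_start() ... */
    rte_eth_rx_queue_setup(port, 0, 1024,
            rte_eth_dev_socket_id(port), &rxconf, mp);
    rte_eth_dev_start(port);

    /* ... and is started/stopped explicitly instead */
    rte_eth_dev_rx_queue_start(port, 0);
    rte_eth_dev_rx_queue_stop(port, 0);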

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_vf_ethdev.c | 641 ++++++++++++++++++++++++++++++
 1 file changed, 641 insertions(+)

diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
index 28d34560c1..5a54c00893 100644
--- a/drivers/net/qdma/qdma_vf_ethdev.c
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -94,11 +94,652 @@ static int qdma_ethdev_offline(struct rte_eth_dev *dev)
 	return rv;
 }
 
+static int qdma_vf_set_qrange(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m;
+	int rv = 0;
+
+	m = qdma_mbox_msg_alloc();
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_fmap_prog(qdma_dev->func_id,
+					(uint16_t)qdma_dev->qsets_en,
+					(int)qdma_dev->queue_base,
+					m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		if (rv != -ENODEV)
+			PMD_DRV_LOG(ERR, "%x set q range (fmap) failed %d.\n",
+				    qdma_dev->func_id, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_set_qmax(struct rte_eth_dev *dev, int *qmax, int *qbase)
+{
+	struct qdma_mbox_msg *m;
+	int rv = 0;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	m = qdma_mbox_msg_alloc();
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_vf_qreq(qdma_dev->func_id, (uint16_t)*qmax & 0xFFFF,
+				  *qbase, m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		PMD_DRV_LOG(ERR, "%x set q max failed %d.\n",
+			qdma_dev->func_id, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_qinfo_get(m->raw_data, qbase, (uint16_t *)qmax);
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_rxq_context_setup(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t qid_hw;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	struct mbox_descq_conf descq_conf;
+	int rv, bypass_desc_sz_idx;
+	struct qdma_rx_queue *rxq;
+	uint8_t cmpt_desc_fmt;
+	enum mbox_cmpt_ctxt_type cmpt_ctxt_type = QDMA_MBOX_CMPT_CTXT_NONE;
+
+	if (!m)
+		return -ENOMEM;
+	memset(&descq_conf, 0, sizeof(struct mbox_descq_conf));
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+	qid_hw = qdma_dev->queue_base + rxq->queue_id;
+
+	switch (rxq->cmpt_desc_len) {
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_8B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_16B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_16B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_32B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_32B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_64B:
+		if (!qdma_dev->dev_cap.cmpt_desc_64b) {
+			PMD_DRV_LOG(ERR, "VF-%d(DEVFN) 64B completion "
+				"entry size not supported\n",
+				qdma_dev->func_id);
+			rv = -1;
+			goto err_out;
+		}
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_64B;
+		break;
+	default:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	}
+	descq_conf.ring_bs_addr = rxq->rx_mz->iova;
+	descq_conf.en_bypass = rxq->en_bypass;
+	descq_conf.irq_arm = 0;
+	descq_conf.at = 0;
+	descq_conf.wbk_en = 1;
+	descq_conf.irq_en = 0;
+
+	bypass_desc_sz_idx = qmda_get_desc_sz_idx(rxq->bypass_desc_sz);
+
+	if (!rxq->st_mode) {/* mm c2h */
+		descq_conf.desc_sz = SW_DESC_CNTXT_MEMORY_MAP_DMA;
+		descq_conf.wbi_intvl_en = 1;
+		descq_conf.wbi_chk = 1;
+	} else {/* st c2h */
+		descq_conf.desc_sz = SW_DESC_CNTXT_C2H_STREAM_DMA;
+		descq_conf.forced_en = 1;
+		descq_conf.cmpt_ring_bs_addr = rxq->rx_cmpt_mz->iova;
+		descq_conf.cmpt_desc_sz = cmpt_desc_fmt;
+		descq_conf.triggermode = rxq->triggermode;
+
+		descq_conf.cmpt_color = CMPT_DEFAULT_COLOR_BIT;
+		descq_conf.cmpt_full_upd = 0;
+		descq_conf.cnt_thres =
+				qdma_dev->g_c2h_cnt_th[rxq->threshidx];
+		descq_conf.timer_thres =
+				qdma_dev->g_c2h_timer_cnt[rxq->timeridx];
+		descq_conf.cmpt_ringsz =
+				qdma_dev->g_ring_sz[rxq->cmpt_ringszidx] - 1;
+		descq_conf.bufsz = qdma_dev->g_c2h_buf_sz[rxq->buffszidx];
+		descq_conf.cmpt_int_en = 0;
+		descq_conf.cmpl_stat_en = rxq->st_mode;
+		descq_conf.pfch_en = rxq->en_prefetch;
+		descq_conf.en_bypass_prefetch = rxq->en_bypass_prefetch;
+		if (qdma_dev->dev_cap.cmpt_ovf_chk_dis)
+			descq_conf.dis_overflow_check = rxq->dis_overflow_check;
+
+		cmpt_ctxt_type = QDMA_MBOX_CMPT_WITH_ST;
+	}
+
+	if (rxq->en_bypass && rxq->bypass_desc_sz != 0)
+		descq_conf.desc_sz = bypass_desc_sz_idx;
+
+	descq_conf.func_id = rxq->func_id;
+	descq_conf.ringsz = qdma_dev->g_ring_sz[rxq->ringszidx] - 1;
+
+	qdma_mbox_compose_vf_qctxt_write(rxq->func_id, qid_hw, rxq->st_mode, 1,
+					 cmpt_ctxt_type,
+					 &descq_conf, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		PMD_DRV_LOG(ERR, "%x, qid_hw 0x%x, mbox failed %d.\n",
+			qdma_dev->func_id, qid_hw, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_txq_context_setup(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	struct mbox_descq_conf descq_conf;
+	int rv, bypass_desc_sz_idx;
+	struct qdma_tx_queue *txq;
+	uint32_t qid_hw;
+
+	if (!m)
+		return -ENOMEM;
+	memset(&descq_conf, 0, sizeof(struct mbox_descq_conf));
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+	qid_hw = qdma_dev->queue_base + txq->queue_id;
+	descq_conf.ring_bs_addr = txq->tx_mz->iova;
+	descq_conf.en_bypass = txq->en_bypass;
+	descq_conf.wbi_intvl_en = 1;
+	descq_conf.wbi_chk = 1;
+	descq_conf.wbk_en = 1;
+
+	bypass_desc_sz_idx = qmda_get_desc_sz_idx(txq->bypass_desc_sz);
+
+	if (!txq->st_mode) /* mm h2c */
+		descq_conf.desc_sz = SW_DESC_CNTXT_MEMORY_MAP_DMA;
+	else /* st h2c */
+		descq_conf.desc_sz = SW_DESC_CNTXT_H2C_STREAM_DMA;
+	descq_conf.func_id = txq->func_id;
+	descq_conf.ringsz = qdma_dev->g_ring_sz[txq->ringszidx] - 1;
+
+	if (txq->en_bypass && txq->bypass_desc_sz != 0)
+		descq_conf.desc_sz = bypass_desc_sz_idx;
+
+	qdma_mbox_compose_vf_qctxt_write(txq->func_id, qid_hw, txq->st_mode, 0,
+					 QDMA_MBOX_CMPT_CTXT_NONE,
+					 &descq_conf, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		PMD_DRV_LOG(ERR, "%x, qid_hw 0x%x, mbox failed %d.\n",
+			qdma_dev->func_id, qid_hw, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_queue_context_invalidate(struct rte_eth_dev *dev, uint32_t qid,
+				  bool st, bool c2h)
+{
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t qid_hw;
+	int rv;
+	enum mbox_cmpt_ctxt_type cmpt_ctxt_type = QDMA_MBOX_CMPT_CTXT_NONE;
+
+	if (!m)
+		return -ENOMEM;
+
+	if (st && c2h)
+		cmpt_ctxt_type = QDMA_MBOX_CMPT_WITH_ST;
+	qid_hw = qdma_dev->queue_base + qid;
+	qdma_mbox_compose_vf_qctxt_invalidate(qdma_dev->func_id, qid_hw,
+					      st, c2h, cmpt_ctxt_type,
+					      m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		if (rv != -ENODEV)
+			PMD_DRV_LOG(INFO, "%x, qid_hw 0x%x mbox failed %d.\n",
+				    qdma_dev->func_id, qid_hw, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_vf_dev_link_update(struct rte_eth_dev *dev,
+					__rte_unused int wait_to_complete)
+{
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+
+	PMD_DRV_LOG(INFO, "Link update done\n");
+
+	return 0;
+}
+
+static int qdma_vf_dev_infos_get(struct rte_eth_dev *dev,
+					struct rte_eth_dev_info *dev_info)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	dev_info->max_rx_queues = qdma_dev->dev_cap.num_qs;
+	dev_info->max_tx_queues = qdma_dev->dev_cap.num_qs;
+
+	dev_info->min_rx_bufsize = QDMA_MIN_RXBUFF_SIZE;
+	dev_info->max_rx_pktlen = DMA_BRAM_SIZE;
+	dev_info->max_mac_addrs = 1;
+
+	return 0;
+}
+
+int qdma_vf_dev_close(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq;
+	struct qdma_rx_queue *rxq;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t qid;
+
+	PMD_DRV_LOG(INFO, "Closing all queues\n");
+
+	/* iterate over rx queues */
+	for (qid = 0; qid < dev->data->nb_rx_queues; ++qid) {
+		rxq = dev->data->rx_queues[qid];
+		if (rxq != NULL) {
+			PMD_DRV_LOG(INFO, "VF-%d(DEVFN) Remove C2H queue: %d",
+							qdma_dev->func_id, qid);
+
+			qdma_dev_notify_qdel(rxq->dev, rxq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_C2H);
+
+			if (rxq->st_mode)
+				qdma_dev_notify_qdel(rxq->dev, rxq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_CMPT);
+
+			if (rxq->sw_ring)
+				rte_free(rxq->sw_ring);
+
+			if (rxq->st_mode) { /* if ST-mode */
+				if (rxq->rx_cmpt_mz)
+					rte_memzone_free(rxq->rx_cmpt_mz);
+			}
+
+			if (rxq->rx_mz)
+				rte_memzone_free(rxq->rx_mz);
+			rte_free(rxq);
+			PMD_DRV_LOG(INFO, "VF-%d(DEVFN) C2H queue %d removed",
+							qdma_dev->func_id, qid);
+		}
+	}
+
+	/* iterate over tx queues */
+	for (qid = 0; qid < dev->data->nb_tx_queues; ++qid) {
+		txq = dev->data->tx_queues[qid];
+		if (txq != NULL) {
+			PMD_DRV_LOG(INFO, "VF-%d(DEVFN) Remove H2C queue: %d",
+							qdma_dev->func_id, qid);
+
+			qdma_dev_notify_qdel(txq->dev, txq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_H2C);
+			if (txq->sw_ring)
+				rte_free(txq->sw_ring);
+			if (txq->tx_mz)
+				rte_memzone_free(txq->tx_mz);
+			rte_free(txq);
+			PMD_DRV_LOG(INFO, "VF-%d(DEVFN) H2C queue %d removed",
+							qdma_dev->func_id, qid);
+		}
+	}
+	if (qdma_dev->dev_cap.mm_cmpt_en) {
+		/* iterate over cmpt queues */
+		for (qid = 0; qid < qdma_dev->qsets_en; ++qid) {
+			cmptq = qdma_dev->cmpt_queues[qid];
+			if (cmptq != NULL) {
+				PMD_DRV_LOG(INFO, "VF-%d(DEVFN) Remove CMPT queue: %d",
+						qdma_dev->func_id, qid);
+				qdma_dev_notify_qdel(cmptq->dev,
+						cmptq->queue_id +
+						qdma_dev->queue_base,
+						QDMA_DEV_Q_TYPE_CMPT);
+				if (cmptq->cmpt_mz)
+					rte_memzone_free(cmptq->cmpt_mz);
+				rte_free(cmptq);
+				PMD_DRV_LOG(INFO, "VF-%d(DEVFN) CMPT queue %d removed",
+						qdma_dev->func_id, qid);
+			}
+		}
+
+		if (qdma_dev->cmpt_queues != NULL) {
+			rte_free(qdma_dev->cmpt_queues);
+			qdma_dev->cmpt_queues = NULL;
+		}
+	}
+
+	qdma_dev->qsets_en = 0;
+	qdma_set_qmax(dev, (int *)&qdma_dev->qsets_en,
+		      (int *)&qdma_dev->queue_base);
+	qdma_dev->init_q_range = 0;
+	rte_free(qdma_dev->q_info);
+	qdma_dev->q_info = NULL;
+	qdma_dev->dev_configured = 0;
+
+	return 0;
+}
+
+static int qdma_vf_dev_reset(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t i = 0;
+	int ret;
+
+	PMD_DRV_LOG(INFO, "%s: Reset VF-%d(DEVFN)",
+			__func__, qdma_dev->func_id);
+
+	ret = eth_qdma_vf_dev_uninit(dev);
+	if (ret)
+		return ret;
+
+	if (qdma_dev->reset_state == RESET_STATE_IDLE) {
+		ret = eth_qdma_vf_dev_init(dev);
+	} else {
+		/* The VF keeps the mailbox running and waits up to
+		 * 60 secs for a "PF_RESET_DONE" mailbox message
+		 * from the PF.
+		 */
+		PMD_DRV_LOG(INFO,
+			"%s: Waiting for reset done message from PF",
+			__func__);
+		while (i < RESET_TIMEOUT) {
+			if (qdma_dev->reset_state ==
+					RESET_STATE_RECV_PF_RESET_DONE) {
+				qdma_mbox_uninit(dev);
+
+				ret = eth_qdma_vf_dev_init(dev);
+				return ret;
+			}
+
+			rte_delay_ms(1);
+			i++;
+		}
+	}
+
+	if (i >= RESET_TIMEOUT) {
+		PMD_DRV_LOG(ERR, "%s: Reset failed for VF-%d(DEVFN)\n",
+			__func__, qdma_dev->func_id);
+		return -ETIMEDOUT;
+	}
+
+	return ret;
+}
+
+static int qdma_vf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	int32_t ret = 0, queue_base = -1;
+	uint32_t qid = 0;
+
+	/* FMAP configuration */
+	qdma_dev->qsets_en = RTE_MAX(dev->data->nb_rx_queues,
+					dev->data->nb_tx_queues);
+
+	if (qdma_dev->qsets_en > qdma_dev->dev_cap.num_qs) {
+		PMD_DRV_LOG(INFO, "VF-%d(DEVFN) Error: Number of Queues to be "
+				"configured are greater than the queues "
+				"supported by the hardware\n",
+				qdma_dev->func_id);
+		qdma_dev->qsets_en = 0;
+		return -1;
+	}
+
+	/* Request queue base from the resource manager */
+	ret = qdma_set_qmax(dev, (int *)&qdma_dev->qsets_en,
+			    (int *)&queue_base);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR, "VF-%d(DEVFN) queue allocation failed: %d\n",
+			qdma_dev->func_id, ret);
+		return -1;
+	}
+	qdma_dev->queue_base = queue_base;
+
+	qdma_dev->q_info = rte_zmalloc("qinfo", sizeof(struct queue_info) *
+						qdma_dev->qsets_en, 0);
+	if (qdma_dev->q_info == NULL) {
+		PMD_DRV_LOG(INFO, "VF-%d fail to allocate queue info memory\n",
+						qdma_dev->func_id);
+		return -ENOMEM;
+	}
+
+	/* Reserve memory for cmptq ring pointers.
+	 * The number of completion queues is at most the maximum
+	 * of the Rx and Tx queue counts.
+	 */
+	qdma_dev->cmpt_queues = rte_zmalloc("cmpt_queues",
+					    sizeof(qdma_dev->cmpt_queues[0]) *
+						qdma_dev->qsets_en,
+						RTE_CACHE_LINE_SIZE);
+	if (qdma_dev->cmpt_queues == NULL) {
+		PMD_DRV_LOG(ERR, "VF-%d(DEVFN) cmpt ring pointers memory "
+				"allocation failed:\n", qdma_dev->func_id);
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+		return -ENOMEM;
+	}
+
+	/* Initialize queue_modes to all 1's (i.e. streaming) */
+	for (qid = 0 ; qid < qdma_dev->qsets_en; qid++)
+		qdma_dev->q_info[qid].queue_mode = RTE_PMD_QDMA_STREAMING_MODE;
+
+	for (qid = 0 ; qid < dev->data->nb_rx_queues; qid++) {
+		qdma_dev->q_info[qid].cmpt_desc_sz = qdma_dev->cmpt_desc_len;
+		qdma_dev->q_info[qid].rx_bypass_mode =
+						qdma_dev->c2h_bypass_mode;
+		qdma_dev->q_info[qid].trigger_mode = qdma_dev->trigger_mode;
+		qdma_dev->q_info[qid].timer_count =
+					qdma_dev->timer_count;
+	}
+
+	for (qid = 0 ; qid < dev->data->nb_tx_queues; qid++)
+		qdma_dev->q_info[qid].tx_bypass_mode =
+						qdma_dev->h2c_bypass_mode;
+
+	ret = qdma_vf_set_qrange(dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "FMAP programming failed\n");
+		rte_free(qdma_dev->q_info);
+		qdma_dev->q_info = NULL;
+		rte_free(qdma_dev->cmpt_queues);
+		qdma_dev->cmpt_queues = NULL;
+		return ret;
+	}
+
+	qdma_dev->dev_configured = 1;
+
+	return ret;
+}
+
+int qdma_vf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_tx_queue *txq;
+
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+	qdma_reset_tx_queue(txq);
+
+	if (qdma_txq_context_setup(dev, qid) < 0)
+		return -1;
+
+	txq->q_pidx_info.pidx = 0;
+	qdma_dev->hw_access->qdma_queue_pidx_update(dev, qdma_dev->is_vf,
+			qid, 0, &txq->q_pidx_info);
+
+	dev->data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+	txq->status = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int qdma_vf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_rx_queue *rxq;
+	int err;
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+	qdma_reset_rx_queue(rxq);
+
+	err = qdma_init_rx_queue(rxq);
+	if (err != 0)
+		return err;
+	if (qdma_rxq_context_setup(dev, qid) < 0) {
+		PMD_DRV_LOG(ERR, "context_setup for qid - %u failed", qid);
+
+		return -1;
+	}
+
+	if (rxq->st_mode) {
+		rxq->cmpt_cidx_info.counter_idx = rxq->threshidx;
+		rxq->cmpt_cidx_info.timer_idx = rxq->timeridx;
+		rxq->cmpt_cidx_info.trig_mode = rxq->triggermode;
+		rxq->cmpt_cidx_info.wrb_en = 1;
+		qdma_dev->hw_access->qdma_queue_cmpt_cidx_update(dev, 1,
+				qid, &rxq->cmpt_cidx_info);
+
+		rxq->q_pidx_info.pidx = (rxq->nb_rx_desc - 2);
+		qdma_dev->hw_access->qdma_queue_pidx_update(dev, 1,
+				qid, 1, &rxq->q_pidx_info);
+	}
+
+	dev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+	rxq->status = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+int qdma_vf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_rx_queue *rxq;
+	int i = 0, cnt = 0;
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+	rxq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	/* Wait for the queue to receive all in-flight packets. */
+	if (rxq->st_mode) {  /* ST-mode */
+		while (rxq->wb_status->pidx != rxq->cmpt_cidx_info.wrb_cidx) {
+			usleep(10);
+			if (cnt++ > 10000)
+				break;
+		}
+	} else { /* MM mode */
+		while (rxq->wb_status->cidx != rxq->q_pidx_info.pidx) {
+			usleep(10);
+			if (cnt++ > 10000)
+				break;
+		}
+	}
+
+	qdma_queue_context_invalidate(dev, qid, rxq->st_mode, 1);
+
+	if (rxq->st_mode) {  /* ST-mode */
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: queue id = %d, mbuf_avail_count = "
+				"%d, mbuf_in_use_count = %d",
+				__func__, __LINE__, rxq->queue_id,
+				rte_mempool_avail_count(rxq->mb_pool),
+				rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+
+		for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+			rte_pktmbuf_free(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+#ifdef DUMP_MEMPOOL_USAGE_STATS
+		PMD_DRV_LOG(INFO, "%s(): %d: queue id = %d, mbuf_avail_count = "
+				"%d, mbuf_in_use_count = %d",
+				__func__, __LINE__, rxq->queue_id,
+				rte_mempool_avail_count(rxq->mb_pool),
+				rte_mempool_in_use_count(rxq->mb_pool));
+#endif /* DUMP_MEMPOOL_USAGE_STATS */
+	}
+
+	qdma_reset_rx_queue(rxq);
+	dev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
+
+int qdma_vf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_tx_queue *txq;
+	int i = 0, cnt = 0;
+
+	txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+
+	txq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+	/* Wait for TXQ to send out all packets. */
+	while (txq->wb_status->cidx != txq->q_pidx_info.pidx) {
+		usleep(10);
+		if (cnt++ > 10000)
+			break;
+	}
+
+	qdma_queue_context_invalidate(dev, qid, txq->st_mode, 0);
+
+	/* Free mbufs if any pending in the ring */
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		rte_pktmbuf_free(txq->sw_ring[i]);
+		txq->sw_ring[i] = NULL;
+	}
+	qdma_reset_tx_queue(txq);
+	dev->data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static struct eth_dev_ops qdma_vf_eth_dev_ops = {
+	.dev_configure        = qdma_vf_dev_configure,
+	.dev_infos_get        = qdma_vf_dev_infos_get,
+	.dev_close            = qdma_vf_dev_close,
+	.dev_reset            = qdma_vf_dev_reset,
+	.link_update          = qdma_vf_dev_link_update,
 	.rx_queue_setup       = qdma_dev_rx_queue_setup,
 	.tx_queue_setup       = qdma_dev_tx_queue_setup,
 	.rx_queue_release     = qdma_dev_rx_queue_release,
 	.tx_queue_release     = qdma_dev_tx_queue_release,
+	.rx_queue_start       = qdma_vf_dev_rx_queue_start,
+	.rx_queue_stop        = qdma_vf_dev_rx_queue_stop,
+	.tx_queue_start       = qdma_vf_dev_tx_queue_start,
+	.tx_queue_stop        = qdma_vf_dev_tx_queue_stop,
 };
 
 /**
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 26/29] net/qdma: add datapath burst API for VF
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (24 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 25/29] net/qdma: add basic PMD ops for VF Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 27/29] net/qdma: add device specific APIs for export Aman Kumar
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements routines for the dev_start and
dev_stop PMD ops and adds support for the Rx/Tx
datapath burst APIs for VF devices.
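
A minimal usage sketch of these ops (illustration only, not part
of the patch; port and queue ids are hypothetical and error
handling is trimmed): rte_eth_dev_start()/rte_eth_dev_stop() land
in the new qdma_vf_dev_start()/qdma_vf_dev_stop() ops, and
rte_eth_rx_burst() dispatches to qdma_recv_pkts().

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* Start a VF port, poll one Rx queue once, then stop the port. */
  static int poll_vf_once(uint16_t port_id, uint16_t qid)
  {
  	struct rte_mbuf *pkts[32];
  	uint16_t nb;
  	int ret;

  	ret = rte_eth_dev_start(port_id);
  	if (ret != 0)
  		return ret;

  	nb = rte_eth_rx_burst(port_id, qid, pkts, RTE_DIM(pkts));
  	rte_pktmbuf_free_bulk(pkts, nb);

  	return rte_eth_dev_stop(port_id);
  }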

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma_vf_ethdev.c | 61 +++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
index 5a54c00893..cbae4c9716 100644
--- a/drivers/net/qdma/qdma_vf_ethdev.c
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -334,6 +334,39 @@ static int qdma_queue_context_invalidate(struct rte_eth_dev *dev, uint32_t qid,
 	return rv;
 }
 
+static int qdma_vf_dev_start(struct rte_eth_dev *dev)
+{
+	struct qdma_tx_queue *txq;
+	struct qdma_rx_queue *rxq;
+	uint32_t qid;
+	int err;
+
+	PMD_DRV_LOG(INFO, "qdma_dev_start: Starting\n");
+	/* prepare descriptor rings for operation */
+	for (qid = 0; qid < dev->data->nb_tx_queues; qid++) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+
+		/* Deferred Queues should not start with dev_start */
+		if (!txq->tx_deferred_start) {
+			err = qdma_vf_dev_tx_queue_start(dev, qid);
+			if (err != 0)
+				return err;
+		}
+	}
+
+	for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+		/* Deferred Queues should not start with dev_start */
+		if (!rxq->rx_deferred_start) {
+			err = qdma_vf_dev_rx_queue_start(dev, qid);
+			if (err != 0)
+				return err;
+		}
+	}
+	return 0;
+}
+
 static int qdma_vf_dev_link_update(struct rte_eth_dev *dev,
 					__rte_unused int wait_to_complete)
 {
@@ -361,6 +394,24 @@ static int qdma_vf_dev_infos_get(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int qdma_vf_dev_stop(struct rte_eth_dev *dev)
+{
+	uint32_t qid;
+#ifdef RTE_LIBRTE_QDMA_DEBUG_DRIVER
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	PMD_DRV_LOG(INFO, "VF-%d(DEVFN) Stop H2C & C2H queues",
+			qdma_dev->func_id);
+#endif
+	/* reset driver's internal queue structures to default values */
+	for (qid = 0; qid < dev->data->nb_tx_queues; qid++)
+		qdma_vf_dev_tx_queue_stop(dev, qid);
+	for (qid = 0; qid < dev->data->nb_rx_queues; qid++)
+		qdma_vf_dev_rx_queue_stop(dev, qid);
+
+	return 0;
+}
+
 int qdma_vf_dev_close(struct rte_eth_dev *dev)
 {
 	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
@@ -371,6 +422,9 @@ int qdma_vf_dev_close(struct rte_eth_dev *dev)
 
 	PMD_DRV_LOG(INFO, "Closing all queues\n");
 
+	if (dev->data->dev_started)
+		qdma_vf_dev_stop(dev);
+
 	/* iterate over rx queues */
 	for (qid = 0; qid < dev->data->nb_rx_queues; ++qid) {
 		rxq = dev->data->rx_queues[qid];
@@ -729,6 +783,8 @@ int qdma_vf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qid)
 static struct eth_dev_ops qdma_vf_eth_dev_ops = {
 	.dev_configure        = qdma_vf_dev_configure,
 	.dev_infos_get        = qdma_vf_dev_infos_get,
+	.dev_start            = qdma_vf_dev_start,
+	.dev_stop             = qdma_vf_dev_stop,
 	.dev_close            = qdma_vf_dev_close,
 	.dev_reset            = qdma_vf_dev_reset,
 	.link_update          = qdma_vf_dev_link_update,
@@ -811,6 +867,8 @@ static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev)
 	dma_priv->h2c_bypass_mode = 0;
 
 	dev->dev_ops = &qdma_vf_eth_dev_ops;
+	dev->rx_pkt_burst = &qdma_recv_pkts;
+	dev->tx_pkt_burst = &qdma_xmit_pkts;
 
 	dma_priv->config_bar_idx = DEFAULT_VF_CONFIG_BAR;
 	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
@@ -913,6 +971,9 @@ static int eth_qdma_vf_dev_uninit(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
+	if (qdma_dev->dev_configured)
+		qdma_vf_dev_close(dev);
+
 	qdma_ethdev_offline(dev);
 
 	if (qdma_dev->reset_state != RESET_STATE_RECV_PF_RESET_REQ)
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 27/29] net/qdma: add device specific APIs for export
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (25 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 26/29] net/qdma: add datapath burst API " Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 28/29] net/qdma: add additional debug APIs Aman Kumar
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch defines a few PMD-specific APIs to be used
directly by applications; one example is selecting the
queue mode, either MM (memory mapped) or ST (streaming).
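
A minimal sketch of the intended call order (not part of the
patch; the queue id is hypothetical): the queue-mode API sits
between device configure and queue setup.

  #include <rte_ethdev.h>
  #include "rte_pmd_qdma.h"

  /* Put queue 0 of an already-configured port into MM (memory
   * mapped) mode; queues default to ST (streaming) otherwise.
   * Must run after rte_eth_dev_configure() and before
   * rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup().
   */
  static int select_mm_mode(int port_id)
  {
  	return rte_pmd_qdma_set_queue_mode(port_id, 0,
  			RTE_PMD_QDMA_MEMORY_MAPPED_MODE);
  }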

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build    |    1 +
 drivers/net/qdma/rte_pmd_qdma.c | 1728 +++++++++++++++++++++++++++++++
 drivers/net/qdma/rte_pmd_qdma.h |  425 ++++++++
 drivers/net/qdma/version.map    |   35 +
 4 files changed, 2189 insertions(+)
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.c
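
A rough lifecycle sketch for the MM completion-queue APIs added
below (setup, start, process, stop); ring size, entry count and
buffer size here are hypothetical:

  #include <stdint.h>
  #include <rte_memory.h>
  #include "rte_pmd_qdma.h"

  /* Illustrative MM completion-queue lifecycle for one queue. */
  static int run_mm_cmptq(int port_id, uint32_t qid)
  {
  	uint8_t cmpt_buff[256];	/* user buffer for completion data */
  	uint16_t n;
  	int ret;

  	ret = rte_pmd_qdma_dev_cmptq_setup(port_id, qid, 1024,
  			SOCKET_ID_ANY);
  	if (ret < 0)
  		return ret;

  	ret = rte_pmd_qdma_dev_cmptq_start(port_id, qid);
  	if (ret < 0)
  		return ret;

  	/* Drain up to 8 completion entries into the user buffer */
  	n = rte_pmd_qdma_mm_cmpt_process(port_id, qid, cmpt_buff, 8);
  	(void)n;

  	return rte_pmd_qdma_dev_cmptq_stop(port_id, qid);
  }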

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index c453d556b6..1dc4392666 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -39,4 +39,5 @@ sources = files(
         'qdma_access/qdma_mbox_protocol.c',
         'qdma_access/qdma_access_common.c',
         'qdma_access/qdma_platform.c',
+        'rte_pmd_qdma.c',
 )
diff --git a/drivers/net/qdma/rte_pmd_qdma.c b/drivers/net/qdma/rte_pmd_qdma.c
new file mode 100644
index 0000000000..dc864d2771
--- /dev/null
+++ b/drivers/net/qdma/rte_pmd_qdma.c
@@ -0,0 +1,1728 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <stdint.h>
+#include <sys/mman.h>
+#include <sys/fcntl.h>
+#include <rte_memzone.h>
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+#include <string.h>
+
+#include "qdma.h"
+#include "qdma_access_common.h"
+#include "rte_pmd_qdma.h"
+#include "qdma_devops.h"
+
+static int validate_qdma_dev_info(int port_id, uint16_t qid)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+
+	if (port_id < 0 || port_id >= rte_eth_dev_count_avail()) {
+		PMD_DRV_LOG(ERR, "Wrong port id %d\n", port_id);
+		return -ENOTSUP;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (!is_qdma_supported(dev)) {
+		PMD_DRV_LOG(ERR, "Device is not supported\n");
+		return -ENOTSUP;
+	}
+
+	if (qid >= qdma_dev->qsets_en) {
+		PMD_DRV_LOG(ERR, "Invalid Queue id passed, queue ID = %d\n",
+					qid);
+		return -EINVAL;
+	}
+
+	if (!qdma_dev->dev_configured) {
+		PMD_DRV_LOG(ERR,
+			"Device for port id %d is not configured yet\n",
+			port_id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int8_t qdma_get_trigger_mode(enum rte_pmd_qdma_tigger_mode_t mode)
+{
+	int8_t ret;
+	switch (mode) {
+	case RTE_PMD_QDMA_TRIG_MODE_DISABLE:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_DIS;
+		break;
+	case RTE_PMD_QDMA_TRIG_MODE_EVERY:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_EVERY;
+		break;
+	case RTE_PMD_QDMA_TRIG_MODE_USER_COUNT:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_USR_CNT;
+		break;
+	case RTE_PMD_QDMA_TRIG_MODE_USER:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_USR;
+		break;
+	case RTE_PMD_QDMA_TRIG_MODE_USER_TIMER:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_USR_TMR;
+		break;
+	case RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_TMR_CNTR;
+		break;
+	default:
+		ret = QDMA_CMPT_UPDATE_TRIG_MODE_USR_TMR;
+		break;
+	}
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_get_bar_details
+ * Description:		Returns the BAR indices of the QDMA BARs
+ *
+ * @param	port_id : Port ID
+ * @param	config_bar_idx : Config BAR index
+ * @param	user_bar_idx   : AXI Master Lite BAR(user bar) index
+ * @param	bypass_bar_idx : AXI Bridge Master BAR(bypass bar) index
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note   None.
+ */
+int rte_pmd_qdma_get_bar_details(int port_id, int32_t *config_bar_idx,
+			int32_t *user_bar_idx, int32_t *bypass_bar_idx)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *dma_priv;
+
+	if (port_id < 0 || port_id >= rte_eth_dev_count_avail()) {
+		PMD_DRV_LOG(ERR, "Wrong port id %d\n", port_id);
+		return -ENOTSUP;
+	}
+	dev = &rte_eth_devices[port_id];
+	dma_priv = dev->data->dev_private;
+	if (!is_qdma_supported(dev)) {
+		PMD_DRV_LOG(ERR, "Device is not supported\n");
+		return -ENOTSUP;
+	}
+
+	if (config_bar_idx != NULL)
+		*(config_bar_idx) = dma_priv->config_bar_idx;
+
+	if (user_bar_idx != NULL)
+		*(user_bar_idx) = dma_priv->user_bar_idx;
+
+	if (bypass_bar_idx != NULL)
+		*(bypass_bar_idx) = dma_priv->bypass_bar_idx;
+
+	return 0;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_get_queue_base
+ * Description:		Returns queue base for given port
+ *
+ * @param	port_id : Port ID.
+ * @param	queue_base : queue base.
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note    Application can call this API only after successful
+ *          call to rte_eth_dev_configure() API.
+ */
+int rte_pmd_qdma_get_queue_base(int port_id, uint32_t *queue_base)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *dma_priv;
+
+	if (port_id < 0 || port_id >= rte_eth_dev_count_avail()) {
+		PMD_DRV_LOG(ERR, "Wrong port id %d\n", port_id);
+		return -ENOTSUP;
+	}
+	dev = &rte_eth_devices[port_id];
+	dma_priv = dev->data->dev_private;
+	if (!is_qdma_supported(dev)) {
+		PMD_DRV_LOG(ERR, "Device is not supported\n");
+		return -ENOTSUP;
+	}
+
+	if (queue_base == NULL) {
+		PMD_DRV_LOG(ERR, "Caught NULL pointer for queue base\n");
+		return -EINVAL;
+	}
+
+	*(queue_base) = dma_priv->queue_base;
+
+	return 0;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_get_pci_func_type
+ * Description:		Retrieves pci function type i.e. PF or VF
+ *
+ * @param	port_id : Port ID.
+ * @param	func_type : Indicates pci function type.
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note    Returns the PCIe function type i.e. PF or VF of the given port.
+ */
+int rte_pmd_qdma_get_pci_func_type(int port_id,
+		enum rte_pmd_qdma_pci_func_type *func_type)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *dma_priv;
+
+	if (port_id < 0 || port_id >= rte_eth_dev_count_avail()) {
+		PMD_DRV_LOG(ERR, "Wrong port id %d\n", port_id);
+		return -ENOTSUP;
+	}
+	dev = &rte_eth_devices[port_id];
+	dma_priv = dev->data->dev_private;
+	if (!is_qdma_supported(dev)) {
+		PMD_DRV_LOG(ERR, "Device is not supported\n");
+		return -ENOTSUP;
+	}
+
+	if (func_type == NULL) {
+		PMD_DRV_LOG(ERR, "Caught NULL pointer for function type\n");
+		return -EINVAL;
+	}
+
+	*((enum rte_pmd_qdma_pci_func_type *)func_type) = (dma_priv->is_vf) ?
+			RTE_PMD_QDMA_PCI_FUNC_VF : RTE_PMD_QDMA_PCI_FUNC_PF;
+
+	return 0;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_get_immediate_data_state
+ * Description:		Returns immediate data state
+ *			i.e. whether enabled or disabled, for the specified
+ *			queue
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	state : Pointer to the state specifying whether
+ *			immediate data is enabled or not
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this function after
+ *		rte_eth_tx_queue_setup() or
+ *		rte_eth_rx_queue_setup() is called.
+ *		API is applicable for streaming queues only.
+ */
+int rte_pmd_qdma_get_immediate_data_state(int port_id, uint32_t qid,
+		int *state)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_rx_queue *rxq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qid >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid Q-id passed qid %d max en_qid %d\n",
+				qid, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	if (state == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid state for qid %d\n", qid);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+		if (rxq != NULL) {
+			*((int *)state) = rxq->dump_immediate_data;
+		} else {
+			PMD_DRV_LOG(ERR, "Qid %d is not setup\n", qid);
+			return -EINVAL;
+		}
+	} else {
+		PMD_DRV_LOG(ERR, "Qid %d is not setup in Streaming mode\n",
+				qid);
+		return -EINVAL;
+	}
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_queue_mode
+ * Description:		Sets queue mode for the specified queue
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	mode : Queue mode to be set
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before
+ *		rte_eth_tx_queue_setup/rte_eth_rx_queue_setup() API.
+ *		By default, all queues are setup in streaming mode.
+ */
+int rte_pmd_qdma_set_queue_mode(int port_id, uint32_t qid,
+		enum rte_pmd_qdma_queue_mode_t mode)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (mode >= RTE_PMD_QDMA_QUEUE_MODE_MAX) {
+		PMD_DRV_LOG(ERR, "Invalid Queue mode passed,Mode = %d\n", mode);
+		return -EINVAL;
+	}
+
+	qdma_dev->q_info[qid].queue_mode = mode;
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_immediate_data_state
+ * Description:		Sets immediate data state
+ *			i.e. enable or disable, for the specified queue.
+ *			If enabled, the user-defined data in the completion
+ *			ring is dumped into a queue-specific file
+ *			"q_<qid>_immmediate_data.txt" in the local directory.
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	value :	Immediate data state to be set
+ *			Set '0' to disable and '1' to enable
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this API after successful
+ *		call to rte_eth_dev_configure() API. Application can
+ *		also call this API after successful call to
+ *		rte_eth_rx_queue_setup() only if rx queue is not in
+ *		start state. This API is applicable for
+ *		streaming queues only.
+ */
+int rte_pmd_qdma_set_immediate_data_state(int port_id, uint32_t qid,
+		uint8_t state)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_rx_queue *rxq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qid >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid RX Queue id passed for %s,"
+				"Queue ID = %d\n", __func__, qid);
+		return -EINVAL;
+	}
+
+	if (state > 1) {
+		PMD_DRV_LOG(ERR, "Invalid value specified for immediate data "
+				"state %s, Queue ID = %d\n", __func__, qid);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->q_info[qid].queue_mode !=
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		PMD_DRV_LOG(ERR, "Qid %d is not setup in ST mode\n", qid);
+		return -EINVAL;
+	}
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+	if (rxq == NULL) {
+		/* Update the configuration in q_info structure
+		 * if rx queue is not setup.
+		 */
+		qdma_dev->q_info[qid].immediate_data_state = state;
+	} else if (dev->data->rx_queue_state[qid] ==
+			RTE_ETH_QUEUE_STATE_STOPPED) {
+		/* Update the config in both q_info and rxq structures,
+		 * only if rx queue is setup but not yet started.
+		 */
+		qdma_dev->q_info[qid].immediate_data_state = state;
+		rxq->dump_immediate_data = state;
+	} else {
+		PMD_DRV_LOG(ERR,
+			"Cannot configure when Qid %d is in start state\n",
+			qid);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_cmpt_overflow_check
+ * Description:		Enables or disables the overflow check
+ *			(whether PIDX is overflowing the CIDX) performed by
+ *			QDMA on the completion descriptor ring of specified
+ *			queue.
+ *
+ * @param	port_id : Port ID.
+ * @param	qid	   : Queue ID.
+ * @param	enable :  '1' to enable and '0' to disable the overflow check
+ *
+ * @return	'0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() API. Application can also call this
+ *		API after successful call to rte_eth_rx_queue_setup()/
+ *		rte_pmd_qdma_dev_cmptq_setup() API only if
+ *		rx/cmpt queue is not in start state.
+ */
+int rte_pmd_qdma_set_cmpt_overflow_check(int port_id, uint32_t qid,
+		uint8_t enable)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_cmpt_queue *cmptq;
+	struct qdma_rx_queue *rxq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (enable > 1)
+		return -EINVAL;
+
+	if (!qdma_dev->dev_cap.cmpt_ovf_chk_dis) {
+		PMD_DRV_LOG(ERR, "%s: Completion overflow check disable is "
+			"not supported in the current design\n", __func__);
+		return -EINVAL;
+	}
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		if (qid >= dev->data->nb_rx_queues) {
+			PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s,"
+					"Queue ID(ST-mode) = %d\n", __func__,
+					qid);
+			return -EINVAL;
+		}
+
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+		if (rxq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if rx queue is not setup.
+			 */
+			qdma_dev->q_info[qid].dis_cmpt_ovf_chk =
+					(enable == 1) ? 0 : 1;
+		} else if (dev->data->rx_queue_state[qid] ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the config in both q_info and rxq structures,
+			 * only if rx queue is setup but not yet started.
+			 */
+			qdma_dev->q_info[qid].dis_cmpt_ovf_chk =
+					(enable == 1) ? 0 : 1;
+			rxq->dis_overflow_check =
+					qdma_dev->q_info[qid].dis_cmpt_ovf_chk;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	} else {
+		cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+		if (cmptq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if cmpt queue is not setup.
+			 */
+			qdma_dev->q_info[qid].dis_cmpt_ovf_chk =
+					(enable == 1) ? 0 : 1;
+		} else if (cmptq->status ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the configuration in both q_info and cmptq
+			 * structures if cmpt queue is already setup.
+			 */
+			qdma_dev->q_info[qid].dis_cmpt_ovf_chk =
+					(enable == 1) ? 0 : 1;
+			cmptq->dis_overflow_check =
+					qdma_dev->q_info[qid].dis_cmpt_ovf_chk;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_cmpt_descriptor_size
+ * Description:		Configures the completion ring descriptor size
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	size : Descriptor size to be configured
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before rte_eth_rx_queue_setup() API
+ *		when queue is in streaming mode, and before
+ *		rte_pmd_qdma_dev_cmptq_setup() when queue is in memory mapped
+ *		mode.
+ *		By default, the completion descriptor size is set to 8 bytes.
+ */
+int rte_pmd_qdma_set_cmpt_descriptor_size(int port_id, uint32_t qid,
+		enum rte_pmd_qdma_cmpt_desc_len size)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		if (qid >= dev->data->nb_rx_queues) {
+			PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s,"
+					"Queue ID(ST-mode) = %d\n", __func__,
+					qid);
+			return -EINVAL;
+		}
+	}
+
+	if (size != RTE_PMD_QDMA_CMPT_DESC_LEN_8B &&
+			size != RTE_PMD_QDMA_CMPT_DESC_LEN_16B &&
+			size != RTE_PMD_QDMA_CMPT_DESC_LEN_32B &&
+			(size != RTE_PMD_QDMA_CMPT_DESC_LEN_64B ||
+			!qdma_dev->dev_cap.cmpt_desc_64b)) {
+		PMD_DRV_LOG(ERR, "Invalid Size passed for %s, Size = %d\n",
+				__func__, size);
+		return -EINVAL;
+	}
+
+	qdma_dev->q_info[qid].cmpt_desc_sz = size;
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_cmpt_trigger_mode
+ * Description:		Configures the trigger mode for completion ring CIDX
+ *			updates
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	mode : Trigger mode to be configured
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful
+ *		call to rte_eth_dev_configure() API. Application can
+ *		also call this API after successful call to
+ *		rte_eth_rx_queue_setup()/rte_pmd_qdma_dev_cmptq_setup()
+ *		API only if rx/cmpt queue is not in start state.
+ *		By default, trigger mode is set to Counter + Timer.
+ */
+int rte_pmd_qdma_set_cmpt_trigger_mode(int port_id, uint32_t qid,
+				enum rte_pmd_qdma_tigger_mode_t mode)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_cmpt_queue *cmptq;
+	struct qdma_rx_queue *rxq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (mode >= RTE_PMD_QDMA_TRIG_MODE_MAX) {
+		PMD_DRV_LOG(ERR, "Invalid Trigger mode passed\n");
+		return -EINVAL;
+	}
+
+	if (mode == RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT &&
+	    !qdma_dev->dev_cap.cmpt_trig_count_timer) {
+		PMD_DRV_LOG(ERR, "%s: Trigger mode %d is "
+			"not supported in the current design\n",
+			__func__, mode);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		if (qid >= dev->data->nb_rx_queues) {
+			PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s,"
+					"Queue ID(ST-mode) = %d\n", __func__,
+					qid);
+			return -EINVAL;
+		}
+
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+		if (rxq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if rx queue is not setup.
+			 */
+			qdma_dev->q_info[qid].trigger_mode =
+					qdma_get_trigger_mode(mode);
+		} else if (dev->data->rx_queue_state[qid] ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the config in both q_info and rxq structures,
+			 * only if rx queue is setup but not yet started.
+			 */
+			qdma_dev->q_info[qid].trigger_mode =
+					qdma_get_trigger_mode(mode);
+			rxq->triggermode = qdma_dev->q_info[qid].trigger_mode;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	} else if (qdma_dev->dev_cap.mm_cmpt_en) {
+		cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+		if (cmptq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if cmpt queue is not setup.
+			 */
+			qdma_dev->q_info[qid].trigger_mode =
+					qdma_get_trigger_mode(mode);
+		} else if (cmptq->status ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the configuration in both q_info and cmptq
+			 * structures if cmpt queue is already setup.
+			 */
+			qdma_dev->q_info[qid].trigger_mode =
+					qdma_get_trigger_mode(mode);
+			cmptq->triggermode =
+					qdma_dev->q_info[qid].trigger_mode;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	} else {
+		PMD_DRV_LOG(ERR, "Unable to set trigger mode for %s,"
+					"Queue ID = %d, Queue Mode = %d\n",
+					__func__,
+					qid, qdma_dev->q_info[qid].queue_mode);
+	}
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_cmpt_timer
+ * Description:		Configures the timer interval in microseconds to trigger
+ *			the completion ring CIDX updates
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	value : Timer interval for completion trigger to be configured
+ *
+ * @return	'0' on success and "<0" on failure.
+ *
+ * @note	Application can call this API after successful
+ *		call to rte_eth_dev_configure() API. Application can
+ *		also call this API after successful call to
+ *		rte_eth_rx_queue_setup()/rte_pmd_qdma_dev_cmptq_setup() API
+ *		only if rx/cmpt queue is not in start state.
+ */
+int rte_pmd_qdma_set_cmpt_timer(int port_id, uint32_t qid, uint32_t value)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_cmpt_queue *cmptq;
+	struct qdma_rx_queue *rxq;
+	int8_t timer_index;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	timer_index = index_of_array(qdma_dev->g_c2h_timer_cnt,
+			QDMA_NUM_C2H_TIMERS,
+			value);
+
+	if (timer_index < 0) {
+		PMD_DRV_LOG(ERR, "Expected timer %d not found\n", value);
+		return -ENOTSUP;
+	}
+
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_STREAMING_MODE) {
+		if (qid >= dev->data->nb_rx_queues) {
+			PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s,"
+					"Queue ID(ST-mode) = %d\n", __func__,
+					qid);
+			return -EINVAL;
+		}
+
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+		if (rxq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if rx queue is not setup.
+			 */
+			qdma_dev->q_info[qid].timer_count = value;
+		} else if (dev->data->rx_queue_state[qid] ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the config in both q_info and rxq structures,
+			 * only if rx queue is setup but not yet started.
+			 */
+			qdma_dev->q_info[qid].timer_count = value;
+			rxq->timeridx = timer_index;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	} else if (qdma_dev->dev_cap.mm_cmpt_en) {
+		cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+		if (cmptq == NULL) {
+			/* Update the configuration in q_info structure
+			 * if cmpt queue is not setup.
+			 */
+			qdma_dev->q_info[qid].timer_count = value;
+		} else if (cmptq->status ==
+				RTE_ETH_QUEUE_STATE_STOPPED) {
+			/* Update the configuration in both q_info and cmptq
+			 * structures if cmpt queue is already setup.
+			 */
+			qdma_dev->q_info[qid].timer_count = value;
+			cmptq->timeridx = timer_index;
+		} else {
+			PMD_DRV_LOG(ERR,
+				"Cannot configure when Qid %d is in start state\n",
+				qid);
+			return -EINVAL;
+		}
+	} else {
+		PMD_DRV_LOG(ERR, "Unable to set trigger mode for %s,"
+					"Queue ID = %d, Queue Mode = %d\n",
+					__func__,
+					qid, qdma_dev->q_info[qid].queue_mode);
+	}
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_c2h_descriptor_prefetch
+ * Description:		Enables or disables prefetch of the descriptors by
+ *			the prefetch engine
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	enable : '1' to enable and '0' to disable the descriptor prefetch
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful
+ *		call to rte_eth_dev_configure() API. Application can
+ *		also call this API after successful call to
+ *		rte_eth_rx_queue_setup() API, only if rx queue
+ *		is not in start state.
+ */
+int rte_pmd_qdma_set_c2h_descriptor_prefetch(int port_id, uint32_t qid,
+		uint8_t enable)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_rx_queue *rxq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qid >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s, "
+				"Queue ID = %d\n", __func__, qid);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->q_info[qid].queue_mode ==
+			RTE_PMD_QDMA_MEMORY_MAPPED_MODE) {
+		PMD_DRV_LOG(ERR, "%s() not supported for qid %d in MM mode",
+				__func__, qid);
+		return -ENOTSUP;
+	}
+
+	rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+
+	if (rxq == NULL) {
+		/* Update the configuration in q_info structure
+		 * if rx queue is not setup.
+		 */
+		qdma_dev->q_info[qid].en_prefetch = (enable > 0) ? 1 : 0;
+	} else if (dev->data->rx_queue_state[qid] ==
+			RTE_ETH_QUEUE_STATE_STOPPED) {
+		/* Update the config in both q_info and rxq structures,
+		 * only if rx queue is setup but not yet started.
+		 */
+		qdma_dev->q_info[qid].en_prefetch = (enable > 0) ? 1 : 0;
+		rxq->en_prefetch = qdma_dev->q_info[qid].en_prefetch;
+	} else {
+		PMD_DRV_LOG(ERR,
+			"Cannot configure when Qid %d is in start state\n",
+			qid);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_set_mm_endpoint_addr
+ * Description:		Sets the PCIe endpoint memory offset at which to
+ *			perform DMA operation for the specified queue operating
+ *			in memory mapped mode.
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	dir : direction i.e. TX or RX.
+ * @param	addr : Destination address for TX , Source address for RX
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	This API can be called before TX/RX burst API's
+ *		(rte_eth_tx_burst() and rte_eth_rx_burst()) are called.
+ */
+int rte_pmd_qdma_set_mm_endpoint_addr(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_dir_type dir, uint32_t addr)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_rx_queue *rxq;
+	struct qdma_tx_queue *txq;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qdma_dev->q_info[qid].queue_mode !=
+		RTE_PMD_QDMA_MEMORY_MAPPED_MODE) {
+		PMD_DRV_LOG(ERR, "Invalid Queue mode for %s, Queue ID = %d,"
+				"mode = %d\n", __func__, qid,
+				qdma_dev->q_info[qid].queue_mode);
+		return -EINVAL;
+	}
+
+	if (dir == RTE_PMD_QDMA_TX) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+		if (txq != NULL) {
+			txq->ep_addr = addr;
+		} else {
+			PMD_DRV_LOG(ERR, "Qid %d is not setup\n", qid);
+			return -EINVAL;
+		}
+	} else if (dir == RTE_PMD_QDMA_RX) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+		if (rxq != NULL) {
+			rxq->ep_addr = addr;
+		} else {
+			PMD_DRV_LOG(ERR, "Qid %d is not setup\n", qid);
+			return -EINVAL;
+		}
+	} else {
+		PMD_DRV_LOG(ERR, "Invalid direction specified,"
+			"Direction is %d\n", dir);
+		return -EINVAL;
+	}
+	return ret;
+}
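+
+/*
+ * Illustrative usage only (offsets are hypothetical): point queue 0's
+ * MM DMA window at endpoint offset 0x0 for Tx and 0x1000 for Rx before
+ * calling the burst APIs:
+ *
+ *	rte_pmd_qdma_set_mm_endpoint_addr(port_id, 0, RTE_PMD_QDMA_TX, 0x0);
+ *	rte_pmd_qdma_set_mm_endpoint_addr(port_id, 0, RTE_PMD_QDMA_RX, 0x1000);
+ */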
+
+/**
+ * Function Name:	rte_pmd_qdma_configure_tx_bypass
+ * Description:		Sets the TX bypass mode and bypass descriptor size
+ *			for the specified queue
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	bypass_mode : Bypass mode to be set
+ * @param	size : Bypass descriptor size to be set
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before rte_eth_tx_queue_setup() API.
+ *		By default, all queues are configured in internal mode
+ *		i.e. bypass disabled.
+ *		If size is specified as zero, then the bypass descriptor size is
+ *		set to the one used in internal mode.
+ */
+int rte_pmd_qdma_configure_tx_bypass(int port_id, uint32_t qid,
+		enum rte_pmd_qdma_tx_bypass_mode bypass_mode,
+		enum rte_pmd_qdma_bypass_desc_len size)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qid < dev->data->nb_tx_queues) {
+		if (bypass_mode >= RTE_PMD_QDMA_TX_BYPASS_MAX) {
+			PMD_DRV_LOG(ERR, "Invalid Tx Bypass mode : %d\n",
+					bypass_mode);
+			return -EINVAL;
+		}
+		if (qdma_dev->dev_cap.sw_desc_64b) {
+			/* 64-byte descriptor size is supported
+			 * in >2018.3 example design only
+			 */
+			if (size != 0 && size != 64) {
+				PMD_DRV_LOG(ERR, "%s: Descriptor size %d not supported. "
+				"Only 64B and internal mode "
+				"descriptor sizes (size = 0) "
+				"are supported by the driver in > 2018.3 "
+				"example design\n", __func__, size);
+				return -EINVAL;
+			}
+		} else {
+			/* In the 2018.2 design, only internal mode
+			 * descriptor sizes are supported, so the bypass
+			 * descriptor size cannot be configured.
+			 * Size 0 indicates internal mode descriptor size.
+			 */
+			if (size != 0) {
+				PMD_DRV_LOG(ERR, "%s: Descriptor size %d not supported. Only "
+				"internal mode descriptor sizes (size = 0) "
+				"are supported in the current design.\n",
+				__func__, size);
+				return -EINVAL;
+			}
+		}
+		qdma_dev->q_info[qid].tx_bypass_mode = bypass_mode;
+
+		qdma_dev->q_info[qid].tx_bypass_desc_sz = size;
+	} else {
+		PMD_DRV_LOG(ERR, "Invalid queue ID specified, Queue ID = %d\n",
+				qid);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_configure_rx_bypass
+ * Description:		Sets the RX bypass mode and bypass descriptor size for
+ *			the specified queue
+ *
+ * @param	port_id : Port ID.
+ * @param	qid : Queue ID.
+ * @param	bypass_mode : Bypass mode to be set
+ * @param	size : Bypass descriptor size to be set
+ *
+ * @return	'0' on success and '<0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before rte_eth_rx_queue_setup() API.
+ *		By default, all queues are configured in internal mode
+ *		i.e. bypass disabled.
+ *		If size is specified as zero, then the bypass descriptor size is
+ *		set to the one used in internal mode.
+ */
+int rte_pmd_qdma_configure_rx_bypass(int port_id, uint32_t qid,
+		enum rte_pmd_qdma_rx_bypass_mode bypass_mode,
+		enum rte_pmd_qdma_bypass_desc_len size)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qid < dev->data->nb_rx_queues) {
+		if (bypass_mode >= RTE_PMD_QDMA_RX_BYPASS_MAX) {
+			PMD_DRV_LOG(ERR, "Invalid Rx Bypass mode : %d\n",
+					bypass_mode);
+			return -EINVAL;
+		}
+
+		if (qdma_dev->dev_cap.sw_desc_64b) {
+			/* 64-byte descriptor size is supported
+			 * in >2018.3 example design only
+			 */
+			if (size != 0 && size != 64) {
+				PMD_DRV_LOG(ERR, "%s: Descriptor size %d not supported. "
+				"Only 64B and internal mode "
+				"descriptor sizes (size = 0) "
+				"are supported by the driver in > 2018.3 "
+				"example design\n", __func__, size);
+				return -EINVAL;
+			}
+		} else {
+			/* In the 2018.2 design, only internal mode
+			 * descriptor sizes are supported, so the bypass
+			 * descriptor size cannot be configured.
+			 * Size 0 indicates internal mode descriptor size.
+			 */
+			if (size != 0) {
+				PMD_DRV_LOG(ERR, "%s: Descriptor size %d not supported. Only "
+				"internal mode descriptor sizes (size = 0) "
+				"are supported in the current design.\n",
+				__func__, size);
+				return -EINVAL;
+			}
+		}
+
+		qdma_dev->q_info[qid].rx_bypass_mode = bypass_mode;
+
+		qdma_dev->q_info[qid].rx_bypass_desc_sz = size;
+	} else {
+		PMD_DRV_LOG(ERR, "Invalid queue ID specified, Queue ID = %d\n",
+				qid);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_get_device_capabilities
+ * Description:		Retrieve the device capabilities
+ *
+ * @param   port_id : Port ID.
+ * @param   dev_attr:Pointer to the device capabilities structure
+ *
+ * @return  '0' on success and '< 0' on failure.
+ *
+ * @note	None.
+ */
+int rte_pmd_qdma_get_device_capabilities(int port_id,
+		struct rte_pmd_qdma_dev_attributes *dev_attr)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+
+	if (port_id < 0 || port_id >= rte_eth_dev_count_avail()) {
+		PMD_DRV_LOG(ERR, "Wrong port id %d\n", port_id);
+		return -ENOTSUP;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (!is_qdma_supported(dev)) {
+		PMD_DRV_LOG(ERR, "Device is not supported\n");
+		return -ENOTSUP;
+	}
+
+	if (dev_attr == NULL) {
+		PMD_DRV_LOG(ERR, "Caught NULL pointer for dev_attr\n");
+		return -EINVAL;
+	}
+
+	dev_attr->num_pfs = qdma_dev->dev_cap.num_pfs;
+	dev_attr->num_qs = qdma_dev->dev_cap.num_qs;
+	dev_attr->flr_present = qdma_dev->dev_cap.flr_present;
+	dev_attr->st_en = qdma_dev->dev_cap.st_en;
+	dev_attr->mm_en = qdma_dev->dev_cap.mm_en;
+	dev_attr->mm_cmpt_en = qdma_dev->dev_cap.mm_cmpt_en;
+	dev_attr->mailbox_en = qdma_dev->dev_cap.mailbox_en;
+	dev_attr->mm_channel_max = qdma_dev->dev_cap.mm_channel_max;
+	dev_attr->debug_mode = qdma_dev->dev_cap.debug_mode;
+	dev_attr->desc_eng_mode = qdma_dev->dev_cap.desc_eng_mode;
+	dev_attr->cmpt_ovf_chk_dis = qdma_dev->dev_cap.cmpt_ovf_chk_dis;
+	dev_attr->sw_desc_64b = qdma_dev->dev_cap.sw_desc_64b;
+	dev_attr->cmpt_desc_64b = qdma_dev->dev_cap.cmpt_desc_64b;
+	dev_attr->cmpt_trig_count_timer =
+				qdma_dev->dev_cap.cmpt_trig_count_timer;
+
+	switch (qdma_dev->device_type) {
+	case QDMA_DEVICE_SOFT:
+		dev_attr->device_type = RTE_PMD_QDMA_DEVICE_SOFT;
+		break;
+	case QDMA_DEVICE_VERSAL:
+		dev_attr->device_type = RTE_PMD_QDMA_DEVICE_VERSAL;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: Invalid device type "
+			"Id = %d\n",	__func__, qdma_dev->device_type);
+		return -EINVAL;
+	}
+
+	switch (qdma_dev->ip_type) {
+	case QDMA_VERSAL_HARD_IP:
+		dev_attr->ip_type =
+			RTE_PMD_QDMA_VERSAL_HARD_IP;
+		break;
+	case QDMA_VERSAL_SOFT_IP:
+		dev_attr->ip_type =
+			RTE_PMD_QDMA_VERSAL_SOFT_IP;
+		break;
+	case QDMA_SOFT_IP:
+		dev_attr->ip_type =
+			RTE_PMD_QDMA_SOFT_IP;
+		break;
+	case EQDMA_SOFT_IP:
+		dev_attr->ip_type =
+			RTE_PMD_EQDMA_SOFT_IP;
+		break;
+	default:
+		dev_attr->ip_type = RTE_PMD_QDMA_NONE_IP;
+		PMD_DRV_LOG(ERR, "%s: Invalid IP type "
+			"ip_type = %d\n", __func__,
+			qdma_dev->ip_type);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * Function Name:	rte_pmd_qdma_dev_cmptq_setup
+ * Description:		Allocate and set up a completion queue for
+ *			memory mapped mode.
+ *
+ * @param   port_id : Port ID.
+ * @param   qid : Queue ID.
+ * @param   nb_cmpt_desc:	Completion queue ring size.
+ * @param   socket_id :	The socket_id argument is the socket identifier
+ *			in case of NUMA. Its value can be SOCKET_ID_ANY
+ *			if there is no NUMA constraint for the DMA memory
+ *			allocated for the transmit descriptors of the ring.
+ *
+ * @return  '0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() and rte_pmd_qdma_set_queue_mode()
+ *		for queues in memory mapped mode.
+ */
+
+int rte_pmd_qdma_dev_cmptq_setup(int port_id, uint32_t cmpt_queue_id,
+				 uint16_t nb_cmpt_desc,
+				 unsigned int socket_id)
+{
+	struct rte_eth_dev *dev;
+	uint32_t sz;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_cmpt_queue *cmptq = NULL;
+	int err;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, cmpt_queue_id);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (nb_cmpt_desc == 0) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor ring size %d\n",
+				nb_cmpt_desc);
+		return -EINVAL;
+	}
+
+	if (!qdma_dev->dev_cap.mm_cmpt_en) {
+		PMD_DRV_LOG(ERR, "Completion Queue support for MM-mode "
+					"not enabled");
+		return -EINVAL;
+	}
+
+	if (!qdma_dev->is_vf) {
+		err = qdma_dev_increment_active_queue
+					(qdma_dev->dma_device_index,
+					qdma_dev->func_id,
+					QDMA_DEV_Q_TYPE_CMPT);
+		if (err != QDMA_SUCCESS)
+			return -EINVAL;
+	} else {
+		err = qdma_dev_notify_qadd(dev, cmpt_queue_id +
+				qdma_dev->queue_base, QDMA_DEV_Q_TYPE_CMPT);
+
+		if (err < 0) {
+			PMD_DRV_LOG(ERR, "%s: Queue addition failed for CMPT Queue ID "
+				"%d\n",	__func__, cmpt_queue_id);
+			return -EINVAL;
+		}
+	}
+
+	if (!qdma_dev->init_q_range) {
+		if (qdma_dev->is_vf) {
+			err = qdma_vf_csr_read(dev);
+			if (err < 0)
+				goto cmptq_setup_err;
+		} else {
+			err = qdma_pf_csr_read(dev);
+			if (err < 0)
+				goto cmptq_setup_err;
+		}
+		qdma_dev->init_q_range = 1;
+	}
+
+	/* Allocate cmpt queue data structure */
+	cmptq = rte_zmalloc("QDMA_CmptQ", sizeof(struct qdma_cmpt_queue),
+						RTE_CACHE_LINE_SIZE);
+
+	if (!cmptq) {
+		PMD_DRV_LOG(ERR, "Unable to allocate structure cmptq of "
+				"size %d\n",
+				(int)(sizeof(struct qdma_cmpt_queue)));
+		err = -ENOMEM;
+		goto cmptq_setup_err;
+	}
+
+	cmptq->queue_id = cmpt_queue_id;
+	cmptq->port_id = dev->data->port_id;
+	cmptq->func_id = qdma_dev->func_id;
+	cmptq->dev = dev;
+	cmptq->st_mode = qdma_dev->q_info[cmpt_queue_id].queue_mode;
+	cmptq->triggermode = qdma_dev->q_info[cmpt_queue_id].trigger_mode;
+	cmptq->nb_cmpt_desc = nb_cmpt_desc + 1;
+	cmptq->cmpt_desc_len = qdma_dev->q_info[cmpt_queue_id].cmpt_desc_sz;
+	if (cmptq->cmpt_desc_len == RTE_PMD_QDMA_CMPT_DESC_LEN_64B &&
+	    !qdma_dev->dev_cap.cmpt_desc_64b) {
+		PMD_DRV_LOG(ERR, "%s: PF-%d(DEVFN) CMPT of 64B is not supported in the "
+			"current design\n",  __func__, qdma_dev->func_id);
+		return -ENOTSUP;
+	}
+	/* Find completion ring size index */
+	cmptq->ringszidx = index_of_array(qdma_dev->g_ring_sz,
+			QDMA_NUM_RING_SIZES,
+			cmptq->nb_cmpt_desc);
+	if (cmptq->ringszidx < 0) {
+		PMD_DRV_LOG(ERR, "Expected completion ring size %d not found\n",
+				cmptq->nb_cmpt_desc);
+		err = -EINVAL;
+		goto cmptq_setup_err;
+	}
+
+	/* Find Threshold index */
+	cmptq->threshidx = index_of_array(qdma_dev->g_c2h_cnt_th,
+						QDMA_NUM_C2H_COUNTERS,
+						DEFAULT_MM_CMPT_CNT_THRESHOLD);
+	if (cmptq->threshidx < 0) {
+		PMD_DRV_LOG(ERR, "Expected Threshold %d not found,"
+				" using the value %d at index 0\n",
+				DEFAULT_MM_CMPT_CNT_THRESHOLD,
+				qdma_dev->g_c2h_cnt_th[0]);
+		cmptq->threshidx = 0;
+	}
+
+	/* Find Timer index */
+	cmptq->timeridx = index_of_array(qdma_dev->g_c2h_timer_cnt,
+			QDMA_NUM_C2H_TIMERS,
+			qdma_dev->q_info[cmpt_queue_id].timer_count);
+	if (cmptq->timeridx < 0) {
+		PMD_DRV_LOG(ERR, "Expected timer %d not found, "
+				"using the value %d at index 1\n",
+				qdma_dev->q_info[cmpt_queue_id].timer_count,
+				qdma_dev->g_c2h_timer_cnt[1]);
+		cmptq->timeridx = 1;
+	}
+
+	cmptq->dis_overflow_check =
+			qdma_dev->q_info[cmpt_queue_id].dis_cmpt_ovf_chk;
+
+	/* Allocate memory for completion(CMPT) descriptor ring */
+	sz = (cmptq->nb_cmpt_desc) * cmptq->cmpt_desc_len;
+	cmptq->cmpt_mz = qdma_zone_reserve(dev, "RxHwCmptRn",
+			cmpt_queue_id, sz, socket_id);
+	if (!cmptq->cmpt_mz) {
+		PMD_DRV_LOG(ERR, "Unable to allocate cmptq->cmpt_mz "
+				"of size %d\n", sz);
+		err = -ENOMEM;
+		goto cmptq_setup_err;
+	}
+	cmptq->cmpt_ring = (struct qdma_ul_cmpt_ring *)cmptq->cmpt_mz->addr;
+
+	/* Write-back status structure */
+	cmptq->wb_status = (struct wb_status *)((uint64_t)cmptq->cmpt_ring +
+			(((uint64_t)cmptq->nb_cmpt_desc - 1) *
+			 cmptq->cmpt_desc_len));
+	memset(cmptq->cmpt_ring, 0, sz);
+	qdma_dev->cmpt_queues[cmpt_queue_id] = cmptq;
+	return ret;
+
+cmptq_setup_err:
+	if (!qdma_dev->is_vf)
+		qdma_dev_decrement_active_queue(qdma_dev->dma_device_index,
+				qdma_dev->func_id, QDMA_DEV_Q_TYPE_CMPT);
+	else
+		qdma_dev_notify_qdel(dev, cmpt_queue_id +
+				qdma_dev->queue_base, QDMA_DEV_Q_TYPE_CMPT);
+
+	if (cmptq) {
+		if (cmptq->cmpt_mz)
+			rte_memzone_free(cmptq->cmpt_mz);
+		rte_free(cmptq);
+	}
+	return err;
+}
+
+static int qdma_vf_cmptq_context_write(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t qid_hw;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	struct mbox_descq_conf descq_conf;
+	int rv;
+	struct qdma_cmpt_queue *cmptq;
+	uint8_t cmpt_desc_fmt;
+
+	if (!m)
+		return -ENOMEM;
+	memset(&descq_conf, 0, sizeof(struct mbox_descq_conf));
+	cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+	qid_hw =  qdma_dev->queue_base + cmptq->queue_id;
+
+	switch (cmptq->cmpt_desc_len) {
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_8B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_16B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_16B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_32B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_32B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_64B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_64B;
+		break;
+	default:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	}
+
+	descq_conf.irq_arm = 0;
+	descq_conf.at = 0;
+	descq_conf.wbk_en = 1;
+	descq_conf.irq_en = 0;
+	descq_conf.desc_sz = SW_DESC_CNTXT_MEMORY_MAP_DMA;
+	descq_conf.forced_en = 1;
+	descq_conf.cmpt_ring_bs_addr = cmptq->cmpt_mz->iova;
+	descq_conf.cmpt_desc_sz = cmpt_desc_fmt;
+	descq_conf.triggermode = cmptq->triggermode;
+
+	descq_conf.cmpt_color = CMPT_DEFAULT_COLOR_BIT;
+	descq_conf.cmpt_full_upd = 0;
+	descq_conf.cnt_thres = qdma_dev->g_c2h_cnt_th[cmptq->threshidx];
+	descq_conf.timer_thres = qdma_dev->g_c2h_timer_cnt[cmptq->timeridx];
+	descq_conf.cmpt_ringsz = qdma_dev->g_ring_sz[cmptq->ringszidx] - 1;
+	descq_conf.cmpt_int_en = 0;
+	descq_conf.cmpl_stat_en = 1; /* Enable stats for MM-CMPT */
+
+	if (qdma_dev->dev_cap.cmpt_ovf_chk_dis)
+		descq_conf.dis_overflow_check = cmptq->dis_overflow_check;
+
+	descq_conf.func_id = cmptq->func_id;
+
+	qdma_mbox_compose_vf_qctxt_write(cmptq->func_id, qid_hw,
+						cmptq->st_mode, 1,
+						QDMA_MBOX_CMPT_CTXT_ONLY,
+						&descq_conf, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		PMD_DRV_LOG(ERR, "%x, qid_hw 0x%x, mbox failed %d.\n",
+				qdma_dev->func_id, qid_hw, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+	cmptq->cmpt_cidx_info.counter_idx = cmptq->threshidx;
+	cmptq->cmpt_cidx_info.timer_idx = cmptq->timeridx;
+	cmptq->cmpt_cidx_info.trig_mode = cmptq->triggermode;
+	cmptq->cmpt_cidx_info.wrb_en = 1;
+	cmptq->cmpt_cidx_info.wrb_cidx = 0;
+	qdma_dev->hw_access->qdma_queue_cmpt_cidx_update(dev, qdma_dev->is_vf,
+			qid, &cmptq->cmpt_cidx_info);
+	cmptq->status = RTE_ETH_QUEUE_STATE_STARTED;
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_pf_cmptq_context_write(struct rte_eth_dev *dev, uint32_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t queue_base =  qdma_dev->queue_base;
+	uint8_t cmpt_desc_fmt;
+	int err = 0;
+	struct qdma_descq_cmpt_ctxt q_cmpt_ctxt;
+
+	cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+	memset(&q_cmpt_ctxt, 0, sizeof(struct qdma_descq_cmpt_ctxt));
+	/* Clear Completion Context */
+	qdma_dev->hw_access->qdma_cmpt_ctx_conf(dev, qid,
+				&q_cmpt_ctxt, QDMA_HW_ACCESS_CLEAR);
+
+	switch (cmptq->cmpt_desc_len) {
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_8B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_16B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_16B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_32B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_32B;
+		break;
+	case RTE_PMD_QDMA_CMPT_DESC_LEN_64B:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_64B;
+		break;
+	default:
+		cmpt_desc_fmt = CMPT_CNTXT_DESC_SIZE_8B;
+		break;
+	}
+
+	q_cmpt_ctxt.en_stat_desc = 1;
+	q_cmpt_ctxt.trig_mode = cmptq->triggermode;
+	q_cmpt_ctxt.fnc_id = cmptq->func_id;
+	q_cmpt_ctxt.counter_idx = cmptq->threshidx;
+	q_cmpt_ctxt.timer_idx = cmptq->timeridx;
+	q_cmpt_ctxt.color = CMPT_DEFAULT_COLOR_BIT;
+	q_cmpt_ctxt.ringsz_idx = cmptq->ringszidx;
+	q_cmpt_ctxt.bs_addr = (uint64_t)cmptq->cmpt_mz->iova;
+	q_cmpt_ctxt.desc_sz = cmpt_desc_fmt;
+	q_cmpt_ctxt.valid = 1;
+
+	if (qdma_dev->dev_cap.cmpt_ovf_chk_dis)
+		q_cmpt_ctxt.ovf_chk_dis = cmptq->dis_overflow_check;
+
+	/* Set Completion Context */
+	err = qdma_dev->hw_access->qdma_cmpt_ctx_conf(dev, (qid + queue_base),
+				&q_cmpt_ctxt, QDMA_HW_ACCESS_WRITE);
+	if (err < 0)
+		return qdma_dev->hw_access->qdma_get_error_code(err);
+
+	cmptq->cmpt_cidx_info.counter_idx = cmptq->threshidx;
+	cmptq->cmpt_cidx_info.timer_idx = cmptq->timeridx;
+	cmptq->cmpt_cidx_info.trig_mode = cmptq->triggermode;
+	cmptq->cmpt_cidx_info.wrb_en = 1;
+	cmptq->cmpt_cidx_info.wrb_cidx = 0;
+	qdma_dev->hw_access->qdma_queue_cmpt_cidx_update(dev, qdma_dev->is_vf,
+			qid, &cmptq->cmpt_cidx_info);
+	cmptq->status = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/**
+ * Function Name:   rte_pmd_qdma_dev_cmptq_start
+ * Description:     Start the MM completion queue.
+ *
+ * @param   port_id : Port ID.
+ * @param   qid : Queue ID.
+ *
+ * @return  '0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_pmd_qdma_dev_cmptq_setup() API when queue is in
+ *		memory mapped mode.
+ */
+int rte_pmd_qdma_dev_cmptq_start(int port_id, uint32_t qid)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qdma_dev->q_info[qid].queue_mode !=
+			RTE_PMD_QDMA_MEMORY_MAPPED_MODE) {
+		PMD_DRV_LOG(ERR, "Qid %d is not configured in MM-mode\n", qid);
+		return -EINVAL;
+	}
+
+	if (qid >= qdma_dev->qsets_en) {
+		PMD_DRV_LOG(ERR, "Invalid Queue id passed for %s,"
+				"Queue ID(MM-mode) = %d\n", __func__,
+				qid);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->is_vf)
+		return qdma_vf_cmptq_context_write(dev, qid);
+	else
+		return qdma_pf_cmptq_context_write(dev, qid);
+}
+
+static int qdma_pf_cmptq_context_invalidate(struct rte_eth_dev *dev,
+		uint32_t qid)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t sz, i = 0;
+	struct qdma_descq_cmpt_ctxt q_cmpt_ctxt;
+
+	cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+	qdma_dev->hw_access->qdma_cmpt_ctx_conf(dev,
+			(qid + qdma_dev->queue_base),
+			&q_cmpt_ctxt, QDMA_HW_ACCESS_INVALIDATE);
+
+	/* Zero the cmpt-ring entries */
+	sz = cmptq->cmpt_desc_len;
+	for (i = 0; i < (sz * cmptq->nb_cmpt_desc); i++)
+		((volatile char *)cmptq->cmpt_ring)[i] = 0;
+
+	cmptq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int qdma_vf_cmptq_context_invalidate(struct rte_eth_dev *dev,
+						uint32_t qid)
+{
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t qid_hw;
+	int rv;
+
+	if (!m)
+		return -ENOMEM;
+
+	qid_hw = qdma_dev->queue_base + qid;
+	qdma_mbox_compose_vf_qctxt_invalidate(qdma_dev->func_id, qid_hw,
+					      0, 0, QDMA_MBOX_CMPT_CTXT_ONLY,
+					      m->raw_data);
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		if (rv != -ENODEV)
+			PMD_DRV_LOG(INFO, "%x, qid_hw 0x%x mbox failed %d.\n",
+				    qdma_dev->func_id, qid_hw, rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_response_status(m->raw_data);
+
+	cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+	cmptq->status = RTE_ETH_QUEUE_STATE_STOPPED;
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+/**
+ * Function Name:   rte_pmd_qdma_dev_cmptq_stop
+ * Description:     Stop the MM completion queue.
+ *
+ * @param   port_id : Port ID.
+ * @param   qid : Queue ID.
+ *
+ * @return  '0' on success and '< 0' on failure.
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_pmd_qdma_dev_cmptq_start() API when queue is in
+ *		memory mapped mode.
+ */
+int rte_pmd_qdma_dev_cmptq_stop(int port_id, uint32_t qid)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qdma_dev->q_info[qid].queue_mode !=
+			RTE_PMD_QDMA_MEMORY_MAPPED_MODE) {
+		PMD_DRV_LOG(ERR, "Qid %d is not configured in MM-mode\n", qid);
+		return -EINVAL;
+	}
+
+	if (qdma_dev->is_vf)
+		return qdma_vf_cmptq_context_invalidate(dev, qid);
+	else
+		return qdma_pf_cmptq_context_invalidate(dev, qid);
+}
+
+/**
+ * Function Name:   rte_pmd_qdma_mm_cmpt_process
+ * Description:     Process the MM Completion queue entries.
+ *
+ * @param   port_id : Port ID.
+ * @param   qid : Queue ID.
+ * @param   cmpt_buff : User buffer pointer to store the completion data.
+ * @param   nb_entries: Number of completion entries to process.
+ *
+ * @return  'number of entries processed' on success and '< 0' on failure.
+ *
+ * @note    Application can call this API after successful call to
+ *	    rte_pmd_qdma_dev_cmptq_start() API.
+ */
+
+uint16_t rte_pmd_qdma_mm_cmpt_process(int port_id, uint32_t qid,
+					void *cmpt_buff, uint16_t nb_entries)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_cmpt_queue *cmptq;
+	uint32_t count = 0;
+	struct qdma_ul_cmpt_ring *cmpt_entry;
+	struct wb_status *wb_status;
+	uint16_t nb_entries_avail = 0;
+	uint16_t cmpt_tail = 0;
+	uint16_t cmpt_pidx;
+	int ret = 0;
+
+	ret = validate_qdma_dev_info(port_id, qid);
+	if (ret != QDMA_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			"QDMA device validation failed for port id %d\n",
+			port_id);
+		return ret;
+	}
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	if (qdma_dev->q_info[qid].queue_mode !=
+			RTE_PMD_QDMA_MEMORY_MAPPED_MODE) {
+		PMD_DRV_LOG(ERR, "Qid %d is not configured in MM-mode\n", qid);
+		return -EINVAL;
+	}
+
+	cmptq = (struct qdma_cmpt_queue *)qdma_dev->cmpt_queues[qid];
+
+	if (cmpt_buff == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid user buffer pointer from user");
+		return 0;
+	}
+
+	wb_status = cmptq->wb_status;
+	cmpt_tail = cmptq->cmpt_cidx_info.wrb_cidx;
+	cmpt_pidx = wb_status->pidx;
+
+	if (cmpt_tail < cmpt_pidx)
+		nb_entries_avail = cmpt_pidx - cmpt_tail;
+	else if (cmpt_tail > cmpt_pidx)
+		nb_entries_avail = cmptq->nb_cmpt_desc - 1 - cmpt_tail +
+			cmpt_pidx;
+
+	if (nb_entries_avail == 0) {
+		PMD_DRV_LOG(DEBUG, "%s(): %d: nb_entries_avail = 0\n",
+				__func__, __LINE__);
+		return 0;
+	}
+
+	if (nb_entries > nb_entries_avail)
+		nb_entries = nb_entries_avail;
+
+	while (count < nb_entries) {
+		cmpt_entry =
+		(struct qdma_ul_cmpt_ring *)((uint64_t)cmptq->cmpt_ring +
+		((uint64_t)cmpt_tail * cmptq->cmpt_desc_len));
+
+		ret = qdma_ul_process_immediate_data(cmpt_entry,
+				cmptq->cmpt_desc_len, cmpt_buff);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Error detected on CMPT ring at "
+					"index %d, queue_id = %d\n",
+					cmpt_tail,
+					cmptq->queue_id);
+			return 0;
+		}
+		cmpt_tail++;
+		if (unlikely(cmpt_tail >= (cmptq->nb_cmpt_desc - 1)))
+			cmpt_tail -= (cmptq->nb_cmpt_desc - 1);
+		count++;
+	}
+
+	/* Update the CPMT CIDX */
+	cmptq->cmpt_cidx_info.wrb_cidx = cmpt_tail;
+	qdma_dev->hw_access->qdma_queue_cmpt_cidx_update(cmptq->dev,
+			qdma_dev->is_vf,
+			cmptq->queue_id,
+			&cmptq->cmpt_cidx_info);
+	return count;
+}
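
For reference, the exported MM completion-queue APIs above are meant to be
driven in a fixed order. A minimal usage sketch (port_id/qid, the ring size
and the buffer size are placeholder values; error handling omitted):

	/* the queue must be switched to memory mapped mode first */
	rte_pmd_qdma_set_queue_mode(port_id, qid,
			RTE_PMD_QDMA_MEMORY_MAPPED_MODE);
	rte_pmd_qdma_dev_cmptq_setup(port_id, qid, 256, SOCKET_ID_ANY);
	rte_pmd_qdma_dev_cmptq_start(port_id, qid);

	uint8_t cmpt_buff[64];	/* assumed >= configured cmpt_desc_len */
	uint16_t done = rte_pmd_qdma_mm_cmpt_process(port_id, qid,
			cmpt_buff, 1);

	rte_pmd_qdma_dev_cmptq_stop(port_id, qid);
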
diff --git a/drivers/net/qdma/rte_pmd_qdma.h b/drivers/net/qdma/rte_pmd_qdma.h
index d09ec4a715..a004ffe1ab 100644
--- a/drivers/net/qdma/rte_pmd_qdma.h
+++ b/drivers/net/qdma/rte_pmd_qdma.h
@@ -258,6 +258,431 @@ struct rte_pmd_qdma_dev_attributes {
 
 #define DMA_BRAM_SIZE 524288
 
+/**
+ * Returns queue base for given port
+ *
+ * @param	port_id Port ID
+ * @param	queue_base Queue base
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note    Application can call this API only after successful
+ *          call to rte_eth_dev_configure() API
+ * @ingroup rte_pmd_qdma_func
+ */
+int rte_pmd_qdma_get_queue_base(int port_id, uint32_t *queue_base);
+
+/**
+ * Retrieves PCIe function type i.e. PF or VF
+ *
+ * @param	port_id Port ID
+ * @param	func_type Indicates PCIe function type
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note    Returns the PCIe function type i.e. PF or VF of the given port
+ * @ingroup rte_pmd_qdma_func
+ */
+int rte_pmd_qdma_get_pci_func_type(int port_id,
+		enum rte_pmd_qdma_pci_func_type *func_type);
+
+/**
+ * Retrieve the device capabilities
+ *
+ * @param   port_id Port ID
+ * @param   dev_attr Pointer to the device capabilities structure
+ *
+ * @return  '0' on success and '< 0' on failure
+ *
+ * @note	None.
+ * @ingroup rte_pmd_qdma_func
+ */
+int rte_pmd_qdma_get_device_capabilities(int port_id,
+			struct rte_pmd_qdma_dev_attributes *dev_attr);
+
+/**
+ * Sets queue interface mode for the specified queue
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	mode Queue interface mode to be set
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before
+ *		rte_eth_tx_queue_setup()/rte_eth_rx_queue_setup() API.
+ *		By default, all queues are setup in streaming mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+int rte_pmd_qdma_set_queue_mode(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_queue_mode_t mode);
+
+/**
+ * Sets the PCIe endpoint memory offset at which to
+ * perform DMA operation for the specified queue operating
+ * in memory mapped mode.
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	dir Direction i.e. Tx or Rx
+ * @param	addr Destination address for Tx, Source address for Rx
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	This API can be called before Tx/Rx burst API's
+ *		(rte_eth_tx_burst() and rte_eth_rx_burst()) are called.
+ * @ingroup rte_pmd_qdma_func
+ */
+int rte_pmd_qdma_set_mm_endpoint_addr(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_dir_type dir, uint32_t addr);
+
+/**
+ * Returns the BAR indices of the QDMA BARs
+ *
+ * @param	port_id Port ID
+ * @param	config_bar_idx Config BAR index
+ * @param	user_bar_idx   AXI Master Lite BAR (user bar) index
+ * @param	bypass_bar_idx AXI Bridge Master BAR (bypass bar) index
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_get_bar_details(int port_id, int32_t *config_bar_idx,
+			int32_t *user_bar_idx, int32_t *bypass_bar_idx);
+
+/**
+ * Returns immediate data state	i.e. whether enabled or disabled,
+ * for the specified queue
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	state Pointer to the state specifying whether
+ *			immediate data is enabled or not
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this function after
+ *		rte_eth_rx_queue_setup() is called.
+ *		API is applicable for streaming queues only.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_get_immediate_data_state(int port_id, uint32_t qid,
+			int *state);
+
+/**
+ * Dumps the QDMA configuration registers for the given port
+ *
+ * @param	port_id Port ID
+ *
+ * @return	'0' on success and "< 0" on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dbg_regdump(uint8_t port_id);
+
+/**
+ * Dumps the QDMA register field information for a given register offset
+ *
+ * @param	port_id Port ID
+ * @param	reg_addr Register Address
+ *
+ * @return	'0' on success and "< 0" on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dbg_reg_info_dump(uint8_t port_id,
+			uint32_t num_regs, uint32_t reg_addr);
+
+/**
+ * Dumps the device specific SW structure for the given port
+ *
+ * @param	port_id Port ID
+ *
+ * @return	'0' on success and "< 0" on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dbg_qdevice(uint8_t port_id);
+
+/**
+ * Dumps the queue contexts and queue specific SW
+ * structures for the given queue ID
+ *
+ * @param	port_id Port ID
+ * @param	queue  Queue ID relative to the Port
+ *
+ * @return	'0' on success and "< 0" on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dbg_qinfo(uint8_t port_id, uint16_t queue);
+
+/**
+ * Dumps the Queue descriptors
+ *
+ * @param	port_id Port ID
+ * @param	queue  Queue ID relative to the Port
+ * @param	start  Start index of the descriptor to dump
+ * @param	end    End index of the descriptor to dump
+ * @param	type   Descriptor type
+ *
+ * @return	'0' on success and "< 0" on failure
+ *
+ * @note	None
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dbg_qdesc(uint8_t port_id, uint16_t queue, int start,
+			int end, enum rte_pmd_qdma_xdebug_desc_type type);
+
+/**
+ * Sets immediate data state i.e. enable or disable, for the specified queue.
+ * If enabled, the user defined data in the completion
+ * ring are dumped in to a queue specific file
+ * "q_<qid>_immmediate_data.txt" in the local directory.
+ *
+ *@param	port_id Port ID
+ *@param	qid  Queue ID
+ *@param	state Immediate data state to be set.
+ *			Set '0' to disable and '1' to enable.
+ *
+ *@return	'0' on success and '< 0' on failure
+ *
+ *@note		Application can call this API after successful
+ *		call to rte_eth_rx_queue_setup() API.
+ *		This API is applicable for streaming queues only.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_immediate_data_state(int port_id, uint32_t qid,
+			uint8_t state);
+
+/**
+ * Enables or disables the overflow check (whether PIDX is overflowing
+ * the CIDX) performed by QDMA on the completion descriptor ring of specified
+ * queue.
+ *
+ * @param	port_id Port ID
+ * @param	qid	   Queue ID
+ * @param	enable '1' to enable and '0' to disable the overflow check
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_rx_queue_setup() API, but before calling
+ *		rte_eth_rx_queue_start() or rte_eth_dev_start() API.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_cmpt_overflow_check(int port_id, uint32_t qid,
+			uint8_t enable);
+
+/**
+ * Configures the completion ring descriptor size
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	size Descriptor size to be configured
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before rte_eth_rx_queue_setup() API
+ *		when queue is in streaming mode, and before
+ *		rte_pmd_qdma_dev_cmptq_setup when queue is in
+ *		memory mapped mode.
+ *		By default, the completion descriptor size is set to 8 bytes.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_cmpt_descriptor_size(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_cmpt_desc_len size);
+
+/**
+ * Configures the trigger mode for completion ring CIDX updates
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	mode Trigger mode to be configured
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API before calling
+ *		rte_eth_rx_queue_start() or rte_eth_dev_start() API.
+ *		By default, trigger mode is set to
+ *		RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_cmpt_trigger_mode(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_tigger_mode_t mode);
+
+/**
+ * Configures the timer interval in microseconds to trigger
+ * the completion ring CIDX updates
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	value Timer interval for completion trigger to be configured
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API before calling
+ *		rte_eth_rx_queue_start() or rte_eth_dev_start() API.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_cmpt_timer(int port_id, uint32_t qid, uint32_t value);
+
+/**
+ * Enables or disables prefetch of the descriptors by prefetch engine
+ *
+ *@param	port_id Port ID
+ *@param	qid     Queue ID
+ *@param	enable  '1' to enable and '0' to disable the descriptor prefetch
+ *
+ *@return	'0' on success and '< 0' on failure
+ *
+ *@note		Application can call this API after successful call to
+ *		rte_eth_rx_queue_setup() API, but before calling
+ *		rte_eth_rx_queue_start() or rte_eth_dev_start() API.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_set_c2h_descriptor_prefetch(int port_id, uint32_t qid,
+			uint8_t enable);
+
+/**
+ * Sets the Tx bypass mode and bypass descriptor size for the specified queue
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	bypass_mode Bypass mode to be set
+ * @param	size Bypass descriptor size to be set
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before tx_setup() API.
+ *		By default, all queues are configured in internal mode
+ *		i.e. bypass disabled.
+ *		If size is specified zero, then the bypass descriptor size is
+ *		set to the one used in internal mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_configure_tx_bypass(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_tx_bypass_mode bypass_mode,
+			enum rte_pmd_qdma_bypass_desc_len size);
+
+/**
+ * Sets the Rx bypass mode and bypass descriptor size for the specified queue
+ *
+ * @param	port_id Port ID
+ * @param	qid  Queue ID
+ * @param	bypass_mode Bypass mode to be set
+ * @param	size Bypass descriptor size to be set
+ *
+ * @return	'0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() but before rte_eth_rx_queue_setup() API.
+ *		By default, all queues are configured in internal mode
+ *		i.e. bypass disabled.
+ *		If size is specified zero, then the bypass descriptor size is
+ *		set to the one used in internal mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_configure_rx_bypass(int port_id, uint32_t qid,
+			enum rte_pmd_qdma_rx_bypass_mode bypass_mode,
+			enum rte_pmd_qdma_bypass_desc_len size);
+
+/**
+ * Allocate and set up a completion queue for memory mapped mode
+ *
+ * @param   port_id Port ID
+ * @param   cmpt_queue_id Completion queue ID
+ * @param   nb_cmpt_desc Completion queue ring size
+ * @param   socket_id    The socket_id argument is the socket identifier
+ *			in case of NUMA. Its value can be SOCKET_ID_ANY
+ *			if there is no NUMA constraint for the DMA memory
+ *			allocated for the completion ring descriptors.
+ *
+ * @return  '0' on success and '< 0' on failure
+ *
+ * @note	Application can call this API after successful call to
+ *		rte_eth_dev_configure() and rte_pmd_qdma_set_queue_mode()
+ *		for queues in memory mapped mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dev_cmptq_setup(int port_id, uint32_t cmpt_queue_id,
+					uint16_t nb_cmpt_desc,
+					unsigned int socket_id);
+
+/**
+ * Start the MM completion queue
+ *
+ * @param   port_id Port ID
+ * @param   qid Queue ID
+ *
+ * @return  '0' on success and '< 0' on failure
+ *
+ * @note    Application can call this API after successful call to
+ *          rte_pmd_qdma_dev_cmptq_setup() API when queue is in
+ *          memory mapped mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dev_cmptq_start(int port_id, uint32_t qid);
+
+/**
+ * Stop the MM completion queue
+ *
+ * @param   port_id Port ID
+ * @param   qid  Queue ID
+ *
+ * @return  '0' on success and '< 0' on failure
+ *
+ * @note    Application can call this API after successful call to
+ *          rte_pmd_qdma_dev_cmptq_start() API when queue is in
+ *          memory mapped mode.
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+int rte_pmd_qdma_dev_cmptq_stop(int port_id, uint32_t qid);
+
+/**
+ * Process the MM Completion queue entries
+ *
+ * @param   port_id Port ID
+ * @param   qid  Queue ID
+ * @param   cmpt_buff  User buffer pointer to store the completion data
+ * @param   nb_entries Number of completion entries to process
+ *
+ * @return  'number of entries processed' on success and '< 0' on failure
+ *
+ * @note    Application can call this API after successful call to
+ *	    rte_pmd_qdma_dev_cmptq_start() API
+ * @ingroup rte_pmd_qdma_func
+ */
+__rte_experimental
+uint16_t rte_pmd_qdma_mm_cmpt_process(int port_id, uint32_t qid,
+		void *cmpt_buff, uint16_t nb_entries);
+
 #ifdef __cplusplus
 }
 #endif
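
Taken together, the notes above tie each completion-ring knob to a point in
the ethdev bring-up sequence. A hedged sketch for a streaming Rx queue (the
RTE_PMD_QDMA_CMPT_DESC_LEN_8B spelling is assumed from the 8-byte default
noted above; other identifiers are placeholders):

	rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	/* after configure, before rx_queue_setup */
	rte_pmd_qdma_set_cmpt_descriptor_size(port_id, qid,
			RTE_PMD_QDMA_CMPT_DESC_LEN_8B);
	rte_eth_rx_queue_setup(port_id, qid, nb_desc, socket_id,
			&rx_conf, mb_pool);
	/* after rx_queue_setup, before queue/dev start */
	rte_pmd_qdma_set_cmpt_overflow_check(port_id, qid, 1);
	rte_pmd_qdma_set_c2h_descriptor_prefetch(port_id, qid, 1);
	rte_pmd_qdma_set_cmpt_trigger_mode(port_id, qid,
			RTE_PMD_QDMA_TRIG_MODE_USER_TIMER_COUNT);
	rte_eth_dev_start(port_id);
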
diff --git a/drivers/net/qdma/version.map b/drivers/net/qdma/version.map
index c2e0723b4c..c9caeb7715 100644
--- a/drivers/net/qdma/version.map
+++ b/drivers/net/qdma/version.map
@@ -1,3 +1,38 @@
 DPDK_22 {
+	global:
+
+	rte_pmd_qdma_get_queue_base;
+	rte_pmd_qdma_get_pci_func_type;
+	rte_pmd_qdma_get_device_capabilities;
+	rte_pmd_qdma_set_queue_mode;
+	rte_pmd_qdma_set_mm_endpoint_addr;
+
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pmd_qdma_get_bar_details;
+	rte_pmd_qdma_get_immediate_data_state;
+
+	rte_pmd_qdma_dbg_qdesc;
+	rte_pmd_qdma_dbg_regdump;
+	rte_pmd_qdma_dbg_reg_info_dump;
+	rte_pmd_qdma_dbg_qinfo;
+	rte_pmd_qdma_dbg_qdevice;
+
+	rte_pmd_qdma_set_immediate_data_state;
+	rte_pmd_qdma_set_cmpt_overflow_check;
+	rte_pmd_qdma_set_cmpt_descriptor_size;
+	rte_pmd_qdma_set_cmpt_trigger_mode;
+	rte_pmd_qdma_set_cmpt_timer;
+	rte_pmd_qdma_set_c2h_descriptor_prefetch;
+
+	rte_pmd_qdma_configure_tx_bypass;
+	rte_pmd_qdma_configure_rx_bypass;
+	rte_pmd_qdma_dev_cmptq_setup;
+	rte_pmd_qdma_dev_cmptq_start;
+	rte_pmd_qdma_dev_cmptq_stop;
+	rte_pmd_qdma_mm_cmpt_process;
+};
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 28/29] net/qdma: add additional debug APIs
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (26 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 27/29] net/qdma: add device specific APIs for export Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-06  7:52 ` [RFC PATCH 29/29] net/qdma: add stats PMD ops for PF and VF Aman Kumar
  2022-07-07  6:57 ` [RFC PATCH 00/29] cover letter for net/qdma PMD Thomas Monjalon
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements functions to debug and dump
PMD/QDMA-specific data.
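
A hedged example of how an application might drive these hooks
(port_id/queue values are placeholders):

	rte_pmd_qdma_dbg_regdump(port_id);
	rte_pmd_qdma_dbg_qdevice(port_id);
	rte_pmd_qdma_dbg_qinfo(port_id, queue);
	/* hexdump the first 16 C2H ring descriptors */
	rte_pmd_qdma_dbg_qdesc(port_id, queue, 0, 16,
			RTE_PMD_QDMA_XDEBUG_DESC_C2H);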

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/meson.build   |    1 +
 drivers/net/qdma/qdma_xdebug.c | 1072 ++++++++++++++++++++++++++++++++
 2 files changed, 1073 insertions(+)
 create mode 100644 drivers/net/qdma/qdma_xdebug.c

diff --git a/drivers/net/qdma/meson.build b/drivers/net/qdma/meson.build
index 1dc4392666..7ba2c89442 100644
--- a/drivers/net/qdma/meson.build
+++ b/drivers/net/qdma/meson.build
@@ -29,6 +29,7 @@ sources = files(
         'qdma_user.c',
         'qdma_rxtx.c',
         'qdma_vf_ethdev.c',
+        'qdma_xdebug.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
         'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
         'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
diff --git a/drivers/net/qdma/qdma_xdebug.c b/drivers/net/qdma/qdma_xdebug.c
new file mode 100644
index 0000000000..43f8380465
--- /dev/null
+++ b/drivers/net/qdma/qdma_xdebug.c
@@ -0,0 +1,1072 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
+ * Copyright(c) 2022 VVDN Technologies Private Limited. All rights reserved.
+ */
+
+#include <stdint.h>
+#include <sys/mman.h>
+#include <sys/fcntl.h>
+#include <rte_memzone.h>
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+#include <string.h>
+#include <rte_hexdump.h>
+
+#include "qdma.h"
+#include "rte_pmd_qdma.h"
+#include "qdma_access_common.h"
+#include "qdma_reg_dump.h"
+#include "qdma_mbox_protocol.h"
+#include "qdma_mbox.h"
+
+#define xdebug_info(args...) rte_log(RTE_LOG_INFO, RTE_LOGTYPE_USER1,\
+					## args)
+#define xdebug_error(args...) rte_log(RTE_LOG_ERR, RTE_LOGTYPE_USER1,\
+					## args)
+
+struct xdebug_desc_param {
+	uint16_t queue;
+	int start;
+	int end;
+	enum rte_pmd_qdma_xdebug_desc_type type;
+};
+
+const char *qdma_desc_eng_mode_info[QDMA_DESC_ENG_MODE_MAX] = {
+	"Internal and Bypass mode",
+	"Bypass only mode",
+	"Internal only mode"
+};
+
+static void print_header(const char *str)
+{
+	xdebug_info("\n\n%s\n\n", str);
+}
+
+static int qdma_h2c_struct_dump(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_tx_queue *tx_q;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+
+	/* validate the queue index before indexing into tx_queues[] */
+	if (queue >= dev->data->nb_tx_queues) {
+		xdebug_info("TX queue_id=%d not configured\n", queue);
+		return -EINVAL;
+	}
+
+	tx_q = (struct qdma_tx_queue *)dev->data->tx_queues[queue];
+
+	if (tx_q) {
+		print_header("***********TX Queue struct************");
+		xdebug_info("\t\t wb_pidx             :%x\n",
+				tx_q->wb_status->pidx);
+		xdebug_info("\t\t wb_cidx             :%x\n",
+				tx_q->wb_status->cidx);
+		xdebug_info("\t\t h2c_pidx            :%x\n",
+				tx_q->q_pidx_info.pidx);
+		xdebug_info("\t\t tx_fl_tail          :%x\n",
+				tx_q->tx_fl_tail);
+		xdebug_info("\t\t tx_desc_pend        :%x\n",
+				tx_q->tx_desc_pend);
+		xdebug_info("\t\t nb_tx_desc          :%x\n",
+				tx_q->nb_tx_desc);
+		xdebug_info("\t\t st_mode             :%x\n",
+				tx_q->st_mode);
+		xdebug_info("\t\t tx_deferred_start   :%x\n",
+				tx_q->tx_deferred_start);
+		xdebug_info("\t\t en_bypass           :%x\n",
+				tx_q->en_bypass);
+		xdebug_info("\t\t bypass_desc_sz      :%x\n",
+				tx_q->bypass_desc_sz);
+		xdebug_info("\t\t func_id             :%x\n",
+				tx_q->func_id);
+		xdebug_info("\t\t port_id             :%x\n",
+				tx_q->port_id);
+		xdebug_info("\t\t ringszidx           :%x\n",
+				tx_q->ringszidx);
+	}
+
+	return 0;
+}
+
+static int qdma_c2h_struct_dump(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_rx_queue *rx_q;
+	enum qdma_ip_type ip_type;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+
+	/* validate the queue index before indexing into rx_queues[] */
+	if (queue >= dev->data->nb_rx_queues) {
+		xdebug_info("RX queue_id=%d not configured\n", queue);
+		return -EINVAL;
+	}
+
+	rx_q = (struct qdma_rx_queue *)dev->data->rx_queues[queue];
+
+	if (rx_q) {
+		print_header(" ***********RX Queue struct********** ");
+		xdebug_info("\t\t wb_pidx             :%x\n",
+				rx_q->wb_status->pidx);
+		xdebug_info("\t\t wb_cidx             :%x\n",
+				rx_q->wb_status->cidx);
+		xdebug_info("\t\t rx_tail (ST)        :%x\n",
+				rx_q->rx_tail);
+		xdebug_info("\t\t c2h_pidx            :%x\n",
+				rx_q->q_pidx_info.pidx);
+		xdebug_info("\t\t rx_cmpt_cidx        :%x\n",
+				rx_q->cmpt_cidx_info.wrb_cidx);
+		xdebug_info("\t\t cmpt_desc_len       :%x\n",
+				rx_q->cmpt_desc_len);
+		xdebug_info("\t\t rx_buff_size        :%x\n",
+				rx_q->rx_buff_size);
+		xdebug_info("\t\t nb_rx_desc          :%x\n",
+				rx_q->nb_rx_desc);
+		xdebug_info("\t\t nb_rx_cmpt_desc     :%x\n",
+				rx_q->nb_rx_cmpt_desc);
+		xdebug_info("\t\t ep_addr             :%x\n",
+				rx_q->ep_addr);
+		xdebug_info("\t\t st_mode             :%x\n",
+				rx_q->st_mode);
+		xdebug_info("\t\t rx_deferred_start   :%x\n",
+				rx_q->rx_deferred_start);
+		xdebug_info("\t\t en_prefetch         :%x\n",
+				rx_q->en_prefetch);
+		xdebug_info("\t\t en_bypass           :%x\n",
+				rx_q->en_bypass);
+		xdebug_info("\t\t dump_immediate_data :%x\n",
+				rx_q->dump_immediate_data);
+		xdebug_info("\t\t en_bypass_prefetch  :%x\n",
+				rx_q->en_bypass_prefetch);
+
+		if (ip_type != QDMA_VERSAL_HARD_IP)
+			xdebug_info("\t\t dis_overflow_check  :%x\n",
+				rx_q->dis_overflow_check);
+
+		xdebug_info("\t\t bypass_desc_sz      :%x\n",
+				rx_q->bypass_desc_sz);
+		xdebug_info("\t\t ringszidx           :%x\n",
+				rx_q->ringszidx);
+		xdebug_info("\t\t cmpt_ringszidx      :%x\n",
+				rx_q->cmpt_ringszidx);
+		xdebug_info("\t\t buffszidx           :%x\n",
+				rx_q->buffszidx);
+		xdebug_info("\t\t threshidx           :%x\n",
+				rx_q->threshidx);
+		xdebug_info("\t\t timeridx            :%x\n",
+				rx_q->timeridx);
+		xdebug_info("\t\t triggermode         :%x\n",
+				rx_q->triggermode);
+	}
+
+	return 0;
+}
+
+static int qdma_config_read_reg_list(struct rte_eth_dev *dev,
+			uint16_t group_num,
+			uint16_t *num_regs, struct qdma_reg_data *reg_list)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	int rv;
+
+	if (!m)
+		return -ENOMEM;
+
+	qdma_mbox_compose_reg_read(qdma_dev->func_id, group_num, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		if (rv != -ENODEV)
+			xdebug_error("reg read mbox failed with error = %d\n",
+				rv);
+		goto err_out;
+	}
+
+	rv = qdma_mbox_vf_reg_list_get(m->raw_data, num_regs, reg_list);
+	if (rv < 0) {
+		xdebug_error("qdma_mbox_vf_reg_list_get failed with error = %d\n",
+			rv);
+		goto err_out;
+	}
+
+err_out:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_config_reg_dump(uint8_t port_id)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	enum qdma_ip_type ip_type;
+	char *buf = NULL;
+	int buflen;
+	int ret;
+	struct qdma_reg_data *reg_list;
+	uint16_t num_regs = 0, group_num = 0;
+	int len = 0, rcv_len = 0, reg_len = 0;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+
+	if (qdma_dev->is_vf) {
+		reg_len = (QDMA_MAX_REGISTER_DUMP *
+						sizeof(struct qdma_reg_data));
+		reg_list = (struct qdma_reg_data *)rte_zmalloc("QDMA_DUMP_REG_VF",
+				reg_len, RTE_CACHE_LINE_SIZE);
+		if (!reg_list) {
+			xdebug_error("Unable to allocate memory for VF dump for reglist "
+					"size %d\n", reg_len);
+			return -ENOMEM;
+		}
+
+		ret = qdma_acc_reg_dump_buf_len(dev, ip_type, &buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to get register dump buffer length\n");
+			return ret;
+		}
+		/* allocate memory for register dump */
+		buf = (char *)rte_zmalloc("QDMA_DUMP_BUF_VF", buflen,
+				RTE_CACHE_LINE_SIZE);
+		if (!buf) {
+			xdebug_error("Unable to allocate memory for reg dump "
+					"size %d\n", buflen);
+			rte_free(reg_list);
+			return -ENOMEM;
+		}
+		xdebug_info("FPGA Config Registers for port_id: %d\n--------\n",
+			port_id);
+		xdebug_info(" Offset       Name    "
+				"                                    Value(Hex) Value(Dec)\n");
+
+		for (group_num = 0; group_num < QDMA_REG_READ_GROUP_3;
+				group_num++) {
+			/* Reset the reg_list with 0's */
+			memset(reg_list, 0, (QDMA_MAX_REGISTER_DUMP *
+					sizeof(struct qdma_reg_data)));
+			ret = qdma_config_read_reg_list(dev,
+						group_num, &num_regs, reg_list);
+			if (ret < 0) {
+				xdebug_error("Failed to read config registers "
+					"size %d, err = %d\n", buflen, ret);
+				rte_free(reg_list);
+				rte_free(buf);
+				return ret;
+			}
+
+			rcv_len = qdma_acc_dump_config_reg_list(dev,
+				ip_type, num_regs,
+				reg_list, buf + len, buflen - len);
+			if (rcv_len < 0) {
+				xdebug_error("Failed to dump config regs "
+					"size %d, err = %d\n", buflen, rcv_len);
+				rte_free(reg_list);
+				rte_free(buf);
+				return rcv_len;
+			}
+			len += rcv_len;
+		}
+		xdebug_info("%s\n", buf);
+		rte_free(reg_list);
+		rte_free(buf);
+	} else {
+		ret = qdma_acc_reg_dump_buf_len(dev,
+			ip_type, &buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to get register dump buffer length\n");
+			return ret;
+		}
+
+		/* allocate memory for register dump */
+		buf = (char *)rte_zmalloc("QDMA_REG_DUMP", buflen,
+					RTE_CACHE_LINE_SIZE);
+		if (!buf) {
+			xdebug_error("Unable to allocate memory for reg dump "
+					"size %d\n", buflen);
+			return -ENOMEM;
+		}
+		xdebug_info("FPGA Config Registers for port_id: %d\n--------\n",
+			port_id);
+		xdebug_info(" Offset       Name    "
+				"                                    Value(Hex) Value(Dec)\n");
+
+		ret = qdma_acc_dump_config_regs(dev, qdma_dev->is_vf,
+			ip_type, buf, buflen);
+		if (ret < 0) {
+			xdebug_error
+			("Insufficient space to dump Config Bar register values\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+		xdebug_info("%s\n", buf);
+		rte_free(buf);
+	}
+
+	return 0;
+}
+
+static int qdma_device_dump(uint8_t port_id)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+
+	xdebug_info("\n*** QDMA Device struct for port_id: %d ***\n\n",
+		port_id);
+
+	xdebug_info("\t\t config BAR index         :%x\n",
+			qdma_dev->config_bar_idx);
+	xdebug_info("\t\t AXI Master Lite BAR index           :%x\n",
+			qdma_dev->user_bar_idx);
+	xdebug_info("\t\t AXI Bridge Master BAR index         :%x\n",
+			qdma_dev->bypass_bar_idx);
+	xdebug_info("\t\t qsets enable             :%x\n",
+			qdma_dev->qsets_en);
+	xdebug_info("\t\t queue base               :%x\n",
+			qdma_dev->queue_base);
+	xdebug_info("\t\t pf                       :%x\n",
+			qdma_dev->func_id);
+	xdebug_info("\t\t cmpt desc length         :%x\n",
+			qdma_dev->cmpt_desc_len);
+	xdebug_info("\t\t c2h bypass mode          :%x\n",
+			qdma_dev->c2h_bypass_mode);
+	xdebug_info("\t\t h2c bypass mode          :%x\n",
+			qdma_dev->h2c_bypass_mode);
+	xdebug_info("\t\t trigger mode             :%x\n",
+			qdma_dev->trigger_mode);
+	xdebug_info("\t\t timer count              :%x\n",
+			qdma_dev->timer_count);
+	xdebug_info("\t\t is vf                    :%x\n",
+			qdma_dev->is_vf);
+	xdebug_info("\t\t is master                :%x\n",
+			qdma_dev->is_master);
+	xdebug_info("\t\t enable desc prefetch     :%x\n",
+			qdma_dev->en_desc_prefetch);
+	xdebug_info("\t\t ip type                  :%x\n",
+			qdma_dev->ip_type);
+	xdebug_info("\t\t vivado release           :%x\n",
+			qdma_dev->vivado_rel);
+	xdebug_info("\t\t rtl version              :%x\n",
+			qdma_dev->rtl_version);
+	xdebug_info("\t\t is queue conigured       :%x\n",
+		qdma_dev->init_q_range);
+
+	xdebug_info("\n\t ***** Device Capabilities *****\n");
+	xdebug_info("\t\t number of PFs            :%x\n",
+			qdma_dev->dev_cap.num_pfs);
+	xdebug_info("\t\t number of Queues         :%x\n",
+			qdma_dev->dev_cap.num_qs);
+	xdebug_info("\t\t FLR present              :%x\n",
+			qdma_dev->dev_cap.flr_present);
+	xdebug_info("\t\t ST mode enable           :%x\n",
+			qdma_dev->dev_cap.st_en);
+	xdebug_info("\t\t MM mode enable           :%x\n",
+			qdma_dev->dev_cap.mm_en);
+	xdebug_info("\t\t MM with compt enable     :%x\n",
+			qdma_dev->dev_cap.mm_cmpt_en);
+	xdebug_info("\t\t Mailbox enable           :%x\n",
+			qdma_dev->dev_cap.mailbox_en);
+	xdebug_info("\t\t Num of MM channels       :%x\n",
+			qdma_dev->dev_cap.mm_channel_max);
+	xdebug_info("\t\t Descriptor engine mode   :%s\n",
+		qdma_desc_eng_mode_info[qdma_dev->dev_cap.desc_eng_mode]);
+	xdebug_info("\t\t Debug mode enable        :%x\n",
+			qdma_dev->dev_cap.debug_mode);
+
+	return 0;
+}
+
+static int qdma_descq_context_read_vf(struct rte_eth_dev *dev,
+	unsigned int qid_hw, bool st_mode,
+	enum qdma_dev_q_type q_type,
+	struct qdma_descq_context *context)
+{
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	struct qdma_mbox_msg *m = qdma_mbox_msg_alloc();
+	enum mbox_cmpt_ctxt_type cmpt_ctxt_type = QDMA_MBOX_CMPT_CTXT_NONE;
+	int rv;
+
+	if (!m)
+		return -ENOMEM;
+
+	if (!st_mode) {
+		if (q_type == QDMA_DEV_Q_TYPE_CMPT)
+			cmpt_ctxt_type = QDMA_MBOX_CMPT_CTXT_ONLY;
+		else
+			cmpt_ctxt_type = QDMA_MBOX_CMPT_CTXT_NONE;
+	} else {
+		if (q_type == QDMA_DEV_Q_TYPE_C2H)
+			cmpt_ctxt_type = QDMA_MBOX_CMPT_WITH_ST;
+	}
+
+	qdma_mbox_compose_vf_qctxt_read(qdma_dev->func_id,
+		qid_hw, st_mode, q_type, cmpt_ctxt_type, m->raw_data);
+
+	rv = qdma_mbox_msg_send(dev, m, MBOX_OP_RSP_TIMEOUT);
+	if (rv < 0) {
+		xdebug_error("%x, qid_hw 0x%x, mbox failed for vf q context %d.\n",
+			qdma_dev->func_id, qid_hw, rv);
+		goto free_msg;
+	}
+
+	rv = qdma_mbox_vf_context_get(m->raw_data, context);
+	if (rv < 0) {
+		xdebug_error("%x, failed to get vf queue context info %d.\n",
+				qdma_dev->func_id, rv);
+		goto free_msg;
+	}
+
+free_msg:
+	qdma_mbox_msg_free(m);
+	return rv;
+}
+
+static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_descq_context queue_context;
+	enum qdma_dev_q_type q_type;
+	enum qdma_ip_type ip_type;
+	uint16_t qid;
+	uint8_t st_mode;
+	char *buf = NULL;
+	uint32_t buflen = 0;
+	int ret = 0;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+
+	/* validate the queue index before reading q_info[] */
+	if (queue >= dev->data->nb_rx_queues) {
+		xdebug_info("RX queue_id=%d not configured\n", queue);
+		return -EINVAL;
+	}
+
+	qid = qdma_dev->queue_base + queue;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+	st_mode = qdma_dev->q_info[qid].queue_mode;
+	q_type = QDMA_DEV_Q_TYPE_C2H;
+
+	xdebug_info("\n ***** C2H Queue Contexts on port_id: %d for q_id: %d *****\n",
+		port_id, qid);
+
+	ret = qdma_acc_context_buf_len(dev, ip_type, st_mode,
+			q_type, &buflen);
+	if (ret < 0) {
+		xdebug_error("Failed to get context buffer length,\n");
+		return ret;
+	}
+
+	/* allocate memory for csr dump */
+	buf = (char *)rte_zmalloc("QDMA_C2H_CONTEXT_DUMP",
+				buflen, RTE_CACHE_LINE_SIZE);
+	if (!buf) {
+		xdebug_error("Unable to allocate memory for c2h context dump "
+				"size %d\n", buflen);
+		return -ENOMEM;
+	}
+
+	if (qdma_dev->is_vf) {
+		ret = qdma_descq_context_read_vf(dev, qid,
+				st_mode, q_type, &queue_context);
+		if (ret < 0) {
+			xdebug_error("Failed to read c2h queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+
+		ret = qdma_acc_dump_queue_context(dev, ip_type,
+			st_mode, q_type, &queue_context, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to dump c2h queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	} else {
+		ret = qdma_acc_read_dump_queue_context(dev, ip_type,
+			qid, st_mode, q_type, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to read and dump c2h queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	}
+
+	xdebug_info("%s\n", buf);
+	rte_free(buf);
+
+	return 0;
+}
+
+static int qdma_h2c_context_dump(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_descq_context queue_context;
+	enum qdma_dev_q_type q_type;
+	enum qdma_ip_type ip_type;
+	uint32_t buflen = 0;
+	uint16_t qid;
+	uint8_t st_mode;
+	char *buf = NULL;
+	int ret = 0;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+
+	/* validate the queue index before reading q_info[] */
+	if (queue >= dev->data->nb_tx_queues) {
+		xdebug_info("TX queue_id=%d not configured\n", queue);
+		return -EINVAL;
+	}
+
+	qid = qdma_dev->queue_base + queue;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+	st_mode = qdma_dev->q_info[qid].queue_mode;
+	q_type = QDMA_DEV_Q_TYPE_H2C;
+
+	xdebug_info("\n ***** H2C Queue Contexts on port_id: %d for q_id: %d *****\n",
+		port_id, qid);
+
+	ret = qdma_acc_context_buf_len(dev, ip_type, st_mode,
+			q_type, &buflen);
+	if (ret < 0) {
+		xdebug_error("Failed to get context buffer length,\n");
+		return ret;
+	}
+
+	/* allocate memory for csr dump */
+	buf = (char *)rte_zmalloc("QDMA_H2C_CONTEXT_DUMP",
+			buflen, RTE_CACHE_LINE_SIZE);
+	if (!buf) {
+		xdebug_error("Unable to allocate memory for h2c context dump "
+				"size %d\n", buflen);
+		return -ENOMEM;
+	}
+
+	if (qdma_dev->is_vf) {
+		ret = qdma_descq_context_read_vf(dev, qid,
+				st_mode, q_type, &queue_context);
+		if (ret < 0) {
+			xdebug_error("Failed to read h2c queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+
+		ret = qdma_acc_dump_queue_context(dev, ip_type,
+			st_mode, q_type, &queue_context, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to dump h2c queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	} else {
+		ret = qdma_acc_read_dump_queue_context(dev, ip_type,
+				qid, st_mode, q_type, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to read and dump h2c queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	}
+
+	xdebug_info("%s\n", buf);
+	rte_free(buf);
+
+	return 0;
+}
+
+static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	struct qdma_descq_context queue_context;
+	enum qdma_dev_q_type q_type;
+	enum qdma_ip_type ip_type;
+	uint32_t buflen;
+	uint16_t qid;
+	uint8_t st_mode;
+	char *buf = NULL;
+	int ret = 0;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+
+	/* validate the queue index before reading q_info[] */
+	if (queue >= dev->data->nb_rx_queues) {
+		xdebug_info("RX queue_id=%d not configured\n", queue);
+		return -EINVAL;
+	}
+
+	qid = qdma_dev->queue_base + queue;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+	st_mode = qdma_dev->q_info[qid].queue_mode;
+	q_type = QDMA_DEV_Q_TYPE_CMPT;
+
+	xdebug_info("\n ***** CMPT Queue Contexts on port_id: %d for q_id: %d *****\n",
+		port_id, qid);
+
+	ret = qdma_acc_context_buf_len(dev, ip_type,
+			st_mode, q_type, &buflen);
+	if (ret < 0) {
+		xdebug_error("Failed to get context buffer length\n");
+		return ret;
+	}
+
+	/* allocate memory for csr dump */
+	buf = (char *)rte_zmalloc("QDMA_CMPT_CONTEXT_DUMP",
+			buflen, RTE_CACHE_LINE_SIZE);
+	if (!buf) {
+		xdebug_error("Unable to allocate memory for cmpt context dump "
+				"size %d\n", buflen);
+		return -ENOMEM;
+	}
+
+	if (qdma_dev->is_vf) {
+		ret = qdma_descq_context_read_vf(dev, qid,
+			st_mode, q_type, &queue_context);
+		if (ret < 0) {
+			xdebug_error("Failed to read cmpt queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+
+		ret = qdma_acc_dump_queue_context(dev, ip_type,
+			st_mode, q_type,
+			&queue_context, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to dump cmpt queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	} else {
+		ret = qdma_acc_read_dump_queue_context(dev,
+			ip_type, qid, st_mode,
+			q_type, buf, buflen);
+		if (ret < 0) {
+			xdebug_error("Failed to read and dump cmpt queue context\n");
+			rte_free(buf);
+			return qdma_get_error_code(ret);
+		}
+	}
+
+	xdebug_info("%s\n", buf);
+	rte_free(buf);
+
+	return 0;
+}
+
+static int qdma_queue_desc_dump(uint8_t port_id,
+		struct xdebug_desc_param *param)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_rx_queue *rxq;
+	struct qdma_tx_queue *txq;
+	uint8_t *rx_ring_bypass = NULL;
+	uint8_t *tx_ring_bypass = NULL;
+	char str[50];
+	int x;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+
+	switch (param->type) {
+	case RTE_PMD_QDMA_XDEBUG_DESC_C2H:
+
+		if (param->queue >= dev->data->nb_rx_queues) {
+			xdebug_info("queue_id=%d not configured",
+					param->queue);
+			return -1;
+		}
+
+		rxq = (struct qdma_rx_queue *)
+			dev->data->rx_queues[param->queue];
+
+		if (rxq == NULL) {
+			xdebug_info("Caught NULL pointer for queue_id: %d\n",
+				param->queue);
+			return -1;
+		}
+
+		if (rxq->status != RTE_ETH_QUEUE_STATE_STARTED) {
+			xdebug_info("Queue_id %d is not yet started\n",
+				param->queue);
+			return -1;
+		}
+
+		if (param->start < 0 || param->start > rxq->nb_rx_desc)
+			param->start = 0;
+		if (param->end <= param->start ||
+				param->end > rxq->nb_rx_desc)
+			param->end = rxq->nb_rx_desc;
+
+		if (rxq->en_bypass && rxq->bypass_desc_sz != 0) {
+			rx_ring_bypass = (uint8_t *)rxq->rx_ring;
+
+			xdebug_info("\n===== C2H bypass descriptors=====\n");
+			for (x = param->start; x < param->end; x++) {
+				uint8_t *rx_bypass =
+					&rx_ring_bypass[x * rxq->bypass_desc_sz];
+				snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+				rte_hexdump(stdout, str,
+					(const void *)rx_bypass,
+					rxq->bypass_desc_sz);
+			}
+		} else {
+			if (rxq->st_mode) {
+				struct qdma_ul_st_c2h_desc *rx_ring_st =
+				(struct qdma_ul_st_c2h_desc *)rxq->rx_ring;
+
+				xdebug_info("\n===== C2H ring descriptors=====\n");
+				for (x = param->start; x < param->end; x++) {
+					struct qdma_ul_st_c2h_desc *rx_st =
+						&rx_ring_st[x];
+					snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+					rte_hexdump(stdout, str,
+					(const void *)rx_st,
+					sizeof(struct qdma_ul_st_c2h_desc));
+				}
+			} else {
+				struct qdma_ul_mm_desc *rx_ring_mm =
+					(struct qdma_ul_mm_desc *)rxq->rx_ring;
+				xdebug_info("\n====== C2H ring descriptors======\n");
+				for (x = param->start; x < param->end; x++) {
+					snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+					rte_hexdump(stdout, str,
+						(const void *)&rx_ring_mm[x],
+						sizeof(struct qdma_ul_mm_desc));
+				}
+			}
+		}
+		break;
+	case RTE_PMD_QDMA_XDEBUG_DESC_CMPT:
+
+		if (param->queue >= dev->data->nb_rx_queues) {
+			xdebug_info("queue_id=%d not configured",
+					param->queue);
+			return -1;
+		}
+
+		rxq = (struct qdma_rx_queue *)
+			dev->data->rx_queues[param->queue];
+
+		if (rxq) {
+			if (rxq->status != RTE_ETH_QUEUE_STATE_STARTED) {
+				xdebug_info("Queue_id %d is not yet started\n",
+						param->queue);
+				return -1;
+			}
+
+			if (param->start < 0 ||
+					param->start > rxq->nb_rx_cmpt_desc)
+				param->start = 0;
+			if (param->end <= param->start ||
+					param->end > rxq->nb_rx_cmpt_desc)
+				param->end = rxq->nb_rx_cmpt_desc;
+
+			if (!rxq->st_mode) {
+				xdebug_info("Queue_id %d is not initialized "
+					"in Stream mode\n", param->queue);
+				return -1;
+			}
+
+			xdebug_info("\n===== CMPT ring descriptors=====\n");
+			for (x = param->start; x < param->end; x++) {
+				uint32_t *cmpt_ring = (uint32_t *)
+					((uint64_t)(rxq->cmpt_ring) +
+					((uint64_t)x * rxq->cmpt_desc_len));
+				snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+				rte_hexdump(stdout, str,
+						(const void *)cmpt_ring,
+						rxq->cmpt_desc_len);
+			}
+		}
+		break;
+	case RTE_PMD_QDMA_XDEBUG_DESC_H2C:
+
+		if (param->queue >= dev->data->nb_tx_queues) {
+			xdebug_info("queue_id=%d not configured",
+				param->queue);
+			return -1;
+		}
+
+		txq = (struct qdma_tx_queue *)
+			dev->data->tx_queues[param->queue];
+
+		if (txq == NULL) {
+			xdebug_info("Caught NULL pointer for queue_id: %d\n",
+				param->queue);
+			return -1;
+		}
+
+		if (txq->status != RTE_ETH_QUEUE_STATE_STARTED) {
+			xdebug_info("Queue_id %d is not yet started\n",
+				param->queue);
+			return -1;
+		}
+
+		if (param->start < 0 || param->start > txq->nb_tx_desc)
+			param->start = 0;
+		if (param->end <= param->start ||
+				param->end > txq->nb_tx_desc)
+			param->end = txq->nb_tx_desc;
+
+		if (txq->en_bypass && txq->bypass_desc_sz != 0) {
+			tx_ring_bypass = (uint8_t *)txq->tx_ring;
+
+			xdebug_info("\n====== H2C bypass descriptors=====\n");
+			for (x = param->start; x < param->end; x++) {
+				uint8_t *tx_bypass =
+					&tx_ring_bypass[x * txq->bypass_desc_sz];
+				snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+				rte_hexdump(stdout, str,
+					(const void *)tx_bypass,
+					txq->bypass_desc_sz);
+			}
+		} else {
+			if (txq->st_mode) {
+				struct qdma_ul_st_h2c_desc *qdma_h2c_ring =
+				(struct qdma_ul_st_h2c_desc *)txq->tx_ring;
+				xdebug_info("\n====== H2C ring descriptors=====\n");
+				for (x = param->start; x < param->end; x++) {
+					snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+					rte_hexdump(stdout, str,
+					(const void *)&qdma_h2c_ring[x],
+					sizeof(struct qdma_ul_st_h2c_desc));
+				}
+			} else {
+				struct qdma_ul_mm_desc *tx_ring_mm =
+					(struct qdma_ul_mm_desc *)txq->tx_ring;
+				xdebug_info("\n===== H2C ring descriptors=====\n");
+				for (x = param->start; x < param->end; x++) {
+					snprintf(str, sizeof(str),
+						"\nDescriptor ID %d\t", x);
+					rte_hexdump(stdout, str,
+						(const void *)&tx_ring_mm[x],
+						sizeof(struct qdma_ul_mm_desc));
+				}
+			}
+		}
+		break;
+	default:
+		xdebug_info("Invalid ring selected\n");
+		break;
+	}
+	return 0;
+}
+
+int rte_pmd_qdma_dbg_regdump(uint8_t port_id)
+{
+	int err;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	err = qdma_config_reg_dump(port_id);
+	if (err) {
+		xdebug_error("Error dumping Global registers\n");
+		return err;
+	}
+	return 0;
+}
+
+int rte_pmd_qdma_dbg_reg_info_dump(uint8_t port_id,
+	uint32_t num_regs, uint32_t reg_addr)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	enum qdma_ip_type ip_type;
+	char *buf = NULL;
+	int buflen = QDMA_MAX_BUFLEN;
+	int ret;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
+
+	/* allocate memory for register dump */
+	buf = (char *)rte_zmalloc("QDMA_DUMP_BUF_REG_INFO", buflen,
+			RTE_CACHE_LINE_SIZE);
+	if (!buf) {
+		xdebug_error("Unable to allocate memory for reg info dump "
+				"size %d\n", buflen);
+		return -ENOMEM;
+	}
+
+	ret = qdma_acc_dump_reg_info(dev, ip_type,
+		reg_addr, num_regs, buf, buflen);
+	if (ret < 0) {
+		xdebug_error("Failed to dump reg field values\n");
+		rte_free(buf);
+		return qdma_get_error_code(ret);
+	}
+	xdebug_info("%s\n", buf);
+	rte_free(buf);
+
+	return 0;
+}
+
+int rte_pmd_qdma_dbg_qdevice(uint8_t port_id)
+{
+	int err;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	err = qdma_device_dump(port_id);
+	if (err) {
+		xdebug_error("Error dumping QDMA device\n");
+		return err;
+	}
+	return 0;
+}
+
+int rte_pmd_qdma_dbg_qinfo(uint8_t port_id, uint16_t queue)
+{
+	struct rte_eth_dev *dev;
+	struct qdma_pci_dev *qdma_dev;
+	uint16_t qid;
+	uint8_t st_mode;
+	int err;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	qdma_dev = dev->data->dev_private;
+	qid = qdma_dev->queue_base + queue;
+	st_mode = qdma_dev->q_info[qid].queue_mode;
+
+	err = qdma_h2c_context_dump(port_id, queue);
+	if (err) {
+		xdebug_error("Error dumping %d: %d\n",
+				queue, err);
+		return err;
+	}
+
+	err = qdma_c2h_context_dump(port_id, queue);
+	if (err) {
+		xdebug_error("Error dumping %d: %d\n",
+				queue, err);
+		return err;
+	}
+
+	if (!st_mode && qdma_dev->dev_cap.mm_cmpt_en) {
+		err = qdma_cmpt_context_dump(port_id, queue);
+		if (err) {
+			xdebug_error("Error dumping %d: %d\n",
+					queue, err);
+			return err;
+		}
+	}
+
+	err = qdma_h2c_struct_dump(port_id, queue);
+	if (err) {
+		xdebug_error("Error dumping %d: %d\n",
+				queue, err);
+		return err;
+	}
+
+	err = qdma_c2h_struct_dump(port_id, queue);
+	if (err) {
+		xdebug_error("Error dumping %d: %d\n",
+				queue, err);
+		return err;
+	}
+
+	return 0;
+}
+
+int rte_pmd_qdma_dbg_qdesc(uint8_t port_id, uint16_t queue, int start,
+		int end, enum rte_pmd_qdma_xdebug_desc_type type)
+{
+	struct xdebug_desc_param param;
+	int err;
+
+	if (port_id >= rte_eth_dev_count_avail()) {
+		xdebug_error("Wrong port id %d\n", port_id);
+		return -EINVAL;
+	}
+
+	param.queue = queue;
+	param.start = start;
+	param.end = end;
+	param.type = type;
+
+	err = qdma_queue_desc_dump(port_id, &param);
+	if (err) {
+		xdebug_error("Error dumping %d: %d\n",
+			queue, err);
+		return err;
+	}
+	return 0;
+}
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC PATCH 29/29] net/qdma: add stats PMD ops for PF and VF
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (27 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 28/29] net/qdma: add additional debug APIs Aman Kumar
@ 2022-07-06  7:52 ` Aman Kumar
  2022-07-07  6:57 ` [RFC PATCH 00/29] cover letter for net/qdma PMD Thomas Monjalon
  29 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-06  7:52 UTC (permalink / raw)
  To: dev; +Cc: maxime.coquelin, david.marchand, aman.kumar

This patch implements PMD ops related to stats
for both PF and VF functions.
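
With these ops wired in, the standard ethdev stats path applies; a short
sketch (the queue-to-counter mapping call is optional, and this series
hooks it up for the PF ops only):

	struct rte_eth_stats stats;

	/* map Rx queue 0 onto per-queue counter slot 0 */
	rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0);
	rte_eth_stats_get(port_id, &stats);
	printf("rx: %" PRIu64 " pkts, %" PRIu64 " bytes\n",
			stats.ipackets, stats.ibytes);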

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/qdma/qdma.h           |   3 +-
 drivers/net/qdma/qdma_devops.c    | 114 +++++++++++++++++++++++++++---
 drivers/net/qdma/qdma_rxtx.c      |   4 ++
 drivers/net/qdma/qdma_vf_ethdev.c |   1 +
 4 files changed, 111 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qdma/qdma.h b/drivers/net/qdma/qdma.h
index d9239f34a7..4c86d0702a 100644
--- a/drivers/net/qdma/qdma.h
+++ b/drivers/net/qdma/qdma.h
@@ -149,6 +149,7 @@ struct qdma_rx_queue {
 	uint64_t		mbuf_initializer; /* value to init mbufs */
 	struct qdma_q_pidx_reg_info	q_pidx_info;
 	struct qdma_q_cmpt_cidx_reg_info cmpt_cidx_info;
+	struct qdma_pkt_stats   stats;
 	uint16_t		port_id; /* Device port identifier. */
 	uint8_t			status:1;
 	uint8_t			err:1;
@@ -212,9 +213,7 @@ struct qdma_tx_queue {
 	uint16_t			port_id; /* Device port identifier. */
 	uint8_t				func_id; /* RX queue index. */
 	int8_t				ringszidx;
-
 	struct qdma_pkt_stats		stats;
-
 	uint64_t			ep_addr;
 	uint32_t			queue_id; /* TX queue index. */
 	uint32_t			num_queues; /* TX queue index. */
diff --git a/drivers/net/qdma/qdma_devops.c b/drivers/net/qdma/qdma_devops.c
index e6803dd86f..f0b7291e8c 100644
--- a/drivers/net/qdma/qdma_devops.c
+++ b/drivers/net/qdma/qdma_devops.c
@@ -1745,9 +1745,40 @@ int
 qdma_dev_get_regs(struct rte_eth_dev *dev,
 	      struct rte_dev_reg_info *regs)
 {
-	(void)dev;
-	(void)regs;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+	uint32_t *data = regs->data;
+	uint32_t reg_length = 0;
+	int ret = 0;
+
+	ret = qdma_acc_get_num_config_regs(dev,
+			(enum qdma_ip_type)qdma_dev->ip_type,
+			&reg_length);
+	if (ret < 0 || reg_length == 0) {
+		PMD_DRV_LOG(ERR, "%s: Failed to get number of config registers\n",
+						__func__);
+		return ret;
+	}
 
+	if (data == NULL) {
+		regs->length = reg_length - 1;
+		regs->width = sizeof(uint32_t);
+		return 0;
+	}
+
+	/* Support only full register dump */
+	if (regs->length == 0 || regs->length == (reg_length - 1)) {
+		regs->version = 1;
+		ret = qdma_acc_get_config_regs(dev, qdma_dev->is_vf,
+		(enum qdma_ip_type)qdma_dev->ip_type, data);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "%s: Failed to get config registers\n",
+						__func__);
+		}
+		return ret;
+	}
+
+	PMD_DRV_LOG(ERR, "%s: Unsupported length (0x%x) requested\n",
+						__func__, regs->length);
 	return -ENOTSUP;
 }
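
With this in place the standard two-call register-dump flow works; a short
sketch (error handling omitted):

	struct rte_dev_reg_info info = { .data = NULL };

	rte_eth_dev_get_reg_info(port_id, &info);	/* fills length/width */
	info.data = calloc(info.length, info.width);
	rte_eth_dev_get_reg_info(port_id, &info);	/* full register dump */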
 
@@ -1773,11 +1804,30 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
 				 uint8_t stat_idx,
 				 uint8_t is_rx)
 {
-	(void)dev;
-	(void)queue_id;
-	(void)stat_idx;
-	(void)is_rx;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
+
+	if (is_rx && queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "%s: Invalid Rx qid %d\n",
+					__func__, queue_id);
+		return -EINVAL;
+	}
+
+	if (!is_rx && queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "%s: Invalid Tx qid %d\n",
+					__func__, queue_id);
+		return -EINVAL;
+	}
 
+	if (stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+		PMD_DRV_LOG(ERR, "%s: Invalid stats index %d\n",
+					__func__, stat_idx);
+		return -EINVAL;
+	}
+
+	if (is_rx)
+		qdma_dev->rx_qid_statid_map[stat_idx] = queue_id;
+	else
+		qdma_dev->tx_qid_statid_map[stat_idx] = queue_id;
 	return 0;
 }
 
@@ -1795,9 +1845,42 @@ int qdma_dev_queue_stats_mapping(struct rte_eth_dev *dev,
 int qdma_dev_stats_get(struct rte_eth_dev *dev,
 			      struct rte_eth_stats *eth_stats)
 {
-	(void)dev;
-	(void)eth_stats;
+	uint32_t i;
+	int qid;
+	struct qdma_rx_queue *rxq;
+	struct qdma_tx_queue *txq;
+	struct qdma_pci_dev *qdma_dev = dev->data->dev_private;
 
+	memset(eth_stats, 0, sizeof(struct rte_eth_stats));
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = (struct qdma_rx_queue *)dev->data->rx_queues[i];
+		eth_stats->ipackets += rxq->stats.pkts;
+		eth_stats->ibytes += rxq->stats.bytes;
+	}
+
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		qid = qdma_dev->rx_qid_statid_map[i];
+		if (qid >= 0) {
+			rxq = (struct qdma_rx_queue *)dev->data->rx_queues[qid];
+			eth_stats->q_ipackets[i] = rxq->stats.pkts;
+			eth_stats->q_ibytes[i] = rxq->stats.bytes;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = (struct qdma_tx_queue *)dev->data->tx_queues[i];
+		eth_stats->opackets += txq->stats.pkts;
+		eth_stats->obytes   += txq->stats.bytes;
+	}
+
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		qid = qdma_dev->tx_qid_statid_map[i];
+		if (qid >= 0) {
+			txq = (struct qdma_tx_queue *)dev->data->tx_queues[qid];
+			eth_stats->q_opackets[i] = txq->stats.pkts;
+			eth_stats->q_obytes[i] = txq->stats.bytes;
+		}
+	}
 	return 0;
 }
 
@@ -1810,8 +1893,21 @@ int qdma_dev_stats_get(struct rte_eth_dev *dev,
  */
 int qdma_dev_stats_reset(struct rte_eth_dev *dev)
 {
-	(void)dev;
+	uint32_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct qdma_rx_queue *rxq =
+			(struct qdma_rx_queue *)dev->data->rx_queues[i];
+		rxq->stats.pkts = 0;
+		rxq->stats.bytes = 0;
+	}
 
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct qdma_tx_queue *txq =
+			(struct qdma_tx_queue *)dev->data->tx_queues[i];
+		txq->stats.pkts = 0;
+		txq->stats.bytes = 0;
+	}
 	return 0;
 }
 
diff --git a/drivers/net/qdma/qdma_rxtx.c b/drivers/net/qdma/qdma_rxtx.c
index 7652f35dd2..8a4caa465b 100644
--- a/drivers/net/qdma/qdma_rxtx.c
+++ b/drivers/net/qdma/qdma_rxtx.c
@@ -708,6 +708,8 @@ struct rte_mbuf *prepare_single_packet(struct qdma_rx_queue *rxq,
 	pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[cmpt_idx]);
 
 	if (pkt_length) {
+		rxq->stats.pkts++;
+		rxq->stats.bytes += pkt_length;
 		if (likely(pkt_length <= rxq->rx_buff_size)) {
 			mb = rxq->sw_ring[id];
 			rxq->sw_ring[id++] = NULL;
@@ -870,6 +872,8 @@ static uint16_t prepare_packets(struct qdma_rx_queue *rxq,
 	while (count < nb_pkts) {
 		pkt_length = qdma_ul_get_cmpt_pkt_len(&rxq->cmpt_data[count]);
 		if (pkt_length) {
+			rxq->stats.pkts++;
+			rxq->stats.bytes += pkt_length;
 			mb = prepare_segmented_packet(rxq,
 					pkt_length, &rxq->rx_tail);
 			rx_pkts[count_pkts++] = mb;
diff --git a/drivers/net/qdma/qdma_vf_ethdev.c b/drivers/net/qdma/qdma_vf_ethdev.c
index cbae4c9716..50529340b5 100644
--- a/drivers/net/qdma/qdma_vf_ethdev.c
+++ b/drivers/net/qdma/qdma_vf_ethdev.c
@@ -796,6 +796,7 @@ static struct eth_dev_ops qdma_vf_eth_dev_ops = {
 	.rx_queue_stop        = qdma_vf_dev_rx_queue_stop,
 	.tx_queue_start       = qdma_vf_dev_tx_queue_start,
 	.tx_queue_stop        = qdma_vf_dev_tx_queue_stop,
+	.stats_get            = qdma_dev_stats_get,
 };
 
 /**
-- 
2.36.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 04/29] net/qdma: add logging support
  2022-07-06  7:51 ` [RFC PATCH 04/29] net/qdma: add logging support Aman Kumar
@ 2022-07-06 15:27   ` Stephen Hemminger
  2022-07-07  2:32     ` Aman Kumar
  0 siblings, 1 reply; 43+ messages in thread
From: Stephen Hemminger @ 2022-07-06 15:27 UTC (permalink / raw)
  To: Aman Kumar; +Cc: dev, maxime.coquelin, david.marchand

On Wed,  6 Jul 2022 13:21:54 +0530
Aman Kumar <aman.kumar@vvdntech.in> wrote:

> +extern int qdma_logtype_pmd;
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, qdma_logtype_pmd, "%s(): " \
> +		fmt "\n", __func__, ## args)

Did you test this? It looks like your log messages are going
to end up double spaced, because many of the later calls put a
newline on the message. Example:

+	PMD_DRV_LOG(INFO, "QDMA devargs desc_prefetch is: %s\n", value);
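
i.e. with the macro already appending "\n", one way to keep single spacing
is to drop the newline at the call sites (a sketch):

	PMD_DRV_LOG(INFO, "QDMA devargs desc_prefetch is: %s", value);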

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 05/29] net/qdma: add device init and uninit functions
  2022-07-06  7:51 ` [RFC PATCH 05/29] net/qdma: add device init and uninit functions Aman Kumar
@ 2022-07-06 15:35   ` Stephen Hemminger
  2022-07-07  2:41     ` Aman Kumar
  0 siblings, 1 reply; 43+ messages in thread
From: Stephen Hemminger @ 2022-07-06 15:35 UTC (permalink / raw)
  To: Aman Kumar; +Cc: dev, maxime.coquelin, david.marchand

On Wed,  6 Jul 2022 13:21:55 +0530
Aman Kumar <aman.kumar@vvdntech.in> wrote:

> +/* parse a sysfs file containing one integer value */
> +static int parse_sysfs_value(const char *filename, uint32_t *val)
> +{
> +	FILE *f;
> +	char buf[BUFSIZ];
> +	char *end = NULL;
> +
> +	f = fopen(filename, "r");
> +	if (f == NULL) {
> +		PMD_DRV_LOG(ERR, "%s(): Failed to open sysfs file %s\n",
> +				__func__, filename);
> +		return -1;
> +	}
> +
> +	if (fgets(buf, sizeof(buf), f) == NULL) {
> +		PMD_DRV_LOG(ERR, "%s(): Failed to read sysfs value %s\n",
> +			__func__, filename);
> +		fclose(f);
> +		return -1;
> +	}
> +	*val = (uint32_t)strtoul(buf, &end, 0);
> +	if ((buf[0] == '\0') || end == NULL || (*end != '\n')) {
> +		PMD_DRV_LOG(ERR, "%s(): Failed to parse sysfs value %s\n",
> +				__func__, filename);
> +		fclose(f);
> +		return -1;
> +	}
> +	fclose(f);
> +	return 0;
> +}

Why reinvent eal_parse_sysfs_value?

> +/* Split up a pci address into its constituent parts. */
> +static int parse_pci_addr_format(const char *buf,
> +		int bufsize, struct rte_pci_addr *addr)
> +{
> +	/* first split on ':' */
> +	union splitaddr {
> +		struct {
> +			char *domain;
> +			char *bus;
> +			char *devid;
> +			char *function;
> +		};
> +		/* last element-separator is "." not ":" */
> +		char *str[PCI_FMT_NVAL];
> +	} splitaddr;
> +
> +	char *buf_copy = strndup(buf, bufsize);
> +	if (buf_copy == NULL) {
> +		PMD_DRV_LOG(ERR, "Failed to get pci address duplicate copy\n");
> +		return -1;
> +	}
> +
> +	if (rte_strsplit(buf_copy, bufsize, splitaddr.str, PCI_FMT_NVAL, ':')
> +			!= PCI_FMT_NVAL - 1) {
> +		PMD_DRV_LOG(ERR, "Failed to split pci address string\n");
> +		goto error;
> +	}
> +
> +	/* final split is on '.' between devid and function */
> +	splitaddr.function = strchr(splitaddr.devid, '.');
> +	if (splitaddr.function == NULL) {
> +		PMD_DRV_LOG(ERR, "Failed to split pci devid and function\n");
> +		goto error;
> +	}
> +	*splitaddr.function++ = '\0';
> +
> +	/* now convert to int values */
> +	addr->domain = strtoul(splitaddr.domain, NULL, 16);
> +	addr->bus = strtoul(splitaddr.bus, NULL, 16);
> +	addr->devid = strtoul(splitaddr.devid, NULL, 16);
> +	addr->function = strtoul(splitaddr.function, NULL, 10);
> +
> +	free(buf_copy); /* free the copy made with strdup */
> +	return 0;
> +
> +error:
> +	free(buf_copy);
> +	return -1;
> +}

Looks like you didn't see rte_pci_addr_parse..
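
A sketch of the replacement, assuming the standard helper declared in
rte_pci.h (it parses both the "0000:3b:00.0" and "3b:00.0" forms):

	struct rte_pci_addr addr;

	/* skip directory entries that are not PCI addresses */
	if (rte_pci_addr_parse(dp->d_name, &addr) != 0)
		continue;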

> +/* Get max pci bus number from the corresponding pci bridge device */
> +static int get_max_pci_bus_num(uint8_t start_bus, uint8_t *end_bus)
> +{
> +	char dirname[PATH_MAX];
> +	char filename[PATH_MAX];
> +	char cfgname[PATH_MAX];
> +	struct rte_pci_addr addr;
> +	struct dirent *dp;
> +	uint32_t pci_class_code;
> +	uint8_t sec_bus_num, sub_bus_num;
> +	DIR *dir;
> +	int ret, fd;
> +
> +	/* Initialize end bus number to zero */
> +	*end_bus = 0;
> +
> +	/* Open pci devices directory */
> +	dir = opendir(rte_pci_get_sysfs_path());
> +	if (dir == NULL) {
> +		PMD_DRV_LOG(ERR, "%s(): opendir failed\n",
> +			__func__);
> +		return -1;
> +	}
> +
> +	while ((dp = readdir(dir)) != NULL) {
> +		if (dp->d_name[0] == '.')
> +			continue;
> +
> +		/* Split pci address to get bus, devid and function numbers */
> +		if (parse_pci_addr_format(dp->d_name,
> +				sizeof(dp->d_name), &addr) != 0)
> +			continue;
> +
> +		snprintf(dirname, sizeof(dirname), "%s/%s",
> +				rte_pci_get_sysfs_path(), dp->d_name);
> +
> +		/* get class code */
> +		snprintf(filename, sizeof(filename), "%s/class", dirname);
> +		if (parse_sysfs_value(filename, &pci_class_code) < 0) {
> +			PMD_DRV_LOG(ERR, "Failed to get pci class code\n");
> +			goto error;
> +		}
> +
> +		/* Get max pci number from pci bridge device */
> +		if ((((pci_class_code >> PCI_CONFIG_CLASS_CODE_SHIFT) & 0xFF) ==
> +				PCI_CONFIG_BRIDGE_DEVICE)) {
> +			snprintf(cfgname, sizeof(cfgname),
> +					"%s/config", dirname);
> +			fd = open(cfgname, O_RDWR);
> +			if (fd < 0) {
> +				PMD_DRV_LOG(ERR, "Failed to open %s\n",
> +					cfgname);
> +				goto error;
> +			}
> +
> +			/* get secondary bus number */
> +			ret = pread(fd, &sec_bus_num, sizeof(uint8_t),
> +						PCI_SECONDARY_BUS);
> +			if (ret == -1) {
> +				PMD_DRV_LOG(ERR, "Failed to read secondary bus number\n");
> +				close(fd);
> +				goto error;
> +			}
> +
> +			/* get subordinate bus number */
> +			ret = pread(fd, &sub_bus_num, sizeof(uint8_t),
> +						PCI_SUBORDINATE_BUS);
> +			if (ret == -1) {
> +				PMD_DRV_LOG(ERR, "Failed to read subordinate bus number\n");
> +				close(fd);
> +				goto error;
> +			}
> +
> +			/* Get max bus number by checking if given bus number
> +			 * falls in between secondary and subordinate bus
> +			 * numbers of this pci bridge device.
> +			 */
> +			if (start_bus >= sec_bus_num &&
> +			    start_bus <= sub_bus_num) {
> +				*end_bus = sub_bus_num;
> +				close(fd);
> +				closedir(dir);
> +				return 0;
> +			}
> +
> +			close(fd);
> +		}
> +	}
> +
> +error:
> +	closedir(dir);
> +	return -1;
> +}
> +
>  /**
>   * DPDK callback to register a PCI device.
>   *
> @@ -39,7 +232,12 @@ static struct rte_pci_id qdma_pci_id_tbl[] = {
>   */
>  static int qdma_eth_dev_init(struct rte_eth_dev *dev)
>  {
> +	struct qdma_pci_dev *dma_priv;
> +	uint8_t *baseaddr;
> +	int i, idx, ret;
>  	struct rte_pci_device *pci_dev;
> +	uint16_t num_vfs;
> +	uint8_t max_pci_bus = 0;
>  
>  	/* sanity checks */
>  	if (dev == NULL)
> @@ -59,6 +257,88 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
>  	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>  		return 0;
>  
> +	/* allocate space for a single Ethernet MAC address */
> +	dev->data->mac_addrs = rte_zmalloc("qdma", RTE_ETHER_ADDR_LEN * 1, 0);
> +	if (dev->data->mac_addrs == NULL)
> +		return -ENOMEM;
> +
> +	/* Copy some dummy Ethernet MAC address for QDMA device
> +	 * This will change in real NIC device...
> +	 * TODO: Read MAC from EEPROM
> +	 */
> +	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
> +		dev->data->mac_addrs[0].addr_bytes[i] = 0x15 + i;

If you don't have a real EEPROM to read, use rte_eth_random_addr() instead.
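
That suggestion is a one-liner, assuming the helper declared in
rte_ether.h (it generates a locally administered unicast address):

	rte_eth_random_addr(dev->data->mac_addrs[0].addr_bytes);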

> +
> +	/* Init system & device */
> +	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
> +	dma_priv->is_vf = 0;
> +	dma_priv->is_master = 0;
> +	dma_priv->vf_online_count = 0;
> +	dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_TIMER;
> +
> +	dma_priv->en_desc_prefetch = 0; /* Keep prefetch default to 0 */
> +	dma_priv->cmpt_desc_len = 0;
> +	dma_priv->c2h_bypass_mode = 0;
> +	dma_priv->h2c_bypass_mode = 0;
> +
> +	dma_priv->config_bar_idx = DEFAULT_PF_CONFIG_BAR;
> +	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
> +	dma_priv->user_bar_idx = BAR_ID_INVALID;
> +
> +	/* Check and handle device devargs */
> +	if (qdma_check_kvargs(dev->device->devargs, dma_priv)) {
> +		PMD_DRV_LOG(INFO, "devargs failed\n");
> +		rte_free(dev->data->mac_addrs);
> +		return -EINVAL;
> +	}
> +
> +	/* Store BAR address and length of Config BAR */
> +	baseaddr = (uint8_t *)
> +			pci_dev->mem_resource[dma_priv->config_bar_idx].addr;
> +	dma_priv->bar_addr[dma_priv->config_bar_idx] = baseaddr;
> +
> +	idx = qdma_identify_bars(dev);
> +	if (idx < 0) {
> +		rte_free(dev->data->mac_addrs);
> +		return -EINVAL;
> +	}
> +
> +	/* Store BAR address and length of AXI Master Lite BAR(user bar) */
> +	if (dma_priv->user_bar_idx >= 0) {
> +		baseaddr = (uint8_t *)
> +			    pci_dev->mem_resource[dma_priv->user_bar_idx].addr;
> +		dma_priv->bar_addr[dma_priv->user_bar_idx] = baseaddr;
> +	}
> +
> +	PMD_DRV_LOG(INFO, "QDMA device driver probe:");
> +
> +	ret = get_max_pci_bus_num(pci_dev->addr.bus, &max_pci_bus);
> +	if (ret != 0 && !max_pci_bus) {
> +		PMD_DRV_LOG(ERR, "Failed to get max pci bus number\n");
> +		rte_free(dev->data->mac_addrs);
> +		return -EINVAL;
> +	}
> +	PMD_DRV_LOG(INFO, "PCI max bus number : 0x%x", max_pci_bus);
> +
> +	if (!dma_priv->reset_in_progress) {
> +		num_vfs = pci_dev->max_vfs;
> +		if (num_vfs) {
> +			dma_priv->vfinfo = rte_zmalloc("vfinfo",
> +				sizeof(struct qdma_vf_info) * num_vfs, 0);
> +			if (dma_priv->vfinfo == NULL) {
> +				PMD_DRV_LOG(ERR, "Cannot allocate memory for private VF info\n");
> +				return -ENOMEM;
> +			}
> +
> +			/* Mark all VFs with invalid function id mapping*/
> +			for (i = 0; i < num_vfs; i++)
> +				dma_priv->vfinfo[i].func_id =
> +					QDMA_FUNC_ID_INVALID;
> +		}
> +	}
> +
> +	dma_priv->reset_in_progress = 0;
> +
>  	return 0;
>  }

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 04/29] net/qdma: add logging support
  2022-07-06 15:27   ` Stephen Hemminger
@ 2022-07-07  2:32     ` Aman Kumar
  0 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-07  2:32 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, maxime.coquelin, david.marchand

On 06/07/22 8:57 pm, Stephen Hemminger wrote:
> On Wed,  6 Jul 2022 13:21:54 +0530
> Aman Kumar <aman.kumar@vvdntech.in> wrote:
>
>> +extern int qdma_logtype_pmd;
>> +#define PMD_DRV_LOG(level, fmt, args...) \
>> +	rte_log(RTE_LOG_ ## level, qdma_logtype_pmd, "%s(): " \
>> +		fmt "\n", __func__, ## args)
> Did you test this? It looks like your log messages are going to end up
> double spaced, because many of the later calls put a newline on the
> message. Example:
>
> +	PMD_DRV_LOG(INFO, "QDMA devargs desc_prefetch is: %s\n", value);

The additional newlines did not bother us during feature testing. I'll
fix this in v2. Thanks for pointing this out.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 05/29] net/qdma: add device init and uninit functions
  2022-07-06 15:35   ` Stephen Hemminger
@ 2022-07-07  2:41     ` Aman Kumar
  0 siblings, 0 replies; 43+ messages in thread
From: Aman Kumar @ 2022-07-07  2:41 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, maxime.coquelin, david.marchand

On 06/07/22 9:05 pm, Stephen Hemminger wrote:
> On Wed,  6 Jul 2022 13:21:55 +0530
> Aman Kumar <aman.kumar@vvdntech.in> wrote:
>
>> +/* parse a sysfs file containing one integer value */
>> +static int parse_sysfs_value(const char *filename, uint32_t *val)
>> +{
>> +	FILE *f;
>> +	char buf[BUFSIZ];
>> +	char *end = NULL;
>> +
>> +	f = fopen(filename, "r");
>> +	if (f == NULL) {
>> +		PMD_DRV_LOG(ERR, "%s(): Failed to open sysfs file %s\n",
>> +				__func__, filename);
>> +		return -1;
>> +	}
>> +
>> +	if (fgets(buf, sizeof(buf), f) == NULL) {
>> +		PMD_DRV_LOG(ERR, "%s(): Failed to read sysfs value %s\n",
>> +			__func__, filename);
>> +		fclose(f);
>> +		return -1;
>> +	}
>> +	*val = (uint32_t)strtoul(buf, &end, 0);
>> +	if ((buf[0] == '\0') || end == NULL || (*end != '\n')) {
>> +		PMD_DRV_LOG(ERR, "%s(): Failed to parse sysfs value %s\n",
>> +				__func__, filename);
>> +		fclose(f);
>> +		return -1;
>> +	}
>> +	fclose(f);
>> +	return 0;
>> +}
> Why reinvent eal_parse_sysfs_value?
I'll rework this and adapt to existing DPDK APIs.
>
>> +/* Split up a pci address into its constituent parts. */
>> +static int parse_pci_addr_format(const char *buf,
>> +		int bufsize, struct rte_pci_addr *addr)
>> +{
>> +	/* first split on ':' */
>> +	union splitaddr {
>> +		struct {
>> +			char *domain;
>> +			char *bus;
>> +			char *devid;
>> +			char *function;
>> +		};
>> +		/* last element-separator is "." not ":" */
>> +		char *str[PCI_FMT_NVAL];
>> +	} splitaddr;
>> +
>> +	char *buf_copy = strndup(buf, bufsize);
>> +	if (buf_copy == NULL) {
>> +		PMD_DRV_LOG(ERR, "Failed to get pci address duplicate copy\n");
>> +		return -1;
>> +	}
>> +
>> +	if (rte_strsplit(buf_copy, bufsize, splitaddr.str, PCI_FMT_NVAL, ':')
>> +			!= PCI_FMT_NVAL - 1) {
>> +		PMD_DRV_LOG(ERR, "Failed to split pci address string\n");
>> +		goto error;
>> +	}
>> +
>> +	/* final split is on '.' between devid and function */
>> +	splitaddr.function = strchr(splitaddr.devid, '.');
>> +	if (splitaddr.function == NULL) {
>> +		PMD_DRV_LOG(ERR, "Failed to split pci devid and function\n");
>> +		goto error;
>> +	}
>> +	*splitaddr.function++ = '\0';
>> +
>> +	/* now convert to int values */
>> +	addr->domain = strtoul(splitaddr.domain, NULL, 16);
>> +	addr->bus = strtoul(splitaddr.bus, NULL, 16);
>> +	addr->devid = strtoul(splitaddr.devid, NULL, 16);
>> +	addr->function = strtoul(splitaddr.function, NULL, 10);
>> +
>> +	free(buf_copy); /* free the copy made with strdup */
>> +	return 0;
>> +
>> +error:
>> +	free(buf_copy);
>> +	return -1;
>> +}
> Looks like you didn't see rte_pci_addr_parse..
I'll rework this and adapt to existing DPDK APIs.
>> +/* Get max pci bus number from the corresponding pci bridge device */
>> +static int get_max_pci_bus_num(uint8_t start_bus, uint8_t *end_bus)
>> +{
>> +	char dirname[PATH_MAX];
>> +	char filename[PATH_MAX];
>> +	char cfgname[PATH_MAX];
>> +	struct rte_pci_addr addr;
>> +	struct dirent *dp;
>> +	uint32_t pci_class_code;
>> +	uint8_t sec_bus_num, sub_bus_num;
>> +	DIR *dir;
>> +	int ret, fd;
>> +
>> +	/* Initialize end bus number to zero */
>> +	*end_bus = 0;
>> +
>> +	/* Open pci devices directory */
>> +	dir = opendir(rte_pci_get_sysfs_path());
>> +	if (dir == NULL) {
>> +		PMD_DRV_LOG(ERR, "%s(): opendir failed\n",
>> +			__func__);
>> +		return -1;
>> +	}
>> +
>> +	while ((dp = readdir(dir)) != NULL) {
>> +		if (dp->d_name[0] == '.')
>> +			continue;
>> +
>> +		/* Split pci address to get bus, devid and function numbers */
>> +		if (parse_pci_addr_format(dp->d_name,
>> +				sizeof(dp->d_name), &addr) != 0)
>> +			continue;
>> +
>> +		snprintf(dirname, sizeof(dirname), "%s/%s",
>> +				rte_pci_get_sysfs_path(), dp->d_name);
>> +
>> +		/* get class code */
>> +		snprintf(filename, sizeof(filename), "%s/class", dirname);
>> +		if (parse_sysfs_value(filename, &pci_class_code) < 0) {
>> +			PMD_DRV_LOG(ERR, "Failed to get pci class code\n");
>> +			goto error;
>> +		}
>> +
>> +		/* Get max pci number from pci bridge device */
>> +		if ((((pci_class_code >> PCI_CONFIG_CLASS_CODE_SHIFT) & 0xFF) ==
>> +				PCI_CONFIG_BRIDGE_DEVICE)) {
>> +			snprintf(cfgname, sizeof(cfgname),
>> +					"%s/config", dirname);
>> +			fd = open(cfgname, O_RDWR);
>> +			if (fd < 0) {
>> +				PMD_DRV_LOG(ERR, "Failed to open %s\n",
>> +					cfgname);
>> +				goto error;
>> +			}
>> +
>> +			/* get secondary bus number */
>> +			ret = pread(fd, &sec_bus_num, sizeof(uint8_t),
>> +						PCI_SECONDARY_BUS);
>> +			if (ret == -1) {
>> +				PMD_DRV_LOG(ERR, "Failed to read secondary bus number\n");
>> +				close(fd);
>> +				goto error;
>> +			}
>> +
>> +			/* get subordinate bus number */
>> +			ret = pread(fd, &sub_bus_num, sizeof(uint8_t),
>> +						PCI_SUBORDINATE_BUS);
>> +			if (ret == -1) {
>> +				PMD_DRV_LOG(ERR, "Failed to read subordinate bus number\n");
>> +				close(fd);
>> +				goto error;
>> +			}
>> +
>> +			/* Get max bus number by checking if given bus number
>> +			 * falls in between secondary and subordinate bus
>> +			 * numbers of this pci bridge device.
>> +			 */
>> +			if (start_bus >= sec_bus_num &&
>> +			    start_bus <= sub_bus_num) {
>> +				*end_bus = sub_bus_num;
>> +				close(fd);
>> +				closedir(dir);
>> +				return 0;
>> +			}
>> +
>> +			close(fd);
>> +		}
>> +	}
>> +
>> +error:
>> +	closedir(dir);
>> +	return -1;
>> +}
>> +
>>   /**
>>    * DPDK callback to register a PCI device.
>>    *
>> @@ -39,7 +232,12 @@ static struct rte_pci_id qdma_pci_id_tbl[] = {
>>    */
>>   static int qdma_eth_dev_init(struct rte_eth_dev *dev)
>>   {
>> +	struct qdma_pci_dev *dma_priv;
>> +	uint8_t *baseaddr;
>> +	int i, idx, ret;
>>   	struct rte_pci_device *pci_dev;
>> +	uint16_t num_vfs;
>> +	uint8_t max_pci_bus = 0;
>>   
>>   	/* sanity checks */
>>   	if (dev == NULL)
>> @@ -59,6 +257,88 @@ static int qdma_eth_dev_init(struct rte_eth_dev *dev)
>>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>>   		return 0;
>>   
>> +	/* allocate space for a single Ethernet MAC address */
>> +	dev->data->mac_addrs = rte_zmalloc("qdma", RTE_ETHER_ADDR_LEN * 1, 0);
>> +	if (dev->data->mac_addrs == NULL)
>> +		return -ENOMEM;
>> +
>> +	/* Copy some dummy Ethernet MAC address for QDMA device
>> +	 * This will change in real NIC device...
>> +	 * TODO: Read MAC from EEPROM
>> +	 */
>> +	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
>> +		dev->data->mac_addrs[0].addr_bytes[i] = 0x15 + i;
> If you don't have a real EEPROM to read, use rte_eth_random_addr() instead.

We do have an on-board EEPROM, but accessing it is not straightforward
hardware-wise, as it sits behind FPGA control. The hardware team is
working to simplify this. Meanwhile we were just using dummy values, but
it makes sense to use the existing APIs. I'll adapt to
rte_eth_random_addr() in v2 and will also look for other places where I
can reuse existing DPDK APIs. Thanks.

>
>> +
>> +	/* Init system & device */
>> +	dma_priv = (struct qdma_pci_dev *)dev->data->dev_private;
>> +	dma_priv->is_vf = 0;
>> +	dma_priv->is_master = 0;
>> +	dma_priv->vf_online_count = 0;
>> +	dma_priv->timer_count = DEFAULT_TIMER_CNT_TRIG_MODE_TIMER;
>> +
>> +	dma_priv->en_desc_prefetch = 0; /* Keep prefetch default to 0 */
>> +	dma_priv->cmpt_desc_len = 0;
>> +	dma_priv->c2h_bypass_mode = 0;
>> +	dma_priv->h2c_bypass_mode = 0;
>> +
>> +	dma_priv->config_bar_idx = DEFAULT_PF_CONFIG_BAR;
>> +	dma_priv->bypass_bar_idx = BAR_ID_INVALID;
>> +	dma_priv->user_bar_idx = BAR_ID_INVALID;
>> +
>> +	/* Check and handle device devargs */
>> +	if (qdma_check_kvargs(dev->device->devargs, dma_priv)) {
>> +		PMD_DRV_LOG(INFO, "devargs failed\n");
>> +		rte_free(dev->data->mac_addrs);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Store BAR address and length of Config BAR */
>> +	baseaddr = (uint8_t *)
>> +			pci_dev->mem_resource[dma_priv->config_bar_idx].addr;
>> +	dma_priv->bar_addr[dma_priv->config_bar_idx] = baseaddr;
>> +
>> +	idx = qdma_identify_bars(dev);
>> +	if (idx < 0) {
>> +		rte_free(dev->data->mac_addrs);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Store BAR address and length of AXI Master Lite BAR(user bar) */
>> +	if (dma_priv->user_bar_idx >= 0) {
>> +		baseaddr = (uint8_t *)
>> +			    pci_dev->mem_resource[dma_priv->user_bar_idx].addr;
>> +		dma_priv->bar_addr[dma_priv->user_bar_idx] = baseaddr;
>> +	}
>> +
>> +	PMD_DRV_LOG(INFO, "QDMA device driver probe:");
>> +
>> +	ret = get_max_pci_bus_num(pci_dev->addr.bus, &max_pci_bus);
>> +	if (ret != 0 && !max_pci_bus) {
>> +		PMD_DRV_LOG(ERR, "Failed to get max pci bus number\n");
>> +		rte_free(dev->data->mac_addrs);
>> +		return -EINVAL;
>> +	}
>> +	PMD_DRV_LOG(INFO, "PCI max bus number : 0x%x", max_pci_bus);
>> +
>> +	if (!dma_priv->reset_in_progress) {
>> +		num_vfs = pci_dev->max_vfs;
>> +		if (num_vfs) {
>> +			dma_priv->vfinfo = rte_zmalloc("vfinfo",
>> +				sizeof(struct qdma_vf_info) * num_vfs, 0);
>> +			if (dma_priv->vfinfo == NULL) {
>> +				PMD_DRV_LOG(ERR, "Cannot allocate memory for private VF info\n");
>> +				return -ENOMEM;
>> +			}
>> +
>> +			/* Mark all VFs with invalid function id mapping*/
>> +			for (i = 0; i < num_vfs; i++)
>> +				dma_priv->vfinfo[i].func_id =
>> +					QDMA_FUNC_ID_INVALID;
>> +		}
>> +	}
>> +
>> +	dma_priv->reset_in_progress = 0;
>> +
>>   	return 0;
>>   }



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
                   ` (28 preceding siblings ...)
  2022-07-06  7:52 ` [RFC PATCH 29/29] net/qdma: add stats PMD ops for PF and VF Aman Kumar
@ 2022-07-07  6:57 ` Thomas Monjalon
  2022-07-07 13:55   ` Aman Kumar
  29 siblings, 1 reply; 43+ messages in thread
From: Thomas Monjalon @ 2022-07-07  6:57 UTC (permalink / raw)
  To: Aman Kumar
  Cc: dev, maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh

06/07/2022 09:51, Aman Kumar:
> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY solution over
> T1 telco card. These telco accelerator NIC cards are targeted for ORAN DU
> systems to offload inline NR Hi-PHY (split 7.2) operations. For the DU host,
> the cards typically appears as a basic NIC device. The device is based on
> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline Hi-PHY IP
> is developed by VVDN Technologies Private Limited.
> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
> 
> Hardware-specs:
> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
> 
> - This series is an RFC and target for DPDK v22.11.
> - Currently, the PMD is supported only for x86_64 host.
> - Build machine used: Fedora 36 with gcc 12.1.1
> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
>   PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma

That's unfortunate, there is something else called QDMA in NXP solution:
https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-07  6:57 ` [RFC PATCH 00/29] cover letter for net/qdma PMD Thomas Monjalon
@ 2022-07-07 13:55   ` Aman Kumar
  2022-07-07 14:15     ` Thomas Monjalon
  0 siblings, 1 reply; 43+ messages in thread
From: Aman Kumar @ 2022-07-07 13:55 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh

On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> 06/07/2022 09:51, Aman Kumar:
>> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY solution over
>> T1 telco card. These telco accelerator NIC cards are targeted for ORAN DU
>> systems to offload inline NR Hi-PHY (split 7.2) operations. For the DU host,
>> the cards typically appears as a basic NIC device. The device is based on
>> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline Hi-PHY IP
>> is developed by VVDN Technologies Private Limited.
>> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
>>
>> Hardware-specs:
>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
>>
>> - This series is an RFC and target for DPDK v22.11.
>> - Currently, the PMD is supported only for x86_64 host.
>> - Build machine used: Fedora 36 with gcc 12.1.1
>> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
>>    PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> That's unfortunate, there is something else called QDMA in NXP solution:
> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2

Is this going to create a conflict against this submission? I guess both are publicly available/known for a long time.



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-07 13:55   ` Aman Kumar
@ 2022-07-07 14:15     ` Thomas Monjalon
  2022-07-07 14:19       ` Hemant Agrawal
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Monjalon @ 2022-07-07 14:15 UTC (permalink / raw)
  To: Aman Kumar
  Cc: dev, maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh

07/07/2022 15:55, Aman Kumar:
> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> > 06/07/2022 09:51, Aman Kumar:
> >> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY solution over
> >> T1 telco card. These telco accelerator NIC cards are targeted for ORAN DU
> >> systems to offload inline NR Hi-PHY (split 7.2) operations. For the DU host,
> >> the cards typically appears as a basic NIC device. The device is based on
> >> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline Hi-PHY IP
> >> is developed by VVDN Technologies Private Limited.
> >> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
> >>
> >> Hardware-specs:
> >> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
> >>
> >> - This series is an RFC and target for DPDK v22.11.
> >> - Currently, the PMD is supported only for x86_64 host.
> >> - Build machine used: Fedora 36 with gcc 12.1.1
> >> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
> >>    PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> > That's unfortunate, there is something else called QDMA in NXP solution:
> > https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
> 
> Is this going to create a conflict against this submission? I guess both are publicly available/known for a long time.

If it's the marketing name, go for it,
but it is unfortunate.



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-07 14:15     ` Thomas Monjalon
@ 2022-07-07 14:19       ` Hemant Agrawal
  2022-07-18 18:15         ` aman.kumar
  0 siblings, 1 reply; 43+ messages in thread
From: Hemant Agrawal @ 2022-07-07 14:19 UTC (permalink / raw)
  To: Thomas Monjalon, Aman Kumar
  Cc: dev, maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh


On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> 07/07/2022 15:55, Aman Kumar:
>> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
>>> 06/07/2022 09:51, Aman Kumar:
>>>> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY solution over
>>>> T1 telco card. These telco accelerator NIC cards are targeted for ORAN DU
>>>> systems to offload inline NR Hi-PHY (split 7.2) operations. For the DU host,
>>>> the cards typically appears as a basic NIC device. The device is based on
>>>> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline Hi-PHY IP
>>>> is developed by VVDN Technologies Private Limited.
>>>> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
>>>>
>>>> Hardware-specs:
>>>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
>>>>
>>>> - This series is an RFC and target for DPDK v22.11.
>>>> - Currently, the PMD is supported only for x86_64 host.
>>>> - Build machine used: Fedora 36 with gcc 12.1.1
>>>> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
>>>>     PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
>>> That's unfortunate, there is something else called QDMA in NXP solution:
>>> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
>> Is this going to create a conflict against this submission? I guess both are publicly available/known for a long time.
> If it's the marketing name, go for it,
> but it is unfortunate.

QDMA is a very generic name and many vendors have IP for it.

My suggestion is to qualify the specific driver with the vendor name,
i.e. amd_qdma or xilinx_qdma or something similar.

NXP also did the same dpaa2_qdma



>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-07 14:19       ` Hemant Agrawal
@ 2022-07-18 18:15         ` aman.kumar
  2022-07-19 12:12           ` Thomas Monjalon
  0 siblings, 1 reply; 43+ messages in thread
From: aman.kumar @ 2022-07-18 18:15 UTC (permalink / raw)
  To: hemant.agrawal, Hemant Agrawal, Thomas Monjalon, dev,
	maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh

On 07/07/22 7:49 pm, Hemant Agrawal <hemant.agrawal@oss.nxp.com> wrote:
> 
> On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> > 07/07/2022 15:55, Aman Kumar:
> >> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> >>> 06/07/2022 09:51, Aman Kumar:
> >>>> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY 
> >>>> solution over
> >>>> T1 telco card. These telco accelerator NIC cards are targeted for 
> >>>> ORAN DU
> >>>> systems to offload inline NR Hi-PHY (split 7.2) operations. For the 
> >>>> DU host,
> >>>> the cards typically appears as a basic NIC device. The device is 
> >>>> based on
> >>>> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline 
> >>>> Hi-PHY IP
> >>>> is developed by VVDN Technologies Private Limited.
> >>>> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
> >>>>
> >>>> Hardware-specs:
> >>>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf 
> >>>>
> >>>>
> >>>> - This series is an RFC and target for DPDK v22.11.
> >>>> - Currently, the PMD is supported only for x86_64 host.
> >>>> - Build machine used: Fedora 36 with gcc 12.1.1
> >>>> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
> >>>>     PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> >>> That's unfortunate, there is something else called QDMA in NXP 
> >>> solution:
> >>> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
> >> Is this going to create a conflict against this submission? I guess 
> >> both are publicly available/known for a long time.
> > If it's the marketing name, go for it,
> > but it is unfortunate.
> 
> QDMA is a very generic name and many vendors have IP for it.
> 
> My suggestion is to qualify the specific driver with the vendor name,
> i.e. amd_qdma or xilinx_qdma or something similar.
> 
> NXP also did the same dpaa2_qdma
> 
> 
@Thomas, @Hemant,
Thank you for highlights and suggestions regarding conflicting names.
We've discussed this internally and came up with below plan.

For v22.11 DPDK, we would like to submit patches with below renames:
 drivers/net/qdma -> drivers/net/t1
 drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
 driver/net/qdma/qdma_access -> driver/net/t1/base

We plan to split the Xilinx QDMA library into drivers/dma/xilinx_dma/* or into drivers/common/* around v23.02, as it currently requires a big rework.
Also, since no other devices currently depend on Xilinx QDMA, we would like to submit with the "renamed" items as mentioned above under drivers/net/*.
We also plan to submit a bbdev device early next year, and the rework is planned before that submission (post v22.11).
I'll update this in the v2 patch. I hope this is OK.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-18 18:15         ` aman.kumar
@ 2022-07-19 12:12           ` Thomas Monjalon
  2022-07-19 17:22             ` aman.kumar
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Monjalon @ 2022-07-19 12:12 UTC (permalink / raw)
  To: hemant.agrawal, Hemant Agrawal, dev, maxime.coquelin,
	david.marchand, hemant.agrawal, Gagandeep Singh, aman.kumar

18/07/2022 20:15, aman.kumar@vvdntech.in:
> On 07/07/22 7:49 pm, Hemant Agrawal <hemant.agrawal@oss.nxp.com> wrote:
> > 
> > On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> > > 07/07/2022 15:55, Aman Kumar:
> > >> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> > >>> 06/07/2022 09:51, Aman Kumar:
> > >>>> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY 
> > >>>> solution over
> > >>>> T1 telco card. These telco accelerator NIC cards are targeted for 
> > >>>> ORAN DU
> > >>>> systems to offload inline NR Hi-PHY (split 7.2) operations. For the 
> > >>>> DU host,
> > >>>> the cards typically appears as a basic NIC device. The device is 
> > >>>> based on
> > >>>> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline 
> > >>>> Hi-PHY IP
> > >>>> is developed by VVDN Technologies Private Limited.
> > >>>> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
> > >>>>
> > >>>> Hardware-specs:
> > >>>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf 
> > >>>>
> > >>>>
> > >>>> - This series is an RFC and target for DPDK v22.11.
> > >>>> - Currently, the PMD is supported only for x86_64 host.
> > >>>> - Build machine used: Fedora 36 with gcc 12.1.1
> > >>>> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
> > >>>>     PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> > >>> That's unfortunate, there is something else called QDMA in NXP 
> > >>> solution:
> > >>> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
> > >> Is this going to create a conflict against this submission? I guess 
> > >> both are publicly available/known for a long time.
> > > If it's the marketing name, go for it,
> > > but it is unfortunate.
> > 
> > QDMA is a very generic name and many vendors have IP for it.
> > 
> > My suggestion is to qualify the specific driver with the vendor name,
> > i.e. amd_qdma or xilinx_qdma or something similar.
> > 
> > NXP also did the same dpaa2_qdma
> > 
> > 
> @Thomas, @Hemant,
> Thank you for highlights and suggestions regarding conflicting names.
> We've discussed this internally and came up with below plan.
> 
> For v22.11 DPDK, we would like to submit patches with below renames:
>  drivers/net/qdma -> drivers/net/t1
>  drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
>  driver/net/qdma/qdma_access -> driver/net/t1/base

Curious why "t1"?
What is the meaning?




^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-19 12:12           ` Thomas Monjalon
@ 2022-07-19 17:22             ` aman.kumar
  2023-07-02 23:36               ` Stephen Hemminger
  0 siblings, 1 reply; 43+ messages in thread
From: aman.kumar @ 2022-07-19 17:22 UTC (permalink / raw)
  To: Thomas Monjalon, hemant.agrawal, Hemant Agrawal, dev,
	maxime.coquelin, david.marchand, hemant.agrawal, Gagandeep Singh

On 19/07/22 5:42 pm, Thomas Monjalon <thomas@monjalon.net> wrote:
> 18/07/2022 20:15, aman.kumar@vvdntech.in:
> > On 07/07/22 7:49 pm, Hemant Agrawal <hemant.agrawal@oss.nxp.com> wrote:
> >>
> >> On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> >>> 07/07/2022 15:55, Aman Kumar:
> >>>> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> >>>>> 06/07/2022 09:51, Aman Kumar:
> >>>>>> This patch series provides net PMD for VVDN's NR(5G) Hi-PHY
> >>>>>> solution over
> >>>>>> T1 telco card. These telco accelerator NIC cards are targeted for
> >>>>>> ORAN DU
> >>>>>> systems to offload inline NR Hi-PHY (split 7.2) operations. For the
> >>>>>> DU host,
> >>>>>> the cards typically appears as a basic NIC device. The device is
> >>>>>> based on
> >>>>>> AMD/Xilinx's Ultrasale MPSoC and RFSoC FPGA for which the inline
> >>>>>> Hi-PHY IP
> >>>>>> is developed by VVDN Technologies Private Limited.
> >>>>>> PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281
> >>>>>>
> >>>>>> Hardware-specs:
> >>>>>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
> >>>>>>
> >>>>>>
> >>>>>> - This series is an RFC and target for DPDK v22.11.
> >>>>>> - Currently, the PMD is supported only for x86_64 host.
> >>>>>> - Build machine used: Fedora 36 with gcc 12.1.1
> >>>>>> - The device communicates to host over AMD/Xilinx's QDMA subsystem for
> >>>>>>      PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> >>>>> That's unfortunate, there is something else called QDMA in NXP
> >>>>> solution:
> >>>>> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
> >>>> Is this going to create a conflict against this submission? I guess
> >>>> both are publicly available/known for a long time.
> >>> If it's the marketing name, go for it,
> >>> but it is unfortunate.
> >>
> >> QDMA is a very generic name and many vendors have IP for it.
> >>
> >> My suggestion is to qualify the specific driver with the vendor name,
> >> i.e. amd_qdma or xilinx_qdma or something similar.
> >>
> >> NXP also did the same dpaa2_qdma
> >>
> >>
> > @Thomas, @Hemant,
> > Thank you for highlights and suggestions regarding conflicting names.
> > We've discussed this internally and came up with below plan.
> >
> > For v22.11 DPDK, we would like to submit patches with below renames:
> >   drivers/net/qdma -> drivers/net/t1
> >   drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
> >   driver/net/qdma/qdma_access -> driver/net/t1/base
> 
> Curious why "t1"?
> What is the meaning?

The hardware is commercially named the "T1 card"; T1 is the first card of the telco accelerator series.
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2022-07-19 17:22             ` aman.kumar
@ 2023-07-02 23:36               ` Stephen Hemminger
  2023-07-03  9:15                 ` Ferruh Yigit
  0 siblings, 1 reply; 43+ messages in thread
From: Stephen Hemminger @ 2023-07-02 23:36 UTC (permalink / raw)
  To: aman.kumar
  Cc: Thomas Monjalon, hemant.agrawal, Hemant Agrawal, dev,
	maxime.coquelin, david.marchand, Gagandeep Singh

On Tue, 19 Jul 2022 22:52:20 +0530
aman.kumar@vvdntech.in wrote:

> > >>
> > >> QDMA is a very generic name and many vendors have IP for it.
> > >>
> > >> My suggestion is to qualify the specific driver with the vendor name,
> > >> i.e. amd_qdma or xilinx_qdma or something similar.
> > >>
> > >> NXP also did the same dpaa2_qdma
> > >>
> > >>  
> > > @Thomas, @Hemant,
> > > Thank you for highlights and suggestions regarding conflicting names.
> > > We've discussed this internally and came up with below plan.
> > >
> > > For v22.11 DPDK, we would like to submit patches with below renames:
> > >   drivers/net/qdma -> drivers/net/t1
> > >   drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
> > >   driver/net/qdma/qdma_access -> driver/net/t1/base  
> > 
> > Curious why "t1"?
> > What is the meaning?  
> 
> The hardware is commercially named the "T1 card"; T1 is the first card of the telco accelerator series.
> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
> 

The discussion around this driver has stalled; either the hardware was abandoned or renamed, or there was no interest.
If you want to get this into a future DPDK release, please resubmit a new version addressing the review comments.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC PATCH 00/29] cover letter for net/qdma PMD
  2023-07-02 23:36               ` Stephen Hemminger
@ 2023-07-03  9:15                 ` Ferruh Yigit
  0 siblings, 0 replies; 43+ messages in thread
From: Ferruh Yigit @ 2023-07-03  9:15 UTC (permalink / raw)
  To: Stephen Hemminger, aman.kumar
  Cc: Thomas Monjalon, hemant.agrawal, Hemant Agrawal, dev,
	maxime.coquelin, david.marchand, Gagandeep Singh

On 7/3/2023 12:36 AM, Stephen Hemminger wrote:
> On Tue, 19 Jul 2022 22:52:20 +0530
> aman.kumar@vvdntech.in wrote:
> 
>>>>>
>>>>> QDMA is a very generic name and many vendors have IP for it.
>>>>>
>>>>> My suggestion is to qualify the specific driver with the vendor name,
>>>>> i.e. amd_qdma or xilinx_qdma or something similar.
>>>>>
>>>>> NXP also did the same dpaa2_qdma
>>>>>
>>>>>  
>>>> @Thomas, @Hemant,
>>>> Thank you for highlights and suggestions regarding conflicting names.
>>>> We've discussed this internally and came up with below plan.
>>>>
>>>> For v22.11 DPDK, we would like to submit patches with below renames:
>>>>   drivers/net/qdma -> drivers/net/t1
>>>>   drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
>>>>   driver/net/qdma/qdma_access -> driver/net/t1/base  
>>>
>>> Curious why "t1"?
>>> What is the meaning?  
>>
>> The hardware is commercially named the "T1 card"; T1 is the first card of the telco accelerator series.
>> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
>>
> 
> The discussion around this driver has stalled; either the hardware was abandoned or renamed, or there was no interest.
> If you want to get this into a future DPDK release, please resubmit a new version addressing the review comments.
>

Hi Stephen,

I can confirm that this work is stalled, and it is not yet clear whether
there will be a new version, so it is OK to clean this from patchwork.

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2023-07-03  9:16 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-06  7:51 [RFC PATCH 00/29] cover letter for net/qdma PMD Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 01/29] net/qdma: add net PMD template Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 02/29] maintainers: add maintainer for net/qdma PMD Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 03/29] net/meson.build: add support to compile net qdma Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 04/29] net/qdma: add logging support Aman Kumar
2022-07-06 15:27   ` Stephen Hemminger
2022-07-07  2:32     ` Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 05/29] net/qdma: add device init and uninit functions Aman Kumar
2022-07-06 15:35   ` Stephen Hemminger
2022-07-07  2:41     ` Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 06/29] net/qdma: add qdma access library Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 07/29] net/qdma: add supported qdma version Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 08/29] net/qdma: qdma hardware initialization Aman Kumar
2022-07-06  7:51 ` [RFC PATCH 09/29] net/qdma: define device modes and data structure Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 10/29] net/qdma: add net PMD ops template Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 11/29] net/qdma: add configure close and reset ethdev ops Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 12/29] net/qdma: add routine for Rx queue initialization Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 13/29] net/qdma: add callback support for Rx queue count Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 14/29] net/qdma: add routine for Tx queue initialization Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 15/29] net/qdma: add queue cleanup PMD ops Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 16/29] net/qdma: add start and stop apis Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 17/29] net/qdma: add Tx burst API Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 18/29] net/qdma: add Tx queue reclaim routine Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 19/29] net/qdma: add callback function for Tx desc status Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 20/29] net/qdma: add Rx burst API Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 21/29] net/qdma: add mailbox communication library Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 22/29] net/qdma: mbox API adaptation in Rx/Tx init Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 23/29] net/qdma: add support for VF interfaces Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 24/29] net/qdma: add Rx/Tx queue setup routine for VF devices Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 25/29] net/qdma: add basic PMD ops for VF Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 26/29] net/qdma: add datapath burst API " Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 27/29] net/qdma: add device specific APIs for export Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 28/29] net/qdma: add additional debug APIs Aman Kumar
2022-07-06  7:52 ` [RFC PATCH 29/29] net/qdma: add stats PMD ops for PF and VF Aman Kumar
2022-07-07  6:57 ` [RFC PATCH 00/29] cover letter for net/qdma PMD Thomas Monjalon
2022-07-07 13:55   ` Aman Kumar
2022-07-07 14:15     ` Thomas Monjalon
2022-07-07 14:19       ` Hemant Agrawal
2022-07-18 18:15         ` aman.kumar
2022-07-19 12:12           ` Thomas Monjalon
2022-07-19 17:22             ` aman.kumar
2023-07-02 23:36               ` Stephen Hemminger
2023-07-03  9:15                 ` Ferruh Yigit
