DPDK patches and discussions
* [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5
@ 2021-03-16 22:18 Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
                   ` (25 more replies)
  0 siblings, 26 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

This patch series adds support for DLB v2.5 to
the current DLB v2.0 PMD. The resulting PMD supports
both hardware versions.

The main differences between the DLB v2.5 and v2.0 hardware
are:
- Number of queues/ports
- DLB v2.5 uses a combined credit pool, whereas DLB v2.0
  splits credits into two pools, a directed credit pool and a
  load-balanced credit pool (see the sketch after this list).
- Different register maps, with different bit names and offsets
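
A minimal sketch of the credit-pool difference called out above,
mirroring the union-based fields that the dlb2_priv.h changes later in
this series introduce (the wrapper struct name here is hypothetical and
the real structures carry many more fields):

#include <stdint.h>

/* Combined (v2.5) vs. split (v2.0) credit accounting, trimmed sketch */
struct dlb2_credit_counts {
	union {
		struct {
			uint32_t num_ldb_credits; /* v2.0: load-balanced pool */
			uint32_t num_dir_credits; /* v2.0: directed pool */
		};
		struct {
			uint32_t num_credits; /* v2.5: single combined pool */
		};
	};
};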

In order to support both hardware versions with the same PMD,
and to avoid code duplication, the file dlb2_resource.c required a
complete rewrite. This in turn required some careful staging of the
changes to keep the individual patches relatively small, while
ensuring that every patch in the set compiles cleanly.

To accomplish this, a few temporary files are used:

dlb2_hw_types_new.h
dlb2_resource_new.h
dlb2_resource_new.c

As dlb2_resource_new.c is populated with the new combined v2.0/v2.5
low-level logic, the corresponding old code is removed from
dlb2_resource.c, allowing both the original and new code to
continue to compile and link cleanly. Once all of the code has been
migrated to the new model, the old versions of the files are removed
and the new versions are renamed, effectively replacing the original
files.

As you review the code, you can ignore the code deletions from
dlb2_resource.c, as that file continues to shrink as the new
corresponding logic is added to dlb2_resource_new.c.

Timothy McDaniel (25):
  event/dlb2: add dlb v2.5 probe
  event/dlb2: add DLB v2.5 probe-time hardware init
  event/dlb2: add DLB v2.5 support to get_resources
  event/dlb2: add DLB v2.5 support to create sched domain
  event/dlb2: add DLB v2.5 support to domain reset
  event/dlb2: add DLB V2.5 support to create ldb queue
  event/dlb2: add DLB v2.5 support to create ldb port
  event/dlb2: add DLB v2.5 support to create dir port
  event/dlb2: add DLB v2.5 support to create dir queue
  event/dlb2: add DLB v2.5 support to map qid
  event/dlb2: add DLB v2.5 support to unmap queue
  event/dlb2: add DLB v2.5 support to start domain
  event/dlb2: add DLB v2.5 credit scheme
  event/dlb2: Add DLB v2.5 support to get queue depth functions
  event/dlb2: add DLB v2.5 finish map/unmap interfaces
  event/dlb2: add DLB v2.5 sparse cq mode
  event/dlb2: add DLB v2.5 support to sequence number management
  event/dlb2: consolidate dlb resource header files into one file
  event/dlb2: delete old dlb2_resource.c file
  event/dlb2: move dlb_resource_new.c to dlb_resource.c
  event/dlb2: remove temporary file, dlb_hw_types.h
  event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
  event/dlb2: delete old register map file, dlb2_regs.h
  event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
  event/dlb2: update xstats for DLB v2.5

 drivers/event/dlb2/dlb2.c                  |  430 +-
 drivers/event/dlb2/dlb2_priv.h             |  158 +-
 drivers/event/dlb2/dlb2_user.h             |   27 +-
 drivers/event/dlb2/dlb2_xstats.c           |   70 +-
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  102 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h     |    1 -
 drivers/event/dlb2/pf/base/dlb2_osdep.h    |    3 +
 drivers/event/dlb2/pf/base/dlb2_regs.h     | 6063 +++++++++++++-------
 drivers/event/dlb2/pf/base/dlb2_resource.c | 3277 ++++++-----
 drivers/event/dlb2/pf/base/dlb2_resource.h |   28 +-
 drivers/event/dlb2/pf/dlb2_main.c          |   37 +-
 drivers/event/dlb2/pf/dlb2_pf.c            |   62 +-
 12 files changed, 6366 insertions(+), 3892 deletions(-)

-- 
2.23.0



* [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-21  9:48   ` Jerin Jacob
                     ` (4 more replies)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init Timothy McDaniel
                   ` (24 subsequent siblings)
  25 siblings, 5 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

This commit adds DLB v2.5 probe support and updates parameter
parsing.

The DLB v2.5 device provides a different number of resources (ports,
queues, ...) than DLB v2.0, so macros have been added to take the
device version into account.
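
For reference, the version-aware sizing added here follows the pattern
sketched below. The macro names and values are taken from this patch;
the helper function is a hypothetical example of how callers use the
probed device version rather than a fixed constant.

#include <stdint.h>

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

#define DLB2_MAX_NUM_DIR_PORTS_V2	64
#define DLB2_MAX_NUM_DIR_PORTS_V2_5	96
#define DLB2_MAX_NUM_DIR_PORTS(ver)	((ver) == DLB2_HW_V2 ? \
					 DLB2_MAX_NUM_DIR_PORTS_V2 : \
					 DLB2_MAX_NUM_DIR_PORTS_V2_5)

/* Hypothetical caller: the port count depends on the probed version */
static inline int dlb2_example_num_dir_ports(uint8_t version)
{
	return DLB2_MAX_NUM_DIR_PORTS(version);
}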

This commit also cleans up a few issues in the original dlb2 source:
- eliminates duplicate constant definitions
- removes unused constant definitions

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                  |  99 ++++++++++---
 drivers/event/dlb2/dlb2_priv.h             | 153 +++++++++++++++------
 drivers/event/dlb2/dlb2_xstats.c           |  37 ++---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  64 +++------
 drivers/event/dlb2/pf/base/dlb2_resource.c |  47 ++++---
 drivers/event/dlb2/pf/dlb2_pf.c            |  62 ++++++++-
 6 files changed, 320 insertions(+), 142 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index b28ec58bf..826b68121 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -59,7 +59,8 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
-	.max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
+	.max_single_link_event_port_queue_pairs =
+		DLB2_MAX_NUM_DIR_PORTS(DLB2_HW_V2),
 	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
 			  RTE_EVENT_DEV_CAP_EVENT_QOS |
 			  RTE_EVENT_DEV_CAP_BURST_MODE |
@@ -69,7 +70,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 };
 
 struct process_local_port_data
-dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
+dlb2_port[DLB2_MAX_NUM_PORTS_ALL][DLB2_NUM_PORT_TYPES];
 
 static void
 dlb2_free_qe_mem(struct dlb2_port *qm_port)
@@ -97,7 +98,7 @@ dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
 {
 	int q;
 
-	for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
+	for (q = 0; q < DLB2_MAX_NUM_QUEUES(dlb2->version); q++) {
 		if (qid_depth_thresholds[q] != 0)
 			dlb2->ev_queues[q].depth_threshold =
 				qid_depth_thresholds[q];
@@ -247,9 +248,9 @@ set_num_dir_credits(const char *key __rte_unused,
 		return ret;
 
 	if (*num_dir_credits < 0 ||
-	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
+	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
 		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
-			     DLB2_MAX_NUM_DIR_CREDITS);
+			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
 
@@ -306,7 +307,6 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
-
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -327,7 +327,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	 */
 	if (sscanf(value, "all:%d", &thresh) == 1) {
 		first = 0;
-		last = DLB2_MAX_NUM_QUEUES - 1;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2) - 1;
 	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
 		/* we have everything we need */
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
@@ -337,7 +337,56 @@ set_qid_depth_thresh(const char *key __rte_unused,
 		return -EINVAL;
 	}
 
-	if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		return -EINVAL;
+	}
+
+	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
+		return -EINVAL;
+	}
+
+	for (i = first; i <= last; i++)
+		qid_thresh->val[i] = thresh; /* indexed by qid */
+
+	return 0;
+}
+
+static int
+set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
+	int first, last, thresh, i;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	/* command line override may take one of the following 3 forms:
+	 * qid_depth_thresh=all:<threshold_value> ... all queues
+	 * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
+	 * qid_depth_thresh=qid:<threshold_value> ... just one queue
+	 */
+	if (sscanf(value, "all:%d", &thresh) == 1) {
+		first = 0;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) - 1;
+	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
+		/* we have everything we need */
+	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
+		last = first;
+	} else {
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		return -EINVAL;
+	}
+
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
 		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
 		return -EINVAL;
 	}
@@ -521,7 +570,7 @@ dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
 	for (i = 0; i < dlb2->num_queues; i++)
 		dlb2->ev_queues[i].qm_queue.config_state = config_state;
 
-	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
+	for (i = 0; i < DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5); i++)
 		dlb2->ev_queues[i].setup_done = false;
 
 	dlb2->num_ports = 0;
@@ -1453,7 +1502,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 
 	dlb2 = dlb2_pmd_priv(dev);
 
-	if (ev_port_id >= DLB2_MAX_NUM_PORTS)
+	if (ev_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 		return -EINVAL;
 
 	if (port_conf->dequeue_depth >
@@ -3895,7 +3944,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	}
 
 	/* Initialize each port's token pop mode */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++)
 		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
 
 	rte_spinlock_init(&dlb2->qm_instance.resource_lock);
@@ -3945,7 +3994,8 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
 int
 dlb2_parse_params(const char *params,
 		  const char *name,
-		  struct dlb2_devargs *dlb2_args)
+		  struct dlb2_devargs *dlb2_args,
+		  uint8_t version)
 {
 	int ret = 0;
 	static const char * const args[] = { NUMA_NODE_ARG,
@@ -3984,17 +4034,18 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(kvlist,
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(kvlist,
 					DLB2_NUM_DIR_CREDITS,
 					set_num_dir_credits,
 					&dlb2_args->num_dir_credits_override);
-			if (ret != 0) {
-				DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
-					     name);
-				rte_kvargs_free(kvlist);
-				return ret;
+				if (ret != 0) {
+					DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
+						     name);
+					rte_kvargs_free(kvlist);
+					return ret;
+				}
 			}
-
 			ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
 						 set_dev_id,
 						 &dlb2_args->dev_id);
@@ -4005,11 +4056,19 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(
 					kvlist,
 					DLB2_QID_DEPTH_THRESH_ARG,
 					set_qid_depth_thresh,
 					&dlb2_args->qid_depth_thresholds);
+			} else {
+				ret = rte_kvargs_process(
+					kvlist,
+					DLB2_QID_DEPTH_THRESH_ARG,
+					set_qid_depth_thresh_v2_5,
+					&dlb2_args->qid_depth_thresholds);
+			}
 			if (ret != 0) {
 				DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
 					     name);
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index b73cf3ff1..b6de8d937 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -20,7 +20,7 @@
 #define DLB2_INC_STAT(_stat, _incr_val)
 #endif
 
-#define EVDEV_DLB2_NAME_PMD dlb2_event
+#define EVDEV_DLB2_NAME_PMD dlb_event
 
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
@@ -33,19 +33,31 @@
 
 /* Begin HW related defines and structs */
 
+#define DLB2_HW_V2 0
+#define DLB2_HW_V2_5 1
 #define DLB2_MAX_NUM_DOMAINS 32
 #define DLB2_MAX_NUM_VFS 16
 #define DLB2_MAX_NUM_LDB_QUEUES 32
 #define DLB2_MAX_NUM_LDB_PORTS 64
-#define DLB2_MAX_NUM_DIR_PORTS 64
-#define DLB2_MAX_NUM_DIR_QUEUES 64
+#define DLB2_MAX_NUM_DIR_PORTS_V2		DLB2_MAX_NUM_DIR_QUEUES_V2
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5		DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_DIR_PORTS(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_PORTS_V2 : \
+						 DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_MAX_NUM_DIR_QUEUES_V2		64 /* DIR == directed */
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5		96
+/* When needed for array sizing, the DLB 2.5 macro is used */
+#define DLB2_MAX_NUM_DIR_QUEUES(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2 : \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2_5)
 #define DLB2_MAX_NUM_FLOWS (64 * 1024)
 #define DLB2_MAX_NUM_LDB_CREDITS (8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS (2 * 1024)
+#define DLB2_MAX_NUM_DIR_CREDITS(ver)		(ver == DLB2_HW_V2 ? 4096 : 0)
+#define DLB2_MAX_NUM_CREDITS(ver)		(ver == DLB2_HW_V2 ? \
+						 0 : DLB2_MAX_NUM_LDB_CREDITS)
 #define DLB2_MAX_NUM_LDB_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_DIR_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_HIST_LIST_ENTRIES 2048
-#define DLB2_MAX_NUM_AQOS_ENTRIES 2048
 #define DLB2_MAX_NUM_QIDS_PER_LDB_CQ 8
 #define DLB2_QID_PRIORITIES 8
 #define DLB2_MAX_DEVICE_PATH 32
@@ -68,6 +80,11 @@
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_MAX_CQ_DEPTH
 
+#define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
+	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) ||        \
+	  (_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_VF))   ?   \
+		DLB2_HW_V2_5 : DLB2_HW_V2)
+
 /*
  * Static per queue/port provisioning values
  */
@@ -111,6 +128,8 @@ enum dlb2_hw_queue_types {
 	DLB2_NUM_QUEUE_TYPES /* Must be last */
 };
 
+#define DLB2_COMBINED_POOL DLB2_LDB_QUEUE
+
 #define PORT_TYPE(p) ((p)->is_directed ? DLB2_DIR_PORT : DLB2_LDB_PORT)
 
 /* Do not change - must match hardware! */
@@ -129,8 +148,15 @@ struct dlb2_hw_rsrcs {
 	uint32_t num_ldb_queues;	/* Number of available ldb queues */
 	uint32_t num_ldb_ports;         /* Number of load balanced ports */
 	uint32_t num_dir_ports;         /* Number of directed ports */
-	uint32_t num_ldb_credits;       /* Number of load balanced credits */
-	uint32_t num_dir_credits;       /* Number of directed credits */
+	union {
+		struct {
+			uint32_t num_ldb_credits; /* Number of ldb credits */
+			uint32_t num_dir_credits; /* Number of dir credits */
+		};
+		struct {
+			uint32_t num_credits; /* Number of combined credits */
+		};
+	};
 	uint32_t reorder_window_size;   /* Size of reorder window */
 };
 
@@ -294,9 +320,17 @@ struct dlb2_port {
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
-	uint16_t cached_ldb_credits;
-	uint16_t ldb_credits;
-	uint16_t cached_dir_credits;
+	union {
+		struct {
+			uint16_t cached_ldb_credits;
+			uint16_t ldb_credits;
+			uint16_t cached_dir_credits;
+		};
+		struct {
+			uint16_t cached_credits;
+			uint16_t credits;
+		};
+	};
 	bool int_armed;
 	uint16_t owed_tokens;
 	int16_t issued_releases;
@@ -327,11 +361,22 @@ struct process_local_port_data {
 
 struct dlb2_eventdev;
 
+struct dlb2_port_low_level_io_functions {
+	void (*pp_enqueue_four)(void *qe4, void *pp_addr);
+};
+
 struct dlb2_config {
 	int configured;
 	int reserved;
-	uint32_t num_ldb_credits;
-	uint32_t num_dir_credits;
+	union {
+		struct {
+			uint32_t num_ldb_credits;
+			uint32_t num_dir_credits;
+		};
+		struct {
+			uint32_t num_credits;
+		};
+	};
 	struct dlb2_create_sched_domain_args resources;
 };
 
@@ -356,10 +401,18 @@ struct dlb2_hw_dev {
 
 /* Begin DLB2 PMD Eventdev related defines and structs */
 
-#define DLB2_MAX_NUM_QUEUES \
-	(DLB2_MAX_NUM_DIR_QUEUES + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_QUEUES(ver)                                \
+	(DLB2_MAX_NUM_DIR_QUEUES(ver) + DLB2_MAX_NUM_LDB_QUEUES)
 
-#define DLB2_MAX_NUM_PORTS (DLB2_MAX_NUM_DIR_PORTS + DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_MAX_NUM_PORTS(ver) \
+	(DLB2_MAX_NUM_DIR_PORTS(ver) + DLB2_MAX_NUM_LDB_PORTS)
+
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5 DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_QUEUES_ALL \
+	(DLB2_MAX_NUM_DIR_QUEUES_V2_5 + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_PORTS_ALL \
+	(DLB2_MAX_NUM_DIR_PORTS_V2_5 + DLB2_MAX_NUM_LDB_PORTS)
 #define DLB2_MAX_INPUT_QUEUE_DEPTH 256
 
 /** Structure to hold the queue to port link establishment attributes */
@@ -379,8 +432,15 @@ struct dlb2_traffic_stats {
 	uint64_t tx_ok;
 	uint64_t total_polls;
 	uint64_t zero_polls;
-	uint64_t tx_nospc_ldb_hw_credits;
-	uint64_t tx_nospc_dir_hw_credits;
+	union {
+		struct {
+			uint64_t tx_nospc_ldb_hw_credits;
+			uint64_t tx_nospc_dir_hw_credits;
+		};
+		struct {
+			uint64_t tx_nospc_hw_credits;
+		};
+	};
 	uint64_t tx_nospc_inflight_max;
 	uint64_t tx_nospc_new_event_limit;
 	uint64_t tx_nospc_inflight_credits;
@@ -413,7 +473,7 @@ struct dlb2_port_stats {
 	uint64_t tx_invalid;
 	uint64_t rx_sched_cnt[DLB2_NUM_HW_SCHED_TYPES];
 	uint64_t rx_sched_invalid;
-	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_eventdev_port {
@@ -464,16 +524,16 @@ enum dlb2_run_state {
 };
 
 struct dlb2_eventdev {
-	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS];
-	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS_ALL];
+	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each queue */
-	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES];
-	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES];
+	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES_ALL];
+	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each port */
-	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS];
-	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS];
+	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS_ALL];
+	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS_ALL];
 	struct dlb2_get_num_resources_args hw_rsrc_query_results;
 	uint32_t xstats_count_mode_queue;
 	struct dlb2_hw_dev qm_instance; /* strictly hw related */
@@ -489,8 +549,15 @@ struct dlb2_eventdev {
 	int num_dir_credits_override;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
-	uint16_t num_dir_credits;
-	uint16_t num_ldb_credits;
+	union {
+		struct {
+			uint16_t num_dir_credits;
+			uint16_t num_ldb_credits;
+		};
+		struct {
+			uint16_t num_credits;
+		};
+	};
 	uint16_t num_queues; /* total queues */
 	uint16_t num_ldb_queues; /* total num of evdev ldb queues requested */
 	uint16_t num_ports; /* total num of evdev ports requested */
@@ -501,21 +568,28 @@ struct dlb2_eventdev {
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
 	uint8_t revision;
+	uint8_t version;
 	bool configured;
-	uint16_t max_ldb_credits;
-	uint16_t max_dir_credits;
-
-	/* force hw credit pool counters into exclusive cache lines */
-
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t ldb_credit_pool __rte_cache_aligned;
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t dir_credit_pool __rte_cache_aligned;
+	union {
+		struct {
+			uint16_t max_ldb_credits;
+			uint16_t max_dir_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t ldb_credit_pool __rte_cache_aligned;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t dir_credit_pool __rte_cache_aligned;
+		};
+		struct {
+			uint16_t max_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t credit_pool __rte_cache_aligned;
+		};
+	};
 };
 
 /* used for collecting and passing around the dev args */
 struct dlb2_qid_depth_thresholds {
-	int val[DLB2_MAX_NUM_QUEUES];
+	int val[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_devargs {
@@ -570,7 +644,8 @@ uint32_t dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
 
 int dlb2_parse_params(const char *params,
 		      const char *name,
-		      struct dlb2_devargs *dlb2_args);
+		      struct dlb2_devargs *dlb2_args,
+		      uint8_t version);
 
 /* Extern globals */
 extern struct process_local_port_data dlb2_port[][DLB2_NUM_PORT_TYPES];
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda9..b62e62060 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -95,7 +95,7 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 	int i;
 	uint64_t val = 0;
 
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
 
 		if (!port->setup_done)
@@ -269,7 +269,7 @@ dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
 	int port = 0;
 	uint64_t tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
 
 	return tally;
@@ -281,7 +281,7 @@ dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
 	int port = 0;
 	uint64_t enq_ok_tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
 
 	return enq_ok_tally;
@@ -561,8 +561,8 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	/* other vars */
 	const unsigned int count = RTE_DIM(dev_stats) +
-			DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
-			DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
+		DLB2_MAX_NUM_PORTS(dlb2->version) * RTE_DIM(port_stats) +
+		DLB2_MAX_NUM_QUEUES(dlb2->version) * RTE_DIM(qid_stats);
 	unsigned int i, port, qid, stat_id = 0;
 
 	dlb2->xstats = rte_zmalloc_socket(NULL,
@@ -583,7 +583,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	}
 	dlb2->xstats_count_mode_dev = stat_id;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++) {
 		dlb2->xstats_offset_for_port[port] = stat_id;
 
 		uint32_t count_offset = stat_id;
@@ -605,7 +605,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
 
-	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
+	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES(dlb2->version); qid++) {
 		uint32_t count_offset = stat_id;
 
 		dlb2->xstats_offset_for_qid[qid] = stat_id;
@@ -658,16 +658,15 @@ dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			break;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version) &&
+		    (DLB2_MAX_NUM_QUEUES(dlb2->version) <= 255))
 			break;
-#endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
 		break;
@@ -709,13 +708,13 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			goto invalid_value;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+#if (DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) <= 255) /* max 8 bit value */
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version))
 			goto invalid_value;
 #endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
@@ -936,12 +935,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_PORTS) {
+		} else if (queue_port_id < DLB2_MAX_NUM_PORTS(dlb2->version)) {
 			if (dlb2_xstats_reset_port(dlb2, queue_port_id,
 						   ids, nb_ids))
 				return -EINVAL;
@@ -949,12 +949,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES) {
+		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES(dlb2->version)) {
 			if (dlb2_xstats_reset_queue(dlb2, queue_port_id,
 						    ids, nb_ids))
 				return -EINVAL;
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 1d99f1e01..11e518982 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -5,30 +5,26 @@
 #ifndef __DLB2_HW_TYPES_H
 #define __DLB2_HW_TYPES_H
 
+#include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_DOMAINS			32
-#define DLB2_MAX_NUM_LDB_QUEUES			32 /* LDB == load-balanced */
-#define DLB2_MAX_NUM_DIR_QUEUES			64 /* DIR == directed */
-#define DLB2_MAX_NUM_LDB_PORTS			64
-#define DLB2_MAX_NUM_DIR_PORTS			64
-#define DLB2_MAX_NUM_LDB_CREDITS		(8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS		(2 * 1024)
-#define DLB2_MAX_NUM_HIST_LIST_ENTRIES		2048
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ		8
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_QID_PRIORITIES			8
 #define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
 #ifdef FPGA
 #define DLB2_HZ					2000000
 #else
@@ -38,21 +34,8 @@
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
-/* Interrupt related macros */
-#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
-#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
-#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
-#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
-	DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
-#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
-	DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
-
-/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
-#define DLB2_INT_NON_CQ 0
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
 
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
@@ -65,18 +48,6 @@
 #define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
 #define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
 
-#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
-#define DLB2_VF_BASE_CQ_VECTOR_ID	     0
-#define DLB2_VF_LAST_CQ_VECTOR_ID	     30
-#define DLB2_VF_MBOX_VECTOR_ID		     31
-#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
-
-#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
-					     DLB2_MAX_NUM_DIR_PORTS + 1)
-
 /*
  * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
  * the PF driver.
@@ -97,7 +68,8 @@
 #define DLB2_DIR_PP_BASE       0x2000000
 #define DLB2_DIR_PP_STRIDE     0x1000
 #define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
 #define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
 
 struct dlb2_resource_id {
@@ -225,7 +197,7 @@ struct dlb2_sn_group {
 
 static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 {
-	u32 mask[] = {
+	const u32 mask[] = {
 		0x0000ffff,  /* 64 SNs per queue */
 		0x000000ff,  /* 128 SNs per queue */
 		0x0000000f,  /* 256 SNs per queue */
@@ -237,7 +209,7 @@ static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 
 static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
 {
-	u32 bound[6] = {16, 8, 4, 2, 1};
+	const u32 bound[] = {16, 8, 4, 2, 1};
 	u32 i;
 
 	for (i = 0; i < bound[group->mode]; i++) {
@@ -327,7 +299,7 @@ struct dlb2_function_resources {
 struct dlb2_hw_resources {
 	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
 	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
 	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
 };
 
@@ -344,11 +316,13 @@ struct dlb2_sw_mbox {
 };
 
 struct dlb2_hw {
+	uint8_t ver;
+
 	/* BAR 0 address */
-	void  *csr_kva;
+	void *csr_kva;
 	unsigned long csr_phys_addr;
 	/* BAR 2 address */
-	void  *func_kva;
+	void *func_kva;
 	unsigned long func_phys_addr;
 
 	/* Resource tracking */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ae5ef2fc3..7d31d9a85 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -212,7 +212,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 			      &port->func_list);
 	}
 
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
 		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
 
@@ -220,7 +220,9 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 	}
 
 	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
+	hw->pf.num_avail_dqed_entries =
+		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+
 	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
 
 	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
@@ -259,7 +261,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
 	}
 
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
 		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
 		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
 	}
@@ -2373,7 +2375,7 @@ static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
 	}
@@ -2506,7 +2508,8 @@ static void
 dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS;
+	int domain_offset = domain->id.phys_id *
+		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	struct dlb2_list_entry *iter;
 	struct dlb2_dir_pq_pair *queue;
 	RTE_SET_USED(iter);
@@ -2522,7 +2525,8 @@ dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
 
 		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS +
+			idx = queue->id.vdev_id *
+				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 				queue->id.virt_id;
 
 			DLB2_CSR_WR(hw,
@@ -2961,7 +2965,8 @@ __dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
+			+ virt_id;
 
 		DLB2_CSR_WR(hw,
 			    DLB2_SYS_VF_DIR_VPP2PP(offs),
@@ -4484,7 +4489,8 @@ dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 }
 
 static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(u32 id,
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
 			    bool vdev_req,
 			    struct dlb2_hw_domain *domain)
 {
@@ -4492,7 +4498,7 @@ dlb2_get_domain_used_dir_pq(u32 id,
 	struct dlb2_dir_pq_pair *port;
 	RTE_SET_USED(iter);
 
-	if (id >= DLB2_MAX_NUM_DIR_PORTS)
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
 		return NULL;
 
 	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
@@ -4538,7 +4544,8 @@ dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
 	if (args->queue_id != -1) {
 		struct dlb2_dir_pq_pair *queue;
 
-		queue = dlb2_get_domain_used_dir_pq(args->queue_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->queue_id,
 						    vdev_req,
 						    domain);
 
@@ -4618,7 +4625,7 @@ static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
 
 		r1.field.pp = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
 
@@ -4857,7 +4864,8 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
 
 	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(args->queue_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->queue_id,
 						   vdev_req,
 						   domain);
 	else
@@ -4913,7 +4921,7 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 	/* QID write permissions are turned on when the domain is started */
 	r0.field.vasqid_v = 0;
 
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES +
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
 		queue->id.phys_id;
 
 	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -4935,7 +4943,8 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
 		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES + queue->id.virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
+			+ queue->id.virt_id;
 
 		r3.field.vqid_v = 1;
 
@@ -5001,7 +5010,8 @@ dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
 	if (args->port_id != -1) {
 		struct dlb2_dir_pq_pair *port;
 
-		port = dlb2_get_domain_used_dir_pq(args->port_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->port_id,
 						   vdev_req,
 						   domain);
 
@@ -5072,7 +5082,8 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	}
 
 	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(args->port_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->port_id,
 						    vdev_req,
 						    domain);
 	else
@@ -5920,7 +5931,7 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 		r0.field.vasqid_v = 1;
 
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS +
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 			dir_queue->id.phys_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -5972,7 +5983,7 @@ int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
 
 	id = args->queue_id;
 
-	queue = dlb2_get_domain_used_dir_pq(id, vdev_req, domain);
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
 	if (queue == NULL) {
 		resp->status = DLB2_ST_INVALID_QID;
 		return -EINVAL;
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 1a7d8fc29..a937d0f9c 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -46,7 +46,7 @@ dlb2_pf_low_level_io_init(void)
 {
 	int i;
 	/* Addresses will be initialized at port create */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(DLB2_HW_V2_5); i++) {
 		/* First directed ports */
 		dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
 		dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
@@ -627,6 +627,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 
 		/* Probe the DLB2 PF layer */
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
@@ -642,7 +643,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		if (pci_dev->device.devargs) {
 			ret = dlb2_parse_params(pci_dev->device.devargs->args,
 						pci_dev->device.devargs->name,
-						&dlb2_args);
+						&dlb2_args,
+						dlb2->version);
 			if (ret) {
 				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
 					     ret, rte_errno);
@@ -654,6 +656,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						  event_dlb2_pf_name,
 						  &dlb2_args);
 	} else {
+		dlb2 = dlb2_pmd_priv(eventdev);
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 		ret = dlb2_secondary_eventdev_probe(eventdev,
 						    event_dlb2_pf_name);
 	}
@@ -683,6 +687,16 @@ static const struct rte_pci_id pci_id_dlb2_map[] = {
 	},
 };
 
+static const struct rte_pci_id pci_id_dlb2_5_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
+			       PCI_DEVICE_ID_INTEL_DLB2_5_PF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
 static int
 event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 		     struct rte_pci_device *pci_dev)
@@ -717,6 +731,40 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
 
 }
 
+static int
+event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
+					    sizeof(struct dlb2_eventdev),
+					    dlb2_eventdev_pci_init,
+					    event_dlb2_pf_name);
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+}
+
+static int
+event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_remove(pci_dev, NULL);
+
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+
+}
+
 static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.id_table = pci_id_dlb2_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
@@ -724,5 +772,15 @@ static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.remove = event_dlb2_pci_remove,
 };
 
+static struct rte_pci_driver pci_eventdev_dlb2_5_pmd = {
+	.id_table = pci_id_dlb2_5_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = event_dlb2_5_pci_probe,
+	.remove = event_dlb2_5_pci_remove,
+};
+
 RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
+
+RTE_PMD_REGISTER_PCI(event_dlb2_5_pf, pci_eventdev_dlb2_5_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_5_pf, pci_id_dlb2_5_map);
-- 
2.23.0



* [dpdk-dev] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-21 10:30   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 03/25] event/dlb2: add DLB v2.5 support to get_resources Timothy McDaniel
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

This commit adds support for DLB v2.5 probe-time hardware init,
and sets up a framework for incorporating the remaining
changes required to support DLB v2.5.

DLB v2.0 and DLB v2.5 are similar in many respects, but their
register offsets and definitions are different. As a result of these
differences, the low-level hardware functions must take the device
version into consideration. This requires passing the hardware version
to many of the low-level functions, so that the PMD can take the
appropriate action based on the device version.
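
A minimal sketch of that calling convention, assuming the driver's
struct dlb2_hw and the DLB2_MAX_NUM_DIR_PORTS() macro from this series
(the helper below is hypothetical; dlb2_get_domain_used_dir_pq in patch
01 gains a struct dlb2_hw argument for the same reason):

/* Hypothetical helper: the valid ID range depends on hw->ver
 * (64 directed ports on v2.0, 96 on v2.5), so the hardware handle
 * must be passed down to the low-level code.
 */
static inline bool dlb2_example_dir_port_id_ok(struct dlb2_hw *hw, u32 id)
{
	return id < DLB2_MAX_NUM_DIR_PORTS(hw->ver);
}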

To ease the transition and keep the individual patches small,
temporary files are added in this commit; their names contain "new".
The "new" files contain changes specific to a consolidated PMD that
supports both DLB v2.0 and DLB v2.5. Their sister files of the same
name (minus "new") contain the original DLB v2.0 specific code. The
intent is to remove code from the original files as it is ported to
the combined DLB v2.0/v2.5 PMD model and added to the "new" files over
a series of commits. At the end of the patch series, the old files
will be empty and the "new" files will contain the logic needed to
implement a single PMD that supports both DLB v2.0 and DLB v2.5. At
that point, the original DLB v2.0 specific files will be deleted, and
the "new" files will be renamed to replace them.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_priv.h                |    5 +
 drivers/event/dlb2/meson.build                |    1 +
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |  362 ++
 drivers/event/dlb2/pf/base/dlb2_mbox.h        |    1 -
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |    4 +
 drivers/event/dlb2/pf/base/dlb2_regs_new.h    | 4412 +++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  180 +-
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   36 -
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  271 +
 .../event/dlb2/pf/base/dlb2_resource_new.h    |   73 +
 drivers/event/dlb2/pf/dlb2_main.c             |   41 +-
 drivers/event/dlb2/pf/dlb2_main.h             |    4 +
 drivers/event/dlb2/pf/dlb2_pf.c               |    6 +-
 13 files changed, 5165 insertions(+), 231 deletions(-)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index b6de8d937..ad663a38e 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -116,6 +116,11 @@
 #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
 #define DLB2_TO_EV_PRIO(x) ((x) << 5)
 
+enum dlb2_hw_ver {
+	DLB2_HW_VER_2,
+	DLB2_HW_VER_2_5,
+};
+
 enum dlb2_hw_port_types {
 	DLB2_LDB_PORT,
 	DLB2_DIR_PORT,
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index f22638b8e..bded07e06 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,6 +14,7 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
+		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
new file mode 100644
index 000000000..d58aa94ad
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -0,0 +1,362 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
+
+#include "../../dlb2_priv.h"
+#include "dlb2_user.h"
+
+#include "dlb2_osdep_list.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
+
+#define DLB2_MAX_NUM_VDEVS			16
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
+#define DLB2_MAX_WEIGHT				255
+#define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
+#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
+#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
+#ifdef FPGA
+#define DLB2_HZ					2000000
+#else
+#define DLB2_HZ					800000000
+#endif
+
+#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
+#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
+
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
+#define DLB2_ALARM_HW_SOURCE_SYS 0
+#define DLB2_ALARM_HW_SOURCE_DLB 1
+
+#define DLB2_ALARM_HW_UNIT_CHP 4
+
+#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
+#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
+#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
+#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
+#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
+
+/*
+ * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
+ * the PF driver.
+ */
+#define DLB2_DRV_LDB_PP_BASE   0x2300000
+#define DLB2_DRV_LDB_PP_STRIDE 0x1000
+#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
+				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_DRV_DIR_PP_BASE   0x2200000
+#define DLB2_DRV_DIR_PP_STRIDE 0x1000
+#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
+				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+#define DLB2_LDB_PP_BASE       0x2100000
+#define DLB2_LDB_PP_STRIDE     0x1000
+#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
+				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
+#define DLB2_DIR_PP_BASE       0x2000000
+#define DLB2_DIR_PP_STRIDE     0x1000
+#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
+
+struct dlb2_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vdev_owned;
+	u8 vdev_id;
+};
+
+struct dlb2_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
+{
+	return list->bound - list->base - list->offset;
+}
+
+struct dlb2_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 ts_flag:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb2_ldb_queue {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 num_qid_inflights;
+	u32 aqed_limit;
+	u32 sn_group; /* sn == sequence number */
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 sn_cfg_valid;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/*
+ * Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb2_dir_pq_pair {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+};
+
+enum dlb2_qid_map_state {
+	/* The slot does not contain a valid queue mapping */
+	DLB2_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB2_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB2_QUEUE_MAP_IN_PROG,
+	/* The driver is unmapping a queue from this slot */
+	DLB2_QUEUE_UNMAP_IN_PROG,
+	/*
+	 * The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
+};
+
+struct dlb2_ldb_port_qid_map {
+	enum dlb2_qid_map_state state;
+	u16 qid;
+	u16 pending_qid;
+	u8 priority;
+	u8 pending_priority;
+};
+
+struct dlb2_ldb_port {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb2_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
+{
+	const u32 mask[] = {
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
+{
+	const u32 bound[] = {16, 8, 4, 2, 1};
+	u32 i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void
+dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb2_hw_domain {
+	struct dlb2_function_resources *parent_func;
+	struct dlb2_list_entry func_list;
+	struct dlb2_list_head used_ldb_queues;
+	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head used_dir_pq_pairs;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u32 num_used_aqed_entries;
+	struct dlb2_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb2_bitmap;
+
+struct dlb2_function_resources {
+	struct dlb2_list_head avail_domains;
+	struct dlb2_list_head used_domains;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	struct dlb2_bitmap *avail_hist_list_entries;
+	u32 num_avail_domains;
+	u32 num_avail_ldb_queues;
+	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	u32 num_avail_dir_pq_pairs;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u8 locked; /* (VDEV only) */
+};
+
+/*
+ * After initialization, each resource in dlb2_hw_resources is located in one
+ * of the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a dlb2 scheduling domain.
+ * -- A VDEV's available resources list. These are VDEV-owned unconfigured
+ *	resources not allocated to a dlb2 scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VDEV or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb2_hw_resources {
+	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
+	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
+	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
+
+struct dlb2_mbox {
+	u32 *mbox;
+	u32 *isr_in_progress;
+};
+
+struct dlb2_sw_mbox {
+	struct dlb2_mbox vdev_to_pf;
+	struct dlb2_mbox pf_to_vdev;
+	void (*pf_to_vdev_inject)(void *arg);
+	void *pf_to_vdev_inject_arg;
+};
+
+struct dlb2_hw {
+	uint8_t ver;
+
+	/* BAR 0 address */
+	void *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb2_hw_resources rsrcs;
+	struct dlb2_function_resources pf;
+	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
+	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
+	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
+
+	/* Virtualization */
+	int virt_mode;
+	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
+	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
+};
+
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
index ce462c089..c6a562f13 100644
--- a/drivers/event/dlb2/pf/base/dlb2_mbox.h
+++ b/drivers/event/dlb2/pf/base/dlb2_mbox.h
@@ -6,7 +6,6 @@
 #define __DLB2_BASE_DLB2_MBOX_H
 
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
 
 #define DLB2_MBOX_INTERFACE_VERSION 1
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index c4c34eba5..747f680b9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -16,7 +16,11 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
+
+/* TEMPORARY inclusion of both headers for merge */
+#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
+
 #include "../../dlb2_log.h"
 #include "../../dlb2_user.h"
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
new file mode 100644
index 000000000..593243d63
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
@@ -0,0 +1,4412 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_REGS_NEW_H
+#define __DLB2_REGS_NEW_H
+
+#include "dlb2_osdep_types.h"
+
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
+
+#define DLB2_IOSF_SMON_COMP_MASK1(x) \
+	(0x8002024 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_IOSF_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_IOSF_SMON_COMP_MASK0(x) \
+	(0x8002020 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_IOSF_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_IOSF_SMON_MAX_TMR(x) \
+	(0x800201c + (x) * 0x40)
+#define DLB2_IOSF_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_IOSF_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_IOSF_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_IOSF_SMON_TMR(x) \
+	(0x8002018 + (x) * 0x40)
+#define DLB2_IOSF_SMON_TMR_RST 0x0
+
+#define DLB2_IOSF_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_IOSF_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1(x) \
+	(0x8002014 + (x) * 0x40)
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0(x) \
+	(0x8002010 + (x) * 0x40)
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_IOSF_SMON_COMPARE1(x) \
+	(0x800200c + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMPARE1_RST 0x0
+
+#define DLB2_IOSF_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_IOSF_SMON_COMPARE0(x) \
+	(0x8002008 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMPARE0_RST 0x0
+
+#define DLB2_IOSF_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_IOSF_SMON_CFG1(x) \
+	(0x8002004 + (x) * 0x40)
+#define DLB2_IOSF_SMON_CFG1_RST 0x0
+
+#define DLB2_IOSF_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_IOSF_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_IOSF_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_IOSF_SMON_CFG1_MODE0_LOC	0
+#define DLB2_IOSF_SMON_CFG1_MODE1_LOC	8
+#define DLB2_IOSF_SMON_CFG1_RSVD_LOC		16
+
+#define DLB2_IOSF_SMON_CFG0(x) \
+	(0x8002000 + (x) * 0x40)
+#define DLB2_IOSF_SMON_CFG0_RST 0x40000000
+
+#define DLB2_IOSF_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_IOSF_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_IOSF_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_IOSF_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_IOSF_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_IOSF_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_IOSF_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_IOSF_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_IOSF_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_IOSF_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_IOSF_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_IOSF_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_IOSF_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_IOSF_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_IOSF_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_IOSF_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_IOSF_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_IOSF_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_IOSF_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_IOSF_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_IOSF_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_IOSF_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_IOSF_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_IOSF_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
+	(0x20 + (x) * 0x4)
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
+#define DLB2_SYS_TOTAL_VAS_RST 0x20
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
+
+#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
+#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
+
+#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
+#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
+
+#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
+#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
+
+#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
+#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
+#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
+
+#define DLB2_SYS_VF_LDB_VPP_V(x) \
+	(0x10000f00 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VPP2PP(x) \
+	(0x10000f04 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_DIR_VPP_V(x) \
+	(0x10000f08 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VPP2PP(x) \
+	(0x10000f0c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_LDB_VQID_V(x) \
+	(0x10000f10 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VQID2QID(x) \
+	(0x10000f14 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_QID2VQID(x) \
+	(0x10000f18 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID2VQID_RST 0x0
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
+
+#define DLB2_SYS_VF_DIR_VQID_V(x) \
+	(0x10000f1c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VQID2QID(x) \
+	(0x10000f20 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_VASQID_V(x) \
+	(0x10000f24 + (x) * 0x1000)
+#define DLB2_SYS_LDB_VASQID_V_RST 0x0
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_VASQID_V(x) \
+	(0x10000f28 + (x) * 0x1000)
+#define DLB2_SYS_DIR_VASQID_V_RST 0x0
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_ALARM_VF_SYND2(x) \
+	(0x10000f48 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
+
+#define DLB2_SYS_ALARM_VF_SYND1(x) \
+	(0x10000f44 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_VF_SYND0(x) \
+	(0x10000f40 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
+
+#define DLB2_SYS_LDB_QID_CFG_V(x) \
+	(0x10000f58 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_QID_ITS(x) \
+	(0x10000f54 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_ITS_RST 0x0
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_QID_V(x) \
+	(0x10000f50 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_ITS(x) \
+	(0x10000f64 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_ITS_RST 0x0
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_V(x) \
+	(0x10000f60 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_V_RST 0x0
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
+	(0x10000fa8 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
+#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_LDB_CQ_AT(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AT_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_CQ_ISR(x) \
+	(0x10000f98 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB2_CQ_ISR_MODE_DIS  0
+#define DLB2_CQ_ISR_MODE_MSI  1
+#define DLB2_CQ_ISR_MODE_MSIX 2
+#define DLB2_CQ_ISR_MODE_ADI  3
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
+	(0x10000f94 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_PP_V(x) \
+	(0x10000f90 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP_V_RST 0x0
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_PP2VDEV(x) \
+	(0x10000f8c + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_LDB_PP2VAS(x) \
+	(0x10000f88 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VAS_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
+	(0x10000f84 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
+	(0x10000f80 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_DIR_CQ_FMT(x) \
+	(0x10000fec + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
+	(0x10000fe8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
+#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_DIR_CQ_AT(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_DIR_CQ_ISR(x) \
+	(0x10000fd8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
+	(0x10000fd4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_DIR_PP_V(x) \
+	(0x10000fd0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP_V_RST 0x0
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_PP2VDEV(x) \
+	(0x10000fcc + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_DIR_PP2VAS(x) \
+	(0x10000fc8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VAS_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
+	(0x10000fc4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
+	(0x10000fc0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
+
+#define DLB2_SYS_MSIX_ACK 0x10000400
+#define DLB2_SYS_MSIX_ACK_RST 0x0
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
+#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_MODE 0x10000408
+#define DLB2_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB2_MSIX_MODE_PACKED     0
+#define DLB2_MSIX_MODE_COMPRESSED 1
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
+
+#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
+#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
+	(0x20080000 + (x) * 0x1000)
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_ATM_QID2CQIDIX_00(x) \
+	(0x30080000 + (x) * 0x1000)
+#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
+#define DLB2_ATM_QID2CQIDIX(x, y) \
+	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_ATM_QID2CQIDIX_NUM 16
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
+#define DLB2_CHP_ORD_QID_SN_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
+#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
+	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
+#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
+	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
+#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
+#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
+#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
+#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
+#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
+#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
+#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
+#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
+#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
+	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
+#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
+	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
+#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
+#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
+#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
+#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
+#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
+#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
+#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
+#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_DP_DIR_CSR_CTRL 0x54000010
+#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
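+/*
+ * Editorial note (not part of the original register map): registers whose
+ * offset moved between DLB v2.0 and v2.5 are defined three times -- a
+ * DLB2_V2* address, a DLB2_V2_5* address, and a DLB2_*(ver) wrapper that
+ * picks between the two based on the probed hardware version.  A hedged
+ * usage sketch; DLB2_CSR_WR() and the hw/ver arguments are assumed from
+ * the PMD's low-level helpers, not defined here:
+ *
+ *	DLB2_CSR_WR(hw,
+ *		    DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(hw->ver),
+ *		    DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST);
+ *
+ * The same call site then programs 0x84000000 on v2.0 and 0x74000000 on
+ * v2.5 hardware.
+ */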
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
+	(0x96000000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
+	(0x96010000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
+#define DLB2_LSP_CQ2PRIOV_RST 0x0
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
+#define DLB2_V2LSP_CQ2QID0(x) \
+	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
+#define DLB2_LSP_CQ2QID0_RST 0x0
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
+	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
+#define DLB2_LSP_CQ2QID1_RST 0x0
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
+	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
+#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
+	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
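+/*
+ * Editorial note (not part of the original register map): for a few
+ * registers only the field layout differs between hardware versions, so
+ * the masks and _LOC offsets above carry an explicit _V2 or _V2_5 suffix
+ * and callers must pick the set matching the probed device -- here the
+ * v2.5 layout drops the IGNORE_DEPTH bit and widens RSVD0.  A minimal
+ * sketch, assuming a hw->ver field as used by the address macros:
+ *
+ *	if (hw->ver == DLB2_HW_V2)
+ *		reg |= DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2;
+ */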
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
+	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
+#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
+	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
+	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
+	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
+	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
+	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
+#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
+	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
+#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
+	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
+#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
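+/*
+ * Editorial note (not part of the original register map): the QID-to-CQ
+ * index map is a register file rather than a single register --
+ * DLB2_LSP_QID2CQIDIX() takes the queue ID x (stride 0x1000) plus a second
+ * index y in [0, DLB2_LSP_QID2CQIDIX_NUM) with stride 0x80000.  Worked
+ * address computation for v2.0 hardware, x = 3, y = 2:
+ *
+ *	DLB2_LSP_QID2CQIDIX(DLB2_HW_V2, 3, 2)
+ *		= 0xa0e00000 + 3 * 0x1000 + 2 * 0x80000
+ *		= 0xa0f03000
+ */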
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
+	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
+#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
+	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
+#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
+#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
+#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
+#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
+#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
+#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
+#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
+	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
+
+#endif /* __DLB2_REGS_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 7d31d9a85..cd62de3af 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -48,19 +48,6 @@ static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
 }
 
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -131,171 +118,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-int dlb2_resource_init(struct dlb2_hw *hw)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. This is application
-	 * dependent, but the driver interleaves port IDs as much as possible
-	 * to reduce the likelihood of this. This initial allocation maximizes
-	 * the average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	/* Zero-out resource tracking data structures */
-	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
-	memset(&hw->pf, 0, sizeof(hw->pf));
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries =
-		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
-{
-	union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
-
-	r0.field.disable = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
-}
-
 static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
@@ -5877,7 +5699,7 @@ static void dlb2_log_start_domain(struct dlb2_hw *hw,
 int
 dlb2_hw_start_domain(struct dlb2_hw *hw,
 		     u32 domain_id,
-		     __attribute((unused)) struct dlb2_start_domain_args *arg,
+		     struct dlb2_start_domain_args *arg,
 		     struct dlb2_cmd_response *resp,
 		     bool vdev_req,
 		     unsigned int vdev_id)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 503fdf317..2e13193bb 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -6,35 +6,8 @@
 #define __DLB2_RESOURCE_H
 
 #include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
 #include "dlb2_osdep_types.h"
 
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
@@ -1485,15 +1458,6 @@ int dlb2_notify_vf(struct dlb2_hw *hw,
  */
 int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
 
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
-
 /**
  * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
new file mode 100644
index 000000000..af68655b4
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "dlb2_user.h"
+
+#include "dlb2_hw_types_new.h"
+#include "dlb2_mbox.h"
+#include "dlb2_osdep.h"
+#include "dlb2_osdep_bitmap.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+
+#include "../../dlb2_priv.h"
+#include "../../dlb2_inline_fns.h"
+
+#define DLB2_DOM_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, domain_list)
+
+#define DLB2_FUNC_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, func_list)
+
+#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function need only be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
new file mode 100644
index 000000000..51f31543c
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_RESOURCE_NEW_H
+#define __DLB2_RESOURCE_NEW_H
+
+#include "dlb2_user.h"
+#include "dlb2_osdep_types.h"
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
+#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a9d407f2f..5c0640b3c 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,9 +13,12 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_resource.h"
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "base/dlb2_regs_new.h"
+#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_resource_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_regs.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
 #include "../dlb2_priv.h"
@@ -103,25 +106,34 @@ dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
 
 static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
 {
-	dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
+	int version;
+	version = DLB2_HW_DEVICE_FROM_PCI_ID(dlb2_dev->pdev);
+
+	dlb2_clr_pmcsr_disable(&dlb2_dev->hw, version);
 }
 
 #define DLB2_READY_RETRY_LIMIT 1000
-static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
+static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
+					 int dlb_version)
 {
 	u32 retries = 0;
 
 	/* Allow at least 1s for the device to become active after power-on */
 	for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
-		union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
-		union dlb2_cfg_mstr_cfg_pm_status pm_st;
+		u32 idle_val;
+		u32 idle_dlb_func_idle;
+		u32 pm_st_val;
+		u32 pm_st_pmsm;
 		u32 addr;
 
-		addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
-		pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
-		idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
+		addr = DLB2_CM_CFG_PM_STATUS(dlb_version);
+		pm_st_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		addr = DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(dlb_version);
+		idle_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		idle_dlb_func_idle = idle_val &
+			DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE;
+		pm_st_pmsm = pm_st_val & DLB2_CM_CFG_PM_STATUS_PMSM;
+		if (pm_st_pmsm && idle_dlb_func_idle)
 			break;
 
 		rte_delay_ms(1);
@@ -141,6 +153,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 {
 	struct dlb2_dev *dlb2_dev;
 	int ret = 0;
+	int dlb_version = 0;
 
 	DLB2_INFO(dlb2_dev, "probe\n");
 
@@ -152,6 +165,8 @@ dlb2_probe(struct rte_pci_device *pdev)
 		goto dlb2_dev_malloc_fail;
 	}
 
+	dlb_version = DLB2_HW_DEVICE_FROM_PCI_ID(pdev);
+
 	/* PCI Bus driver has already mapped bar space into process.
 	 * Save off our IO register and FUNC addresses.
 	 */
@@ -191,7 +206,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	 */
 	dlb2_pf_enable_pm(dlb2_dev);
 
-	ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
+	ret = dlb2_pf_wait_for_device_ready(dlb2_dev, dlb_version);
 	if (ret)
 		goto wait_for_device_ready_fail;
 
@@ -203,7 +218,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
-	ret = dlb2_resource_init(&dlb2_dev->hw);
+	ret = dlb2_resource_init(&dlb2_dev->hw, dlb_version);
 	if (ret)
 		goto resource_init_fail;
 
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index f3bee71fb..01a24e8a4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -15,7 +15,11 @@
 #define PAGE_SIZE (sysconf(_SC_PAGESIZE))
 #endif
 
+#ifdef DLB2_USE_NEW_HEADERS
+#include "base/dlb2_hw_types_new.h"
+#else
 #include "base/dlb2_hw_types.h"
+#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index a937d0f9c..9b40e5eb3 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -31,13 +31,15 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types.h"
+#include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource.h"
+#include "base/dlb2_resource_new.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 03/25] event/dlb2: add DLB v2.5 support to get_resources
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain Timothy McDaniel
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

DLB v2.5 uses a new credit scheme in which directed and load-balanced
credits are drawn from a single combined pool, rather than from the
separate directed and load-balanced credit pools used by DLB v2.0.
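
For context, the unified pool is exposed through a union in struct
dlb2_get_num_resources_args (see the dlb2_user.h hunk below): DLB v2.0
reports num_ldb_credits and num_dir_credits, while DLB v2.5 reports a
single num_credits field. The following is a minimal sketch, not part
of the patch, of how a caller might normalize the two layouts; the
helper name and version parameter are illustrative only, and it assumes
dlb2_user.h and the driver's DLB2_HW_V2_5 define are in scope.

/* Sketch only: not part of this patch. */
static __u32
dlb2_total_credits(const struct dlb2_get_num_resources_args *arg,
		   __u32 version)
{
	if (version == DLB2_HW_V2_5)
		return arg->num_credits; /* single combined credit pool */

	/* DLB v2.0: separate load-balanced and directed credit pools */
	return arg->num_ldb_credits + arg->num_dir_credits;
}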

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                     | 20 ++++--
 drivers/event/dlb2/dlb2_user.h                | 14 +++-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 48 --------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 66 +++++++++++++++++++
 4 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 826b68121..769bcb8af 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -132,17 +132,25 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	evdev_dlb2_default_info.max_event_ports =
 		dlb2->hw_rsrc_query_results.num_ldb_ports;
 
-	evdev_dlb2_default_info.max_num_events =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	/* Save off values used when creating the scheduling domain. */
 
 	handle->info.num_sched_domains =
 		dlb2->hw_rsrc_query_results.num_sched_domains;
 
-	handle->info.hw_rsrc_max.nb_events_limit =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	handle->info.hw_rsrc_max.num_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues +
 		dlb2->hw_rsrc_query_results.num_dir_ports;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index f4bda7822..b7d125dec 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -195,9 +195,12 @@ struct dlb2_create_sched_domain_args {
  *	contiguous range of history list entries.
  * - num_ldb_credits: Amount of available load-balanced QE storage.
  * - num_dir_credits: Amount of available directed QE storage.
+ * - response.status: Detailed error code. In certain cases, such as if the
+ *	ioctl request arg is invalid, the driver won't set status.
  */
 struct dlb2_get_num_resources_args {
 	/* Output parameters */
+	struct dlb2_cmd_response response;
 	__u32 num_sched_domains;
 	__u32 num_ldb_queues;
 	__u32 num_ldb_ports;
@@ -206,8 +209,15 @@ struct dlb2_get_num_resources_args {
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
 	__u32 max_contiguous_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 };
 
 /*
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index cd62de3af..5b8723aaf 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -59,54 +59,6 @@ void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-
-	arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-
-	return 0;
-}
-
 void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index af68655b4..b0fd37a55 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -269,3 +269,69 @@ void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
 	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
 }
 
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (2 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 03/25] event/dlb2: add DLB v2.5 support to get_resources Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-04-03 10:22   ` Jerin Jacob
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 05/25] event/dlb2: add DLB v2.5 support to domain reset Timothy McDaniel
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the domain creation logic to account for the DLB v2.5 credit
scheme, the new register map, and the new register access macros.
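
The "new register access macros" replace the old bitfield unions with
flat mask and bit-position (_LOC) macros from dlb2_regs_new.h, plus
version-parameterized address macros. Below is a minimal sketch, not
part of the patch, of reading one field in the new style; the helper
name is illustrative only, while DLB2_CSR_RD and the
DLB2_CHP_CFG_VAS_CRD macros come from the driver's base code.

/* Sketch only: extract the COUNT field with a mask and _LOC shift. */
static u32 dlb2_read_vas_credit_count(struct dlb2_hw *hw, u32 vas_id)
{
	u32 val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_VAS_CRD(vas_id));

	return (val & DLB2_CHP_CFG_VAS_CRD_COUNT) >>
		DLB2_CHP_CFG_VAS_CRD_COUNT_LOC;
}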

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_user.h                |  13 +-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++
 3 files changed, 707 insertions(+), 647 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index b7d125dec..9760e9bda 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -18,6 +18,7 @@ enum dlb2_error {
 	DLB2_ST_LDB_QUEUES_UNAVAILABLE,
 	DLB2_ST_LDB_CREDITS_UNAVAILABLE,
 	DLB2_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB2_ST_CREDITS_UNAVAILABLE,
 	DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
 	DLB2_ST_INVALID_DOMAIN_ID,
 	DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
@@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
 	"DLB2_ST_LDB_QUEUES_UNAVAILABLE",
 	"DLB2_ST_LDB_CREDITS_UNAVAILABLE",
 	"DLB2_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB2_ST_CREDITS_UNAVAILABLE",
 	"DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
 	"DLB2_ST_INVALID_DOMAIN_ID",
 	"DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
@@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
 	__u32 num_dir_ports;
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 	__u8 cos_strict;
 	__u8 padding1[3];
 };
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 5b8723aaf..5d296f725 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -33,21 +33,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -70,636 +55,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
-	union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
-
-	r0.field.count = domain->num_ldb_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
-
-	r1.field.count = domain->num_dir_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret < 0)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_credits(rsrcs,
-				      domain,
-				      args->num_ldb_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_credits(rsrcs,
-				      domain,
-				      args->num_dir_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret < 0)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-		    args->num_ldb_credits);
-	DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-		    args->num_dir_credits);
-}
-
-/**
- * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
- *	domain and its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
-	if (ret)
-		return ret;
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available domains\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (domain->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_domains contains configured domains.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index b0fd37a55..4d679a0a9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -335,3 +335,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
 	}
 	return 0;
 }
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
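
The search above runs in three passes over the free list for the requested
class of service: first a port whose two neighbors are both owned by another
domain, then a port with one such neighbor and one unallocated neighbor, then
a port with both neighbors unallocated, and finally the head of the free list.
The load-balanced port ID space wraps around, so the first and last ports
count as neighbors of each other. A small standalone sketch of that
wrap-around arithmetic (the port count below is a stand-in, not the
DLB2_MAX_NUM_LDB_PORTS value):

#include <stdio.h>

#define MAX_LDB_PORTS 64	/* stand-in for DLB2_MAX_NUM_LDB_PORTS */

int main(void)
{
	unsigned int ids[] = { 0, 1, MAX_LDB_PORTS - 1 };
	unsigned int i;

	for (i = 0; i < sizeof(ids) / sizeof(ids[0]); i++) {
		unsigned int id = ids[i];
		unsigned int next = (id == MAX_LDB_PORTS - 1) ? 0 : id + 1;
		unsigned int prev = (id == 0) ? MAX_LDB_PORTS - 1 : id - 1;

		/* ports 0 and MAX_LDB_PORTS - 1 are treated as adjacent */
		printf("port %2u: prev=%2u next=%2u\n", id, prev, next);
	}
	return 0;
}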
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from a specific class-of-service, but
+		 * fall back to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
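
When cos_strict is not set, the fallback above is a plain rotation that starts
at the requested class of service and tries each of the four classes once
before returning the error. A minimal standalone illustration of the
(i + k) % DLB2_NUM_COS_DOMAINS ordering, with the constant replaced by a local
define:

#include <stdio.h>

#define NUM_COS 4	/* local stand-in for DLB2_NUM_COS_DOMAINS */

int main(void)
{
	unsigned int i, k;

	for (i = 0; i < NUM_COS; i++) {
		printf("requested CoS %u, try order:", i);
		for (k = 0; k < NUM_COS; k++)
			printf(" %u", (i + k) % NUM_COS);
		printf("\n");
	}
	return 0;
}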
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
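
History list entries are handed out as one contiguous block: the allocator
looks for a run of set bits of the requested length in the availability
bitmap, records the base of that run, and clears the range. The toy program
below mimics that carve-out on a single 64-bit word; it only illustrates the
idea and is not the driver's dlb2_bitmap implementation.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the find-set-bit-range/clear-range pair. */
static int carve_range(uint64_t *map, unsigned int len)
{
	unsigned int base, run = 0;

	for (base = 0; base < 64; base++) {
		run = (*map & (1ULL << base)) ? run + 1 : 0;
		if (run == len) {
			unsigned int start = base - len + 1;

			*map &= ~(((1ULL << len) - 1) << start);
			return (int)start;
		}
	}
	return -1;
}

int main(void)
{
	uint64_t avail = UINT64_MAX;

	printf("first carve of 8 entries:  base %d\n", carve_range(&avail, 8));
	printf("second carve of 8 entries: base %d\n", carve_range(&avail, 8));
	return 0;
}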
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
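
The branch on hw->ver above is where the two credit schemes meet: DLB v2.0
draws from separate load-balanced (QED) and directed (DQED) pools, while
DLB v2.5 draws from one combined pool. A hypothetical caller would size the
argument fields along these lines (the helper and the counts are made up for
illustration; only the field and version names come from this series):

/* Illustrative only: sizing the credit fields per hardware version. */
static void example_fill_credit_args(struct dlb2_hw *hw,
				     struct dlb2_create_sched_domain_args *args)
{
	if (hw->ver == DLB2_HW_V2) {
		/* v2.0: separate load-balanced and directed credit pools */
		args->num_ldb_credits = 1024;
		args->num_dir_credits = 512;
	} else {
		/* v2.5: one combined credit pool */
		args->num_credits = 1536;
	}
}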
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the resource request is
+ *	    invalid (e.g. load-balanced queues with no load-balanced ports).
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to verify args.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
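
To show the intended flow end to end, here is a rough sketch of how a
PF-originated request might exercise this entry point on DLB v2.5. The wrapper
and the resource counts are made up for illustration; only
dlb2_hw_create_sched_domain(), the argument structure and the response fields
come from this series.

/* Illustrative only: a PF-originated domain creation request on v2.5. */
static int example_pf_create_domain(struct dlb2_hw *hw)
{
	struct dlb2_create_sched_domain_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_ldb_queues = 2;
	args.num_ldb_ports = 4;		/* any class of service */
	args.num_dir_ports = 2;
	args.num_atomic_inflights = 64;
	args.num_hist_list_entries = 128;
	args.num_credits = 2048;	/* single combined pool on v2.5 */

	/* PF request: vdev_req is false and vdev_id is ignored. */
	ret = dlb2_hw_create_sched_domain(hw, &args, &resp, false, 0);
	if (ret) {
		/* resp.status holds a dlb2_error code unless ret is -EFAULT. */
		return ret;
	}

	/* For a PF request, resp.id is the physical domain ID. */
	return (int)resp.id;
}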
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 05/25] event/dlb2: add DLB v2.5 support to domain reset
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (3 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 06/25] event/dlb2: add DLB V2.5 support to create ldb queue Timothy McDaniel
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Convert the domain reset logic to use the new register map and new
register access macros, and move it from dlb2_resource.c to
dlb2_resource_new.c as part of adding DLB v2.5 support.
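
For reference, the conversion swaps the union-based accessors of the old
register map for plain u32 values built up with the DLB2_BITS_SET() field
macros of the new one. The two fragments below contrast the styles: the first
is existing code this patch removes, the second is new-map code already added
to dlb2_resource_new.c by the sched-domain patch earlier in the series, so
they touch different registers.

/* old register map: per-register union with named bit fields */
union dlb2_lsp_cq_dir_dsbl reg;

reg.field.disabled = 1;
DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);

/* new register map: plain u32 plus DLB2_BITS_SET() */
u32 reg = 0;

DLB2_BITS_SET(reg, domain->num_ldb_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);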

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |    1 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1494 ----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 2551 +++++++++++++++++
 3 files changed, 2552 insertions(+), 1494 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
index d58aa94ad..0f418ef5d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -187,6 +187,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 5d296f725..830c74d0a 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -66,69 +66,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			     struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
 static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_dir_pq_pair *port)
 {
@@ -141,37 +78,6 @@ static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	int ret;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		ret = dlb2_drain_dir_cq(hw, port);
-		if (ret < 0)
-			return ret;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -183,63 +89,6 @@ static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count;
 }
 
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -272,105 +121,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-
-	return r0.field.count;
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.token_count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
-static int dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			ret = dlb2_drain_ldb_cq(hw, port);
-			if (ret < 0)
-				return ret;
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-
-	return 0;
-}
-
 static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_ldb_queue *queue)
 {
@@ -389,90 +139,6 @@ static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count + r1.field.count + r2.field.count;
 }
 
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1456,1166 +1122,6 @@ dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
 	return domain->num_pending_removals;
 }
 
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_dir_vpp_v r1;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_ldb_vpp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_ldb_cq_int_enb r0 = { {0} };
-	union dlb2_chp_ldb_cq_wd_enb r1 = { {0} };
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-				    r0.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
-				    r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_dir_cq_int_enb r0 = { {0} };
-	union dlb2_chp_dir_cq_wd_enb r1 = { {0} };
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-			    r0.val);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
-			    r1.val);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		union dlb2_sys_ldb_qid2vqid r1 = { {0} };
-		union dlb2_sys_vf_ldb_vqid_v r2 = { {0} };
-		union dlb2_sys_vf_ldb_vqid2qid r3 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    r1.val);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID_V(idx),
-				    r2.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID2QID(idx),
-				    r3.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id *
-		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		union dlb2_sys_vf_dir_vqid_v r1 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r2 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id *
-				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID_V(idx),
-				    r1.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID2QID(idx),
-				    r2.val);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_sn_chk_enbl r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.en = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int i;
-
-			for (i = 0; i < DLB2_MAX_CQ_COMP_CHECK_LOOPS; i++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (i == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	union dlb2_sys_dir_pp_v r1;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    r1.val);
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_ldb_pp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queue's inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
-			+ virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_PIPE_GRP_0_SLT_SHFT(queue->sn_slot);
-			offs[1] = DLB2_RO_PIPE_GRP_1_SLT_SHFT(queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-	domain->num_ldb_credits = 0;
-
-	rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-	domain->num_dir_credits = 0;
-
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (!dlb2_list_empty(&domain->used_ldb_ports[i]))
-			break;
-	}
-
-	if (i == DLB2_NUM_COS_DOMAINS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i], typeof(*port));
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - Reset a DLB scheduling domain and its associated
- *	hardware resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Note: User software *must* stop sending to this domain's producer ports
- * before invoking this function, otherwise undefined behavior will result.
- *
- * Return: returns < 0 on error, 0 otherwise.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain  == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, false);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	ret = dlb2_domain_reset_software_state(hw, domain);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
 unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
 {
 	int i, num = 0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 4d679a0a9..de34f5cce 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -1031,3 +1031,2554 @@ int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
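+		/*
+		 * hcw_mem is deliberately oversized so that rounding
+		 * &hcw_mem[4] down to a 64B boundary still lands inside the
+		 * array, with room for the 64B movdir64b store issued below.
+		 */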
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
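+		/* lock_id carries the number of tokens to return minus one */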
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
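+/*
+ * Update a CQ slot's QID-map state. Legal transitions move the slot between
+ * the UNMAPPED, MAPPED, MAP_IN_PROG, UNMAP_IN_PROG, and
+ * UNMAP_IN_PROG_PENDING_MAP states and keep the per-queue, per-port, and
+ * per-domain mapping/pending counters in sync; any other transition is
+ * reported as an internal error.
+ */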
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply: they only toggle CQs whose slot for the queue
+ * is in the MAPPED state, and they leave software-disabled ports untouched.
+ * They should only be called by the dynamic CQ mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
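+	/* Slots 0-3 live in CQ2QID0 and slots 4-7 in CQ2QID1 */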
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
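+	/*
+	 * Each QID2CQIDIX register covers four CQs: the register is indexed
+	 * by p->id.phys_id / 4 and the CQ's bit field within it is selected
+	 * by p->id.phys_id % 4 below.
+	 */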
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
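+/*
+ * Flag the {CQ, slot} as having work: the atomic "has work" indication is
+ * set only if the queue's AQED active count is non-zero, and the non-atomic
+ * indication only if its enqueue count is non-zero.
+ */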
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+				    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port, to
+	 * prevent spurious schedules from causing the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
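+/*
+ * Try to complete any in-progress QID mappings on this port. A slot's map
+ * can only be finished once the target queue's inflight count has drained
+ * to zero; otherwise the slot is left in MAP_IN_PROG to be retried later.
+ */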
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0/2.5 scheduling domain and its
+ * associated resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the stated pre-condition is
+ *	    not met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
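/*
 * Illustrative caller sketch (not part of this patch): the expected usage of
 * dlb2_reset_domain() from PF driver context. The caller must already have
 * satisfied the documented pre-condition, i.e. no more QEs are being sent
 * through the domain's producer ports.
 */
static int example_pf_reset_domain(struct dlb2_hw *hw, u32 domain_id)
{
	int ret;

	/* PF-initiated request: vdev_req is false, so vdev_id is not used. */
	ret = dlb2_reset_domain(hw, domain_id, false, 0);
	if (ret)
		return ret;	/* -EINVAL, -EFAULT or -ETIMEDOUT, see above */

	return 0;
}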
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 06/25] event/dlb2: add DLB V2.5 support to create ldb queue
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (4 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 05/25] event/dlb2: add DLB v2.5 support to domain reset Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 07/25] event/dlb2: add DLB v2.5 support to create ldb port Timothy McDaniel
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Updated low level hardware functions to add DLB 2.5 support
for creating load balanced queues.
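
For illustration, the register-access style changes from per-register bitfield
unions to flat u32 values built with the new field macros, with hw->ver passed
to the register macros whose offsets differ between v2.0 and v2.5. A
before/after sketch, taken from the queue inflight-limit write in this patch:

	/* old style (dlb2_resource.c) */
	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };

	r4.field.limit = args->num_qid_inflights;
	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);

	/* new style (dlb2_resource_new.c) */
	u32 reg = 0;

	DLB2_BITS_SET(reg, args->num_qid_inflights,
		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
						  queue->id.phys_id), reg);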

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 397 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 391 +++++++++++++++++
 2 files changed, 391 insertions(+), 397 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 830c74d0a..5a8251ee0 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1150,403 +1150,6 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 	return num;
 }
 
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_vf_ldb_vqid_v r0 = { {0} };
-	union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} };
-	union dlb2_sys_ldb_qid2vqid r2 = { {0} };
-	union dlb2_sys_ldb_vasqid_v r3 = { {0} };
-	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };
-	union dlb2_lsp_qid_aqed_active_lim r5 = { {0} };
-	union dlb2_aqed_pipe_qid_hid_width r6 = { {0} };
-	union dlb2_sys_ldb_qid_its r7 = { {0} };
-	union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} };
-	union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} };
-	union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} };
-	union dlb2_chp_ord_qid_sn_map r11 = { {0} };
-	union dlb2_sys_ldb_qid_cfg_v r12 = { {0} };
-	union dlb2_sys_ldb_qid_v r13 = { {0} };
-
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r3.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	r4.field.limit = args->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
-
-	r5.field.limit = queue->aqed_limit;
-
-	if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES)
-		r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
-		    r5.val);
-
-	switch (args->lock_id_comp_level) {
-	case 64:
-		r6.field.compress_code = 1;
-		break;
-	case 128:
-		r6.field.compress_code = 2;
-		break;
-	case 256:
-		r6.field.compress_code = 3;
-		break;
-	case 512:
-		r6.field.compress_code = 4;
-		break;
-	case 1024:
-		r6.field.compress_code = 5;
-		break;
-	case 2048:
-		r6.field.compress_code = 6;
-		break;
-	case 4096:
-		r6.field.compress_code = 7;
-		break;
-	case 0:
-	case 65536:
-		r6.field.compress_code = 0;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id),
-		    r6.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r7.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_QID_ITS(queue->id.phys_id),
-		    r7.val);
-
-	r8.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id),
-		    r8.val);
-
-	r9.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id),
-		    r9.val);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue doesn't use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	r10.field.qid_fid_limit = 512;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
-		    r10.val);
-
-	/* Configure SNs */
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	r11.field.mode = sn_group->mode;
-	r11.field.slot = queue->sn_slot;
-	r11.field.grp  = sn_group->id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
-
-	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
-	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		r0.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val);
-
-		r1.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val);
-
-		r2.field.vqid = queue->id.virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-			    r2.val);
-	}
-
-	r13.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (dlb2_list_empty(&domain->avail_ldb_queues)) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-/**
- * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index de34f5cce..811cf79c6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3582,3 +3582,394 @@ int dlb2_reset_domain(struct dlb2_hw *hw,
 	/* Hardware reset complete. Reset the domain's software state */
 	return dlb2_domain_reset_software_state(hw, domain);
 }
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or the requested queue name is
+ *	    already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
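/*
 * Illustrative caller sketch (not part of this patch): creating an unordered
 * load-balanced queue on behalf of the PF. The argument values are examples
 * only; they simply satisfy the checks in dlb2_verify_create_ldb_queue_args().
 */
static int example_create_unordered_ldb_queue(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_create_ldb_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_sequence_numbers = 0;	/* unordered: no SN group is used */
	args.num_qid_inflights = 4096;	/* must be <= 4096 */
	args.num_atomic_inflights = 0;	/* no AQED entries reserved */
	args.lock_id_comp_level = 0;	/* only checked when atomics are used */
	args.depth_threshold = 256;	/* written to the *_DEPTH_THRSH regs above */

	ret = dlb2_hw_create_ldb_queue(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		return ret;	/* resp.status holds the dlb2_error detail */

	return (int)resp.id;	/* physical queue ID for a PF request */
}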
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 07/25] event/dlb2: add DLB v2.5 support to create ldb port
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (5 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 06/25] event/dlb2: add DLB V2.5 support to create ldb queue Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 08/25] event/dlb2: add DLB v2.5 support to create dir port Timothy McDaniel
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update create ldb port low level code to account for new
register map and hardware access macros.
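
One detail worth calling out: the removed dlb2_ldb_port_configure_cq() below
maps each supported power-of-two CQ depth from 8 to 1024 to a token-depth-select
code equal to log2(depth) - 2 (depths below 8 share the code for 8); any other
depth is rejected with -EFAULT. A minimal equivalent of that ladder, for
illustration only:

	/* 8 (or less) -> 1, 16 -> 2, ..., 1024 -> 8; assumes a supported depth */
	static u32 cq_depth_to_token_depth_select(u32 cq_depth)
	{
		u32 depth = (cq_depth < 8) ? 8 : cq_depth;

		return rte_log2_u32(depth) - 2;
	}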

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 490 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 471 +++++++++++++++++
 2 files changed, 471 insertions(+), 490 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 5a8251ee0..d6ff7f6d9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1217,496 +1217,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_pp2vas r0 = { {0} };
-	union dlb2_sys_ldb_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_ldb_vpp2pp r1 = { {0} };
-		union dlb2_sys_ldb_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_ldb_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_cq_addr_l r0 = { {0} };
-	union dlb2_sys_ldb_cq_addr_u r1 = { {0} };
-	union dlb2_sys_ldb_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_ldb_tkn_depth_sel r4 = { {0} };
-	union dlb2_chp_hist_list_lim r5 = { {0} };
-	union dlb2_chp_hist_list_base r6 = { {0} };
-	union dlb2_lsp_cq_ldb_infl_lim r7 = { {0} };
-	union dlb2_chp_hist_list_push_ptr r8 = { {0} };
-	union dlb2_chp_hist_list_pop_ptr r9 = { {0} };
-	union dlb2_sys_ldb_cq_at r10 = { {0} };
-	union dlb2_sys_ldb_cq_pasid r11 = { {0} };
-	union dlb2_chp_ldb_cq2vas r12 = { {0} };
-	union dlb2_lsp_cq2priov r13 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_ldb_tkn_cnt r14 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r14.field.token_count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    r14.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	r5.field.limit = port->hist_list_entry_limit - 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(port->id.phys_id), r5.val);
-
-	r6.field.base = port->hist_list_entry_base;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_BASE(port->id.phys_id), r6.val);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	r7.field.limit = args->cq_history_list_size;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r7.val);
-
-	r8.field.push_ptr = r6.field.base;
-	r8.field.generation = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    r8.val);
-
-	r9.field.pop_ptr = r6.field.base;
-	r9.field.generation = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(port->id.phys_id), r12.val);
-
-	/* Disable the port's QID mappings */
-	r13.field.v = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r13.val);
-
-	return 0;
-}
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret < 0)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		if (dlb2_list_empty(&domain->avail_ldb_ports[args->cos_id])) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			if (!dlb2_list_empty(&domain->avail_ldb_ports[i]))
-				break;
-		}
-
-		if (i == DLB2_NUM_COS_DOMAINS) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-/**
- * dlb2_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
- *	its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id, i;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->cos_strict) {
-		cos_id = args->cos_id;
-
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[cos_id],
-					  typeof(*port));
-	} else {
-		int idx;
-
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			idx = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[idx],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-
-		cos_id = idx;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (port->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void
 dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 			      u32 domain_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 811cf79c6..31afdc5f9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3973,3 +3973,474 @@ int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH 08/25] event/dlb2: add DLB v2.5 support to create dir port
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (6 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 07/25] event/dlb2: add DLB v2.5 support to create ldb port Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 09/25] event/dlb2: add DLB v2.5 support to create dir queue Timothy McDaniel
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level hardware functions to account for the new
register map and register access macros.
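
For context (an illustrative excerpt drawn from the diff below, not new
code): the old v2.0-only code programmed each register through a
per-register union with named bit-fields, while the combined v2.0/v2.5
code in dlb2_resource_new.c builds the value in a plain u32 using the
field-mask macros. A minimal before/after sketch, using the
DLB2_SYS_DIR_PP2VAS write from this patch:

	/* old: per-register union with named bit-fields */
	union dlb2_sys_dir_pp2vas r0 = { {0} };

	r0.field.vas = domain->id.phys_id;
	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);

	/* new: plain u32 plus DLB2_BITS_SET()/DLB2_BIT_SET() field macros */
	u32 reg = 0;

	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);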

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 426 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 414 +++++++++++++++++
 2 files changed, 414 insertions(+), 426 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d6ff7f6d9..2442327d3 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -66,18 +66,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -1217,25 +1205,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
 static struct dlb2_dir_pq_pair *
 dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 			    u32 id,
@@ -1257,401 +1226,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the queue is already configured, validate
-	 * the queue ID, its domain, and whether the queue is configured.
-	 */
-	if (args->queue_id != -1) {
-		struct dlb2_dir_pq_pair *queue;
-
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->queue_id,
-						    vdev_req,
-						    domain);
-
-		if (queue == NULL || queue->domain_id.phys_id !=
-				domain->id.phys_id ||
-				!queue->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the port's queue is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->queue_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_dir_pp2vas r0 = { {0} };
-	union dlb2_sys_dir_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vpp2pp r1 = { {0} };
-		union dlb2_sys_dir_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_dir_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_dir_cq_addr_l r0 = { {0} };
-	union dlb2_sys_dir_cq_addr_u r1 = { {0} };
-	union dlb2_sys_dir_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_dir_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
-	union dlb2_sys_dir_cq_fmt r9 = { {0} };
-	union dlb2_sys_dir_cq_at r10 = { {0} };
-	union dlb2_sys_dir_cq_pasid r11 = { {0} };
-	union dlb2_chp_dir_cq2vas r12 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_dir_tkn_cnt r13 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r13.field.count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    r13.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.disable_wb_opt = 0;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	r9.field.keep_pf_ppid = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(port->id.phys_id), r12.val);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret < 0)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - Allocate and initialize a DLB directed port
- *	and queue. The port/queue pair have the same ID and name.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->queue_id,
-						   vdev_req,
-						   domain);
-	else
-		port = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					  typeof(*port));
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 				     struct dlb2_hw_domain *domain,
 				     struct dlb2_dir_pq_pair *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 31afdc5f9..1dfbc0c6d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4444,3 +4444,417 @@ int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH 09/25] event/dlb2: add DLB v2.5 support to create dir queue
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (7 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 08/25] event/dlb2: add DLB v2.5 support to create dir port Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 10/25] event/dlb2: add DLB v2.5 support to map qid Timothy McDaniel
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level hardware functions to account for the new
register map and hardware access macros.
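
For context (an illustrative excerpt drawn from the diff below, not new
code), a minimal before/after sketch of the queue depth-threshold write.
It also shows the new hw->ver argument taken by register macros whose
addresses differ between DLB v2.0 and v2.5:

	/* old: v2.0-only union and fixed register offset */
	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };

	r2.field.thresh = args->depth_threshold;
	DLB2_CSR_WR(hw,
		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
		    r2.val);

	/* new: u32 plus field-mask macro, version-aware register macro */
	u32 reg = 0;

	DLB2_BITS_SET(reg, args->depth_threshold,
		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
	DLB2_CSR_WR(hw,
		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
		    reg);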

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2442327d3..d9284812a 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1226,219 +1226,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-				domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 1dfbc0c6d..998515933 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4858,3 +4858,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
-- 
2.23.0



* [dpdk-dev] [PATCH 10/25] event/dlb2: add DLB v2.5 support to map qid
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (8 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 09/25] event/dlb2: add DLB v2.5 support to create dir queue Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 11/25] event/dlb2: add DLB v2.5 support to unmap queue Timothy McDaniel
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level hardware functions to account for the new
register map and hardware access macros.
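
For readers new to the macro scheme, here is a minimal stand-alone
sketch of the two access styles this series converts between. The
EXAMPLE_* names, the bit layout, and the macro bodies are simplified
stand-ins for illustration only; the driver's real definitions live in
the new register header and differ in detail.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* v2.0 style: one union with named bitfields per register */
union example_cq2priov {
	struct {
		u32 prio : 24;
		u32 v    : 8;
	} field;
	u32 val;
};

/* v2.5-capable style: a plain u32 plus generic mask/position macros */
#define EXAMPLE_CQ2PRIOV_V	0xFF000000
#define EXAMPLE_CQ2PRIOV_V_LOC	24

#define EXAMPLE_BITS_SET(x, val, mask) \
	((x) = ((x) & ~(mask)) | (((val) << (mask##_LOC)) & (mask)))

int main(void)
{
	union example_cq2priov r0 = { .val = 0 };
	u32 reg = 0;

	/* old: the compiler resolves the field's position */
	r0.field.v |= 1 << 3;

	/*
	 * new: the mask and position are explicit constants, so a single
	 * code path can serve both register layouts by selecting the
	 * constants for the hardware version at hand.
	 */
	EXAMPLE_BITS_SET(reg, 1 << 3, EXAMPLE_CQ2PRIOV_V);

	printf("union: 0x%08x  macro: 0x%08x\n", r0.val, reg);
	return 0;
}

Both writes produce the same register value; the difference is that the
macro form no longer hard-codes one register layout into the type.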

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 355 ---------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 418 ++++++++++++++++++
 2 files changed, 418 insertions(+), 355 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d9284812a..4fa867c3f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1246,68 +1246,6 @@ dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
 }
 
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	union dlb2_lsp_cq2priov r0;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id));
-
-	r0.field.v |= 1 << slot;
-	r0.field.prio |= (args->priority & 0x7) << slot * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1356,299 +1294,6 @@ dlb2_get_domain_used_ldb_port(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	struct dlb2_ldb_queue *queue;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i, id;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state st;
-
-			if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-				DLB2_HW_ERR(hw,
-					    "[%s():%d] Internal error: port slot tracking failed\n",
-					    __func__, __LINE__);
-				return -EFAULT;
-			}
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
 			       u32 domain_id,
 			       struct dlb2_unmap_qid_args *args,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 998515933..5070428ba 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5059,3 +5059,421 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	return 0;
 }
 
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH 11/25] event/dlb2: add DLB v2.5 support to unmap queue
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (9 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 10/25] event/dlb2: add DLB v2.5 support to map qid Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 12/25] event/dlb2: add DLB v2.5 support to start domain Timothy McDaniel
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level functions to account for the new register map
and hardware access macros.
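
One structural change worth noting in the rewritten code: the
dlb2_verify_*_args() helpers now return the looked-up domain, port, and
queue through out parameters, so callers no longer repeat the lookups or
the "internal error" NULL checks. The sketch below shows only that
pattern; the types and lookup functions are illustrative stand-ins, not
the driver's real API.

#include <errno.h>
#include <stdio.h>

/* Illustrative stand-ins for the driver's real types and lookups. */
struct domain { int configured; };
struct port { int configured; };

static struct domain example_domain = { .configured = 1 };
static struct port example_port = { .configured = 1 };

static struct domain *lookup_domain(int id)
{
	return (id == 0) ? &example_domain : NULL;
}

static struct port *lookup_port(struct domain *d, int id)
{
	(void)d;
	return (id == 0) ? &example_port : NULL;
}

/*
 * Validate the request and hand the resolved objects back through out
 * parameters, so the caller neither repeats the lookups nor needs a
 * second round of NULL checks.
 */
static int verify_args(int domain_id, int port_id,
		       struct domain **out_domain, struct port **out_port)
{
	struct domain *d = lookup_domain(domain_id);
	struct port *p;

	if (d == NULL || !d->configured)
		return -EINVAL;

	p = lookup_port(d, port_id);
	if (p == NULL || !p->configured)
		return -EINVAL;

	*out_domain = d;
	*out_port = p;
	return 0;
}

int main(void)
{
	struct domain *d;
	struct port *p;

	if (verify_args(0, 0, &d, &p) == 0)
		printf("validated: domain %p, port %p\n",
		       (void *)d, (void *)p);

	return 0;
}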

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 331 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 298 ++++++++++++++++
 2 files changed, 298 insertions(+), 331 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 4fa867c3f..02c2836ad 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1226,26 +1226,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1266,317 +1246,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-	}
-
-	return NULL;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		return 0;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-}
-
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret, id;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
 static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 struct dlb2_cmd_response *resp,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 5070428ba..6f35a9118 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5477,3 +5477,301 @@ int dlb2_hw_map_qid(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH 12/25] event/dlb2: add DLB v2.5 support to start domain
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (10 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 11/25] event/dlb2: add DLB v2.5 support to unmap queue Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 13/25] event/dlb2: add DLB v2.5 credit scheme Timothy McDaniel
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level functions to account for the new register map
and hardware access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 123 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 130 ++++++++++++++++++
 2 files changed, 130 insertions(+), 123 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 02c2836ad..6b5c8ba01 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1246,129 +1246,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - Lock the domain configuration
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @arg: User-provided arguments (unused, here for ioctl callback template).
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *arg,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(arg);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 u32 queue_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 6f35a9118..850312a10 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5775,3 +5775,133 @@ int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @arg: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH 13/25] event/dlb2: add DLB v2.5 credit scheme
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (11 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 12/25] event/dlb2: add DLB v2.5 support to start domain Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 14/25] event/dlb2: Add DLB v2.5 support to get queue depth functions Timothy McDaniel
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

DLB v2.5 uses a different credit scheme than DLB v2.0. Specifically,
DLB v2.5 has a single credit pool shared by load-balanced and directed
traffic, whereas DLB v2.0 splits credits into two pools, one for each
traffic type.
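
As a rough illustration of what that means for scheduling-domain setup
(the enum values and field names below are simplified stand-ins for the
driver's dlb2_create_sched_domain_args and hardware-version definitions,
not the real ones):

#include <stdint.h>
#include <stdio.h>

enum hw_version { HW_V2, HW_V2_5 };

struct sched_domain_cfg {
	uint32_t num_ldb_credits;	/* v2.0: load-balanced pool */
	uint32_t num_dir_credits;	/* v2.0: directed pool      */
	uint32_t num_credits;		/* v2.5: one combined pool  */
};

static void fill_credits(struct sched_domain_cfg *cfg,
			 enum hw_version ver,
			 uint32_t ldb_asked, uint32_t dir_asked)
{
	if (ver == HW_V2_5) {
		/* v2.5: a single pool serves both traffic types */
		cfg->num_credits = ldb_asked + dir_asked;
	} else {
		/* v2.0: each traffic type draws from its own pool */
		cfg->num_ldb_credits = ldb_asked;
		cfg->num_dir_credits = dir_asked;
	}
}

int main(void)
{
	struct sched_domain_cfg cfg = {0};

	fill_credits(&cfg, HW_V2_5, 2048, 1024);
	printf("combined credits: %u\n", cfg.num_credits);
	return 0;
}

The dlb2.c changes below follow the same shape: for v2.5 only the
combined credit field is filled in, while v2.0 keeps the separate
load-balanced and directed fields.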

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c | 311 ++++++++++++++++++++++++++------------
 1 file changed, 212 insertions(+), 99 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 769bcb8af..a4a7db42e 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -436,8 +436,13 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	 */
 	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
 	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
-	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_ldb_credits;
+	}
 	evdev_dlb2_default_info.max_event_queues =
 		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
 			RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -451,7 +456,8 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 
 static int
 dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
-			    const struct dlb2_hw_rsrcs *resources_asked)
+			    const struct dlb2_hw_rsrcs *resources_asked,
+			    uint8_t device_version)
 {
 	int ret = 0;
 	struct dlb2_create_sched_domain_args *cfg;
@@ -468,8 +474,10 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	/* DIR ports and queues */
 
 	cfg->num_dir_ports = resources_asked->num_dir_ports;
-
-	cfg->num_dir_credits = resources_asked->num_dir_credits;
+	if (device_version == DLB2_HW_V2_5)
+		cfg->num_credits = resources_asked->num_credits;
+	else
+		cfg->num_dir_credits = resources_asked->num_dir_credits;
 
 	/* LDB queues */
 
@@ -509,8 +517,8 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 		break;
 	}
 
-	cfg->num_ldb_credits =
-		resources_asked->num_ldb_credits;
+	if (device_version == DLB2_HW_V2)
+		cfg->num_ldb_credits = resources_asked->num_ldb_credits;
 
 	cfg->num_atomic_inflights =
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
@@ -519,14 +527,24 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
 		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
 
-	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
-		     cfg->num_ldb_queues,
-		     resources_asked->num_ldb_ports,
-		     cfg->num_dir_ports,
-		     cfg->num_atomic_inflights,
-		     cfg->num_hist_list_entries,
-		     cfg->num_ldb_credits,
-		     cfg->num_dir_credits);
+	if (device_version == DLB2_HW_V2_5) {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_credits);
+	} else {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_ldb_credits,
+			     cfg->num_dir_credits);
+	}
 
 	/* Configure the QM */
 
@@ -606,7 +624,6 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	 */
 	if (dlb2->configured) {
 		dlb2_hw_reset_sched_domain(dev, true);
-
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
 			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
@@ -665,20 +682,26 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	/* 1 dir queue per dir port */
 	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;
 
-	/* Scale down nb_events_limit by 4 for directed credits, since there
-	 * are 4x as many load-balanced credits.
-	 */
-	rsrcs->num_ldb_credits = 0;
-	rsrcs->num_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		rsrcs->num_credits = 0;
+		if (rsrcs->num_ldb_queues || rsrcs->num_dir_ports)
+			rsrcs->num_credits = config->nb_events_limit;
+	} else {
+		/* Scale down nb_events_limit by 4 for directed credits,
+		 * since there are 4x as many load-balanced credits.
+		 */
+		rsrcs->num_ldb_credits = 0;
+		rsrcs->num_dir_credits = 0;
 
-	if (rsrcs->num_ldb_queues)
-		rsrcs->num_ldb_credits = config->nb_events_limit;
-	if (rsrcs->num_dir_ports)
-		rsrcs->num_dir_credits = config->nb_events_limit / 4;
-	if (dlb2->num_dir_credits_override != -1)
-		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+		if (rsrcs->num_ldb_queues)
+			rsrcs->num_ldb_credits = config->nb_events_limit;
+		if (rsrcs->num_dir_ports)
+			rsrcs->num_dir_credits = config->nb_events_limit / 4;
+		if (dlb2->num_dir_credits_override != -1)
+			rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+	}
 
-	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
+	if (dlb2_hw_create_sched_domain(handle, rsrcs, dlb2->version) < 0) {
 		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
 		return -ENODEV;
 	}
@@ -693,10 +716,15 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
 	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
 	dlb2->num_dir_queues = dlb2->num_dir_ports;
-	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
-	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
-	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
-	dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		dlb2->credit_pool = rsrcs->num_credits;
+		dlb2->max_credits = rsrcs->num_credits;
+	} else {
+		dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
+		dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
+		dlb2->dir_credit_pool = rsrcs->num_dir_credits;
+		dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	}
 
 	dlb2->configured = true;
 
@@ -1170,8 +1198,9 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (handle == NULL)
 		return -EINVAL;
@@ -1206,15 +1235,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* If there are no directed ports, the kernel driver will ignore this
-	 * port's directed credit settings. Don't use enqueue_depth if it would
-	 * require more directed credits than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* If there are no directed ports, the kernel driver will
+		 * ignore this port's directed credit settings. Don't use
+		 * enqueue_depth if it would require more directed credits
+		 * than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1249,8 +1281,12 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1298,17 +1334,26 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     qm_port->ldb_credits,
-		     qm_port->dir_credits);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->ldb_credits,
+			     qm_port->dir_credits);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->credits);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -1356,8 +1401,9 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (dlb2 == NULL || handle == NULL)
 		return -EINVAL;
@@ -1386,14 +1432,16 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* Don't use enqueue_depth if it would require more directed credits
-	 * than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* Don't use enqueue_depth if it would require more directed
+		 * credits than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1430,8 +1478,12 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1467,17 +1519,26 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     dir_credit_high_watermark,
-		     ldb_credit_high_watermark);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     dir_credit_high_watermark,
+			     ldb_credit_high_watermark);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     credit_high_watermark);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -2297,6 +2358,24 @@ dlb2_check_enqueue_hw_dir_credits(struct dlb2_port *qm_port)
 	return 0;
 }
 
+static inline int
+dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port)
+{
+	if (unlikely(qm_port->cached_credits == 0)) {
+		qm_port->cached_credits =
+			dlb2_port_credits_get(qm_port,
+					      DLB2_COMBINED_POOL);
+		if (unlikely(qm_port->cached_credits == 0)) {
+			DLB2_INC_STAT(
+			qm_port->ev_port->stats.traffic.tx_nospc_hw_credits, 1);
+			DLB2_LOG_DBG("credits exhausted\n");
+			return 1; /* credits exhausted */
+		}
+	}
+
+	return 0;
+}
+
 static __rte_always_inline void
 dlb2_pp_write(struct dlb2_enqueue_qe *qe4,
 	      struct process_local_port_data *port_data)
@@ -2565,12 +2644,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	if (!qm_queue->is_directed) {
 		/* Load balanced destination queue */
 
-		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_ldb_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_ldb_credits;
-
 		switch (ev->sched_type) {
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -2602,12 +2688,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	} else {
 		/* Directed destination queue */
 
-		if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_dir_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_dir_credits;
-
 		DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_DIRECTED\n");
 
 		*sched_type = DLB2_SCHED_DIRECTED;
@@ -2891,20 +2984,40 @@ dlb2_port_credits_inc(struct dlb2_port *qm_port, int num)
 
 	/* increment port credits, and return to pool if exceeds threshold */
 	if (!qm_port->is_directed) {
-		qm_port->cached_ldb_credits += num;
-		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_ldb_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_ldb_credits += num;
+			if (qm_port->cached_ldb_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_LDB_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_ldb_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	} else {
-		qm_port->cached_dir_credits += num;
-		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_dir_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_dir_credits += num;
+			if (qm_port->cached_dir_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_DIR_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_dir_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	}
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 14/25] event/dlb2: Add DLB v2.5 support to get queue depth functions
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (12 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 13/25] event/dlb2: add DLB v2.5 credit scheme Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 15/25] event/dlb2: add DLB v2.5 finish map/unmap interfaces Timothy McDaniel
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the get queue depth functions for DLB v2.5, accounting for the
combined register map and new hardware access macros.
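
The depth calculations keep the same shape as the removed v2.0 code: a
directed queue's depth is its enqueue count, while a load-balanced
queue's depth is the sum of its AQED-active, atomic-active, and enqueue
counts. A rough sketch follows, with the CSR reads abstracted behind a
hypothetical counters struct (not part of the patch):

#include <stdint.h>

/* Hypothetical stand-in for the per-queue counters read from CSRs. */
struct ldb_queue_counts {
	uint32_t aqed_active;	/* DLB2_LSP_QID_AQED_ACTIVE_CNT */
	uint32_t atm_active;	/* DLB2_LSP_QID_ATM_ACTIVE */
	uint32_t enqueue_cnt;	/* DLB2_LSP_QID_LDB_ENQUEUE_CNT */
};

/* Load-balanced queue depth is the sum of the three counters. */
static uint32_t ldb_queue_depth(const struct ldb_queue_counts *c)
{
	return c->aqed_active + c->atm_active + c->enqueue_cnt;
}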

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  29 ----
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 ++++++++++++++++++
 2 files changed, 135 insertions(+), 29 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 6b5c8ba01..1066b8834 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -66,17 +66,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -109,24 +98,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 850312a10..77a946953 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5905,3 +5905,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 15/25] event/dlb2: add DLB v2.5 finish map/unmap interfaces
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (13 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 14/25] event/dlb2: Add DLB v2.5 support to get queue depth functions Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 16/25] event/dlb2: add DLB v2.5 sparse cq mode Timothy McDaniel
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the low-level hardware functions that finish map/unmap
operations, accounting for the new combined register file and hardware
access macros.
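
For context, the two helpers moved here are meant to be driven by the
background thread that completes deferred map/unmap work; each returns
the number of procedures still outstanding. A hypothetical polling loop
(not part of the patch, and assuming the dlb2_resource.h declarations
are in scope) might look like:

static void drain_map_unmap_work(struct dlb2_hw *hw)
{
	unsigned int remaining;

	do {
		remaining = dlb2_finish_unmap_qid_procedures(hw);
		remaining += dlb2_finish_map_qid_procedures(hw);
		/* A real worker would sleep or reschedule rather than spin. */
	} while (remaining != 0);
}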

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1043 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    |   50 +
 2 files changed, 50 insertions(+), 1043 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1066b8834..d66442c19 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -66,1049 +66,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
-			if (queue->id.virt_id == id)
-				return queue;
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
-		if (queue->id.virt_id == id)
-			return queue;
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration)
-		if (domain->id.virt_id == id)
-			return domain;
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 0;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 1;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_lsp_cq2qid0 r1;
-	union dlb2_atm_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix_00 r3;
-	union dlb2_lsp_qid2cqidix2_00 r4;
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id));
-
-	r0.field.v |= 1 << i;
-	r0.field.prio |= (priority & 0x7) << i * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(p->id.phys_id));
-	else
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		r1.field.qid_p0 = q->id.phys_id;
-	if (i == 1 || i == 5)
-		r1.field.qid_p1 = q->id.phys_id;
-	if (i == 2 || i == 6)
-		r1.field.qid_p2 = q->id.phys_id;
-	if (i == 3 || i == 7)
-		r1.field.qid_p3 = q->id.phys_id;
-
-	if (i < 4)
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID0(p->id.phys_id), r1.val);
-	else
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID1(p->id.phys_id), r1.val);
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r4.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		r2.field.cq_p0 |= 1 << i;
-		r3.field.cq_p0 |= 1 << i;
-		r4.field.cq_p0 |= 1 << i;
-		break;
-
-	case 1:
-		r2.field.cq_p1 |= 1 << i;
-		r3.field.cq_p1 |= 1 << i;
-		r4.field.cq_p1 |= 1 << i;
-		break;
-
-	case 2:
-		r2.field.cq_p2 |= 1 << i;
-		r3.field.cq_p2 |= 1 << i;
-		r4.field.cq_p2 |= 1 << i;
-		break;
-
-	case 3:
-		r2.field.cq_p3 |= 1 << i;
-		r3.field.cq_p3 |= 1 << i;
-		r4.field.cq_p3 |= 1 << i;
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r3.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(q->id.phys_id, p->id.phys_id / 4),
-		    r4.val);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r1;
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	/* Set the atomic scheduling haswork bit */
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.rlist_haswork_v = r0.field.count > 0;
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.nalb_haswork_v = (r1.field.count > 0);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.rlist_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.nalb_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_ldb_infl_lim r0 = { {0} };
-
-	r0.field.limit = queue->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_lsp_qid_ldb_infl_cnt r0;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules to cause the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	union dlb2_lsp_qid_ldb_infl_cnt r0 = { {0} };
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		union dlb2_lsp_qid_ldb_infl_cnt r0;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count)
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_atm_qid2cqidix_00 r1;
-	union dlb2_lsp_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix2_00 r3;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port_id));
-
-	r0.field.v &= ~(1 << i);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port_id), r0.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		r1.field.cq_p0 &= ~(1 << i);
-		r2.field.cq_p0 &= ~(1 << i);
-		r3.field.cq_p0 &= ~(1 << i);
-		break;
-
-	case 1:
-		r1.field.cq_p1 &= ~(1 << i);
-		r2.field.cq_p1 &= ~(1 << i);
-		r3.field.cq_p1 &= ~(1 << i);
-		break;
-
-	case 2:
-		r1.field.cq_p2 &= ~(1 << i);
-		r2.field.cq_p2 &= ~(1 << i);
-		r3.field.cq_p2 &= ~(1 << i);
-		break;
-
-	case 3:
-		r1.field.cq_p3 &= ~(1 << i);
-		r2.field.cq_p3 &= ~(1 << i);
-		r3.field.cq_p3 &= ~(1 << i);
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4),
-		    r1.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4),
-		    r3.val);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it wasn't manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-	if (r0.field.count > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 77a946953..7c71fa791 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6040,3 +6040,53 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 16/25] event/dlb2: add DLB v2.5 sparse cq mode
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (14 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 15/25] event/dlb2: add DLB v2.5 finish map/unmap interfaces Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 17/25] event/dlb2: add DLB v2.5 support to sequence number management Timothy McDaniel
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the sparse CQ mode functions for DLB v2.5, accounting for the new
combined register map and hardware access macros.
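
Both enables are simple read-modify-writes of DLB2_CHP_CFG_CHP_CSR_CTRL
and, per their kernel-doc, must run before any scheduling domain is
configured. A hypothetical probe-time helper (not part of the patch, and
assuming the driver headers are in scope) would simply be:

static void enable_sparse_cq_modes(struct dlb2_hw *hw)
{
	/* Must be called before any scheduling domain is configured. */
	dlb2_hw_enable_sparse_ldb_cq_mode(hw);
	dlb2_hw_enable_sparse_dir_cq_mode(hw);
}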

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d66442c19..1759cee6b 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -33,28 +33,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 7c71fa791..f147937c0 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6090,3 +6090,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 17/25] event/dlb2: add DLB v2.5 support to sequence number management
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (15 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 16/25] event/dlb2: add DLB v2.5 sparse cq mode Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 18/25] event/dlb2: consolidate dlb resource header files into one file Timothy McDaniel
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Update the sequence number management functions for DLB v2.5,
accounting for the new combined register map and hardware access macros.
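
From the caller's perspective the interface now takes u32 arguments (see
the dlb2_resource.h hunk). A minimal usage sketch, with the return-code
behavior taken from the implementation added below (the wrapper name is
illustrative only):

	static int configure_sn_group0(struct dlb2_hw *hw)
	{
		/*
		 * Valid SNs-per-queue values are 64, 128, 256, 512 and 1024.
		 * -EINVAL is returned for a bad group_id or value, and -EPERM
		 * once an ordered load-balanced queue is already using the
		 * group.
		 */
		return dlb2_set_group_sequence_numbers(hw, 0, 256);
	}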

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |   1 +
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   4 +-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 105 ++++++++++++++++++
 3 files changed, 108 insertions(+), 2 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1759cee6b..bd1404f33 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -242,3 +242,4 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 2e13193bb..00a0b6b57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -792,8 +792,8 @@ int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
  * ordered queue is configured.
  */
 int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val);
+				    u32 group_id,
+				    u32 val);
 
 /**
  * dlb2_reset_domain() - reset a scheduling domain
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index f147937c0..9e4e49583 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6129,3 +6129,108 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
 }
 
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's number of in-use
+ * slots.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 18/25] event/dlb2: consolidate dlb resource header files into one file
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (16 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 17/25] event/dlb2: add DLB v2.5 support to sequence number management Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 19/25] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

A temporary version of dlb2_resource.h (dlb2_resource_new.h) was used
by the previous commits in this patch series. Merge the two files now
that DLB v2.5 support has been fully added to dlb2_resource.c.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |  1 -
 drivers/event/dlb2/pf/base/dlb2_resource.h    | 36 +++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  2 +-
 .../event/dlb2/pf/base/dlb2_resource_new.h    | 73 -------------------
 drivers/event/dlb2/pf/dlb2_main.c             |  2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |  2 +-
 6 files changed, 39 insertions(+), 77 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index 747f680b9..1bdb201f2 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -18,7 +18,6 @@
 #include "../dlb2_main.h"
 
 /* TEMPORARY inclusion of both headers for merge */
-#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_log.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 00a0b6b57..684049cd6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -8,6 +8,42 @@
 #include "dlb2_user.h"
 #include "dlb2_osdep_types.h"
 
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 9e4e49583..8d6c00f31 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -12,7 +12,7 @@
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
 #include "dlb2_regs_new.h"
-#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+#include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
 #include "../../dlb2_inline_fns.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
deleted file mode 100644
index 51f31543c..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_RESOURCE_NEW_H
-#define __DLB2_RESOURCE_NEW_H
-
-#include "dlb2_user.h"
-#include "dlb2_osdep_types.h"
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
-#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 5c0640b3c..bac07f097 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -17,7 +17,7 @@
 
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 9b40e5eb3..4214ed85a 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -39,7 +39,7 @@
 #include "dlb2_main.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 19/25] event/dlb2: delete old dlb2_resource.c file
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (17 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 18/25] event/dlb2: consolidate dlb resource header files into one file Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 20/25] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, so delete
the temporary "old" file (dlb2_resource.c) and stop building it. The
new file (dlb2_resource_new.c) will be renamed to dlb2_resource.c in
the next commit.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build             |   1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c | 245 ---------------------
 2 files changed, 246 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index bded07e06..d8cfd377f 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -13,7 +13,6 @@ sources = files('dlb2.c',
 		'dlb2_xstats.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
-		'pf/base/dlb2_resource.c',
 		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
deleted file mode 100644
index bd1404f33..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ /dev/null
@@ -1,245 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
-
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 20/25] event/dlb2: move dlb_resource_new.c to dlb_resource.c
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (18 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 19/25] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 21/25] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, and the
original file (dlb2_resource.c) was removed in the previous commit.
Rename dlb2_resource_new.c to dlb2_resource.c and update the meson
build file so that the renamed file is built.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build                                  | 2 +-
 .../event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename drivers/event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} (100%)

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index d8cfd377f..f22638b8e 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -13,7 +13,7 @@ sources = files('dlb2.c',
 		'dlb2_xstats.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
-		'pf/base/dlb2_resource_new.c',
+		'pf/base/dlb2_resource.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource_new.c
rename to drivers/event/dlb2/pf/base/dlb2_resource.c
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 21/25] event/dlb2: remove temporary file, dlb_hw_types.h
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (19 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 20/25] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 22/25] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

As support for DLB v2.5 was added, modifications were made to
dlb2_hw_types_new.h, but the old file had to be preserved during the
port in order to meet the requirement that each individual patch in
the series compile cleanly. Now that the DLB v2.5 support is
completely integrated, it is safe to remove the old (original) file,
as well as the DLB2_USE_NEW_HEADERS define that selected which
version of the header was included in certain source files. The next
commit will rename dlb2_hw_types_new.h to dlb2_hw_types.h.
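
For reference, this is the guard being removed from dlb2_main.h (copied
from the hunk below); after this patch the header is included
unconditionally:

	#ifdef DLB2_USE_NEW_HEADERS
	#include "base/dlb2_hw_types_new.h"
	#else
	#include "base/dlb2_hw_types.h"
	#endif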

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h | 341 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |   2 -
 drivers/event/dlb2/pf/dlb2_main.c          |   2 -
 drivers/event/dlb2/pf/dlb2_main.h          |   4 -
 drivers/event/dlb2/pf/dlb2_pf.c            |   2 -
 5 files changed, 351 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
deleted file mode 100644
index 11e518982..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ /dev/null
@@ -1,341 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_HW_TYPES_H
-#define __DLB2_HW_TYPES_H
-
-#include "../../dlb2_priv.h"
-#include "dlb2_user.h"
-
-#include "dlb2_osdep_list.h"
-#include "dlb2_osdep_types.h"
-
-#define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_NUM_ARB_WEIGHTS			8
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_WEIGHT				255
-#define DLB2_NUM_COS_DOMAINS			4
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
-#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-
-#define DLB2_FUNC_BAR				0
-#define DLB2_CSR_BAR				2
-
-#ifdef FPGA
-#define DLB2_HZ					2000000
-#else
-#define DLB2_HZ					800000000
-#endif
-
-#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
-#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
-
-#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
-#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
-
-#define DLB2_ALARM_HW_SOURCE_SYS 0
-#define DLB2_ALARM_HW_SOURCE_DLB 1
-
-#define DLB2_ALARM_HW_UNIT_CHP 4
-
-#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
-#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
-#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
-#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
-#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
-
-/*
- * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
- * the PF driver.
- */
-#define DLB2_DRV_LDB_PP_BASE   0x2300000
-#define DLB2_DRV_LDB_PP_STRIDE 0x1000
-#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
-				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_DRV_DIR_PP_BASE   0x2200000
-#define DLB2_DRV_DIR_PP_STRIDE 0x1000
-#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
-				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
-#define DLB2_LDB_PP_BASE       0x2100000
-#define DLB2_LDB_PP_STRIDE     0x1000
-#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
-				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
-#define DLB2_DIR_PP_BASE       0x2000000
-#define DLB2_DIR_PP_STRIDE     0x1000
-#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * \
-				DLB2_MAX_NUM_DIR_PORTS_V2_5)
-#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
-
-struct dlb2_resource_id {
-	u32 phys_id;
-	u32 virt_id;
-	u8 vdev_owned;
-	u8 vdev_id;
-};
-
-struct dlb2_freelist {
-	u32 base;
-	u32 bound;
-	u32 offset;
-};
-
-static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
-{
-	return list->bound - list->base - list->offset;
-}
-
-struct dlb2_hcw {
-	u64 data;
-	/* Word 3 */
-	u16 opaque;
-	u8 qid;
-	u8 sched_type:2;
-	u8 priority:3;
-	u8 msg_type:3;
-	/* Word 4 */
-	u16 lock_id;
-	u8 ts_flag:1;
-	u8 rsvd1:2;
-	u8 no_dec:1;
-	u8 cmp_id:4;
-	u8 cq_token:1;
-	u8 qe_comp:1;
-	u8 qe_frag:1;
-	u8 qe_valid:1;
-	u8 int_arm:1;
-	u8 error:1;
-	u8 rsvd:2;
-};
-
-struct dlb2_ldb_queue {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 num_qid_inflights;
-	u32 aqed_limit;
-	u32 sn_group; /* sn == sequence number */
-	u32 sn_slot;
-	u32 num_mappings;
-	u8 sn_cfg_valid;
-	u8 num_pending_additions;
-	u8 owned;
-	u8 configured;
-};
-
-/*
- * Directed ports and queues are paired by nature, so the driver tracks them
- * with a single data structure.
- */
-struct dlb2_dir_pq_pair {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 queue_configured;
-	u8 port_configured;
-	u8 owned;
-	u8 enabled;
-};
-
-enum dlb2_qid_map_state {
-	/* The slot doesn't contain a valid queue mapping */
-	DLB2_QUEUE_UNMAPPED,
-	/* The slot contains a valid queue mapping */
-	DLB2_QUEUE_MAPPED,
-	/* The driver is mapping a queue into this slot */
-	DLB2_QUEUE_MAP_IN_PROG,
-	/* The driver is unmapping a queue from this slot */
-	DLB2_QUEUE_UNMAP_IN_PROG,
-	/*
-	 * The driver is unmapping a queue from this slot, and once complete
-	 * will replace it with another mapping.
-	 */
-	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
-};
-
-struct dlb2_ldb_port_qid_map {
-	enum dlb2_qid_map_state state;
-	u16 qid;
-	u16 pending_qid;
-	u8 priority;
-	u8 pending_priority;
-};
-
-struct dlb2_ldb_port {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	/* The qid_map represents the hardware QID mapping state. */
-	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_limit;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 num_pending_removals;
-	u8 num_mappings;
-	u8 owned;
-	u8 enabled;
-	u8 configured;
-};
-
-struct dlb2_sn_group {
-	u32 mode;
-	u32 sequence_numbers_per_queue;
-	u32 slot_use_bitmap;
-	u32 id;
-};
-
-static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
-{
-	const u32 mask[] = {
-		0x0000ffff,  /* 64 SNs per queue */
-		0x000000ff,  /* 128 SNs per queue */
-		0x0000000f,  /* 256 SNs per queue */
-		0x00000003,  /* 512 SNs per queue */
-		0x00000001}; /* 1024 SNs per queue */
-
-	return group->slot_use_bitmap == mask[group->mode];
-}
-
-static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
-{
-	const u32 bound[] = {16, 8, 4, 2, 1};
-	u32 i;
-
-	for (i = 0; i < bound[group->mode]; i++) {
-		if (!(group->slot_use_bitmap & (1 << i))) {
-			group->slot_use_bitmap |= 1 << i;
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static inline void
-dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
-{
-	group->slot_use_bitmap &= ~(1 << slot);
-}
-
-static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
-{
-	int i, cnt = 0;
-
-	for (i = 0; i < 32; i++)
-		cnt += !!(group->slot_use_bitmap & (1 << i));
-
-	return cnt;
-}
-
-struct dlb2_hw_domain {
-	struct dlb2_function_resources *parent_func;
-	struct dlb2_list_entry func_list;
-	struct dlb2_list_head used_ldb_queues;
-	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head used_dir_pq_pairs;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	u32 total_hist_list_entries;
-	u32 avail_hist_list_entries;
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_offset;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u32 num_avail_aqed_entries;
-	u32 num_used_aqed_entries;
-	struct dlb2_resource_id id;
-	int num_pending_removals;
-	int num_pending_additions;
-	u8 configured;
-	u8 started;
-};
-
-struct dlb2_bitmap;
-
-struct dlb2_function_resources {
-	struct dlb2_list_head avail_domains;
-	struct dlb2_list_head used_domains;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	struct dlb2_bitmap *avail_hist_list_entries;
-	u32 num_avail_domains;
-	u32 num_avail_ldb_queues;
-	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	u32 num_avail_dir_pq_pairs;
-	u32 num_avail_qed_entries;
-	u32 num_avail_dqed_entries;
-	u32 num_avail_aqed_entries;
-	u8 locked; /* (VDEV only) */
-};
-
-/*
- * After initialization, each resource in dlb2_hw_resources is located in one
- * of the following lists:
- * -- The PF's available resources list. These are unconfigured resources owned
- *	by the PF and not allocated to a dlb2 scheduling domain.
- * -- A VDEV's available resources list. These are VDEV-owned unconfigured
- *	resources not allocated to a dlb2 scheduling domain.
- * -- A domain's available resources list. These are domain-owned unconfigured
- *	resources.
- * -- A domain's used resources list. These are domain-owned configured
- *	resources.
- *
- * A resource moves to a new list when a VDEV or domain is created or destroyed,
- * or when the resource is configured.
- */
-struct dlb2_hw_resources {
-	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
-	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
-	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
-};
-
-struct dlb2_mbox {
-	u32 *mbox;
-	u32 *isr_in_progress;
-};
-
-struct dlb2_sw_mbox {
-	struct dlb2_mbox vdev_to_pf;
-	struct dlb2_mbox pf_to_vdev;
-	void (*pf_to_vdev_inject)(void *arg);
-	void *pf_to_vdev_inject_arg;
-};
-
-struct dlb2_hw {
-	uint8_t ver;
-
-	/* BAR 0 address */
-	void *csr_kva;
-	unsigned long csr_phys_addr;
-	/* BAR 2 address */
-	void *func_kva;
-	unsigned long func_phys_addr;
-
-	/* Resource tracking */
-	struct dlb2_hw_resources rsrcs;
-	struct dlb2_function_resources pf;
-	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
-	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
-	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
-
-	/* Virtualization */
-	int virt_mode;
-	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
-	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
-};
-
-#endif /* __DLB2_HW_TYPES_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 8d6c00f31..4b3d1202c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,8 +2,6 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types_new.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index bac07f097..3ab0c3ef5 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,8 +13,6 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_resource.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 01a24e8a4..2dfca58e3 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -15,11 +15,7 @@
 #define PAGE_SIZE (sysconf(_SC_PAGESIZE))
 #endif
 
-#ifdef DLB2_USE_NEW_HEADERS
 #include "base/dlb2_hw_types_new.h"
-#else
-#include "base/dlb2_hw_types.h"
-#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 4214ed85a..c4c776e83 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -31,8 +31,6 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 22/25] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (20 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 21/25] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 23/25] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

The original and a "new" file were maintained during the
early portions of the patch series in order to ensure that
all individual patches compiled cleanly. It is now safe to
rename the new file, and use it unconditionally in all DLB
source files.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/{dlb2_hw_types_new.h => dlb2_hw_types.h} | 0
 drivers/event/dlb2/pf/base/dlb2_resource.c                      | 2 +-
 drivers/event/dlb2/pf/dlb2_main.c                               | 2 +-
 drivers/event/dlb2/pf/dlb2_main.h                               | 2 +-
 drivers/event/dlb2/pf/dlb2_pf.c                                 | 2 +-
 5 files changed, 4 insertions(+), 4 deletions(-)
 rename drivers/event/dlb2/pf/base/{dlb2_hw_types_new.h => dlb2_hw_types.h} (100%)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
rename to drivers/event/dlb2/pf/base/dlb2_hw_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 4b3d1202c..e5fa0f047 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -4,7 +4,7 @@
 
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types_new.h"
+#include "dlb2_hw_types.h"
 #include "dlb2_mbox.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 3ab0c3ef5..1f6ccf8e4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -14,7 +14,7 @@
 #include <rte_errno.h>
 
 #include "base/dlb2_regs_new.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 2dfca58e3..f3bee71fb 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -15,7 +15,7 @@
 #define PAGE_SIZE (sysconf(_SC_PAGESIZE))
 #endif
 
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index c4c776e83..a937d0f9c 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -35,7 +35,7 @@
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_osdep.h"
 #include "base/dlb2_resource.h"
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 23/25] event/dlb2: delete old register map file, dlb2_regs.h
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (21 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 22/25] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 24/25] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

All dependencies on the old register map have been removed, so
it can now be deleted.  The next commit will rename dlb2_regs_new.h
to dlb2_regs.h.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_regs.h | 2527 ------------------------
 1 file changed, 2527 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
deleted file mode 100644
index 43ecad4f8..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_regs.h
+++ /dev/null
@@ -1,2527 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_REGS_H
-#define __DLB2_REGS_H
-
-#include "dlb2_osdep_types.h"
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
-	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
-	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
-	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_flr_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
-	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
-union dlb2_func_pf_vf2pf_isr_pend {
-	struct {
-		u32 isr_pend : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
-	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
-	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
-	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-union dlb2_func_pf_vf_reset_in_progress {
-	struct {
-		u32 vf0_reset_in_progress : 1;
-		u32 vf1_reset_in_progress : 1;
-		u32 vf2_reset_in_progress : 1;
-		u32 vf3_reset_in_progress : 1;
-		u32 vf4_reset_in_progress : 1;
-		u32 vf5_reset_in_progress : 1;
-		u32 vf6_reset_in_progress : 1;
-		u32 vf7_reset_in_progress : 1;
-		u32 vf8_reset_in_progress : 1;
-		u32 vf9_reset_in_progress : 1;
-		u32 vf10_reset_in_progress : 1;
-		u32 vf11_reset_in_progress : 1;
-		u32 vf12_reset_in_progress : 1;
-		u32 vf13_reset_in_progress : 1;
-		u32 vf14_reset_in_progress : 1;
-		u32 vf15_reset_in_progress : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
-	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
-union dlb2_msix_mem_vector_ctrl {
-	struct {
-		u32 vec_mask : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
-	(0x20 + (x) * 0x4)
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-union dlb2_iosf_func_vf_bar_dsbl {
-	struct {
-		u32 func_vf_bar_dis : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_VAS 0x1000011c
-#define DLB2_SYS_TOTAL_VAS_RST 0x20
-union dlb2_sys_total_vas {
-	struct {
-		u32 total_vas : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
-#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
-union dlb2_sys_total_dir_ports {
-	struct {
-		u32 total_dir_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
-#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
-union dlb2_sys_total_ldb_ports {
-	struct {
-		u32 total_ldb_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
-#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
-union dlb2_sys_total_dir_qid {
-	struct {
-		u32 total_dir_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
-#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
-union dlb2_sys_total_ldb_qid {
-	struct {
-		u32 total_ldb_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
-#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-union dlb2_sys_total_dir_crds {
-	struct {
-		u32 total_dir_credits : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
-#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-union dlb2_sys_total_ldb_crds {
-	struct {
-		u32 total_ldb_credits : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
-#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-union dlb2_sys_alarm_pf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 meas : 1;
-		u32 debug : 7;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 cq_int_rearm : 1;
-		u32 dsi_error : 1;
-		u32 rsvd0 : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
-#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-union dlb2_sys_alarm_pf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
-#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-union dlb2_sys_alarm_pf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 rsvd0 : 3;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VPP_V(x) \
-	(0x10000f00 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-union dlb2_sys_vf_ldb_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VPP2PP(x) \
-	(0x10000f04 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-union dlb2_sys_vf_ldb_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VPP_V(x) \
-	(0x10000f08 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-union dlb2_sys_vf_dir_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VPP2PP(x) \
-	(0x10000f0c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-union dlb2_sys_vf_dir_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VQID_V(x) \
-	(0x10000f10 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-union dlb2_sys_vf_ldb_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VQID2QID(x) \
-	(0x10000f14 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-union dlb2_sys_vf_ldb_vqid2qid {
-	struct {
-		u32 qid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID2VQID(x) \
-	(0x10000f18 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID2VQID_RST 0x0
-union dlb2_sys_ldb_qid2vqid {
-	struct {
-		u32 vqid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VQID_V(x) \
-	(0x10000f1c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-union dlb2_sys_vf_dir_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VQID2QID(x) \
-	(0x10000f20 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-union dlb2_sys_vf_dir_vqid2qid {
-	struct {
-		u32 qid : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_VASQID_V(x) \
-	(0x10000f24 + (x) * 0x1000)
-#define DLB2_SYS_LDB_VASQID_V_RST 0x0
-union dlb2_sys_ldb_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_VASQID_V(x) \
-	(0x10000f28 + (x) * 0x1000)
-#define DLB2_SYS_DIR_VASQID_V_RST 0x0
-union dlb2_sys_dir_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND2(x) \
-	(0x10000f48 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-union dlb2_sys_alarm_vf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 debug : 8;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 isz : 1;
-		u32 dsi_error : 1;
-		u32 dlbrsvd : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND1(x) \
-	(0x10000f44 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-union dlb2_sys_alarm_vf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND0(x) \
-	(0x10000f40 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-union dlb2_sys_alarm_vf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 vf_synd0_parity : 1;
-		u32 vf_synd1_parity : 1;
-		u32 vf_synd2_parity : 1;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_CFG_V(x) \
-	(0x10000f58 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-union dlb2_sys_ldb_qid_cfg_v {
-	struct {
-		u32 sn_cfg_v : 1;
-		u32 fid_cfg_v : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_ITS(x) \
-	(0x10000f54 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_ITS_RST 0x0
-union dlb2_sys_ldb_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_V(x) \
-	(0x10000f50 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_V_RST 0x0
-union dlb2_sys_ldb_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_QID_ITS(x) \
-	(0x10000f64 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_ITS_RST 0x0
-union dlb2_sys_dir_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_QID_V(x) \
-	(0x10000f60 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_V_RST 0x0
-union dlb2_sys_dir_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
-	(0x10000fa8 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-union dlb2_sys_ldb_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_ldb_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_PASID(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-union dlb2_sys_ldb_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AT(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AT_RST 0x0
-union dlb2_sys_ldb_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ISR(x) \
-	(0x10000f98 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
-/* CQ Interrupt Modes */
-#define DLB2_CQ_ISR_MODE_DIS  0
-#define DLB2_CQ_ISR_MODE_MSI  1
-#define DLB2_CQ_ISR_MODE_MSIX 2
-#define DLB2_CQ_ISR_MODE_ADI  3
-union dlb2_sys_ldb_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
-	(0x10000f94 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_ldb_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP_V(x) \
-	(0x10000f90 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP_V_RST 0x0
-union dlb2_sys_ldb_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP2VDEV(x) \
-	(0x10000f8c + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-union dlb2_sys_ldb_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP2VAS(x) \
-	(0x10000f88 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VAS_RST 0x0
-union dlb2_sys_ldb_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
-	(0x10000f84 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-union dlb2_sys_ldb_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
-	(0x10000f80 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-union dlb2_sys_ldb_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_FMT(x) \
-	(0x10000fec + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-union dlb2_sys_dir_cq_fmt {
-	struct {
-		u32 keep_pf_ppid : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
-	(0x10000fe8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-union dlb2_sys_dir_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_dir_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_PASID(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-union dlb2_sys_dir_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AT(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AT_RST 0x0
-union dlb2_sys_dir_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ISR(x) \
-	(0x10000fd8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-union dlb2_sys_dir_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
-	(0x10000fd4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_dir_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP_V(x) \
-	(0x10000fd0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP_V_RST 0x0
-union dlb2_sys_dir_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP2VDEV(x) \
-	(0x10000fcc + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-union dlb2_sys_dir_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP2VAS(x) \
-	(0x10000fc8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VAS_RST 0x0
-union dlb2_sys_dir_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
-	(0x10000fc4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-union dlb2_sys_dir_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
-	(0x10000fc0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-union dlb2_sys_dir_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-union dlb2_sys_ingress_alarm_enbl {
-	struct {
-		u32 illegal_hcw : 1;
-		u32 illegal_pp : 1;
-		u32 illegal_pasid : 1;
-		u32 illegal_qid : 1;
-		u32 disabled_qid : 1;
-		u32 illegal_ldb_qid_cfg : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_ACK 0x10000400
-#define DLB2_SYS_MSIX_ACK_RST 0x0
-union dlb2_sys_msix_ack {
-	struct {
-		u32 msix_0_ack : 1;
-		u32 msix_1_ack : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
-#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-union dlb2_sys_msix_passthru {
-	struct {
-		u32 msix_0_passthru : 1;
-		u32 msix_1_passthru : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_MODE 0x10000408
-#define DLB2_SYS_MSIX_MODE_RST 0x0
-/* MSI-X Modes */
-#define DLB2_MSIX_MODE_PACKED     0
-#define DLB2_MSIX_MODE_COMPRESSED 1
-union dlb2_sys_msix_mode {
-	struct {
-		u32 mode : 1;
-		u32 poll_mode : 1;
-		u32 poll_mask : 1;
-		u32 poll_lock : 1;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-union dlb2_sys_dir_cq_opt_clr {
-	struct {
-		u32 cq : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
-#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-union dlb2_sys_alarm_hw_synd {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 alarm : 1;
-		u32 cwd : 1;
-		u32 vf_pf_mb : 1;
-		u32 rsvd0 : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
-	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
-union dlb2_aqed_pipe_qid_fid_lim {
-	struct {
-		u32 qid_fid_limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
-	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
-union dlb2_aqed_pipe_qid_hid_width {
-	struct {
-		u32 compress_code : 3;
-		u32 rsvd0 : 29;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_QID2CQIDIX_00(x) \
-	(0x30080000 + (x) * 0x1000)
-#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
-#define DLB2_ATM_QID2CQIDIX(x, y) \
-	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_ATM_QID2CQIDIX_NUM 16
-union dlb2_atm_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_rdy_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_sched_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_dir_vas_crd {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_ldb_vas_crd {
-	struct {
-		u32 count : 15;
-		u32 rsvd0 : 17;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_RST 0x0
-union dlb2_chp_ord_qid_sn {
-	struct {
-		u32 sn : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN_MAP(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-union dlb2_chp_ord_qid_sn_map {
-	struct {
-		u32 mode : 3;
-		u32 slot : 4;
-		u32 rsvz0 : 1;
-		u32 grp : 1;
-		u32 rsvz1 : 1;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_SN_CHK_ENBL(x) \
-	(0x40200000 + (x) * 0x1000)
-#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-union dlb2_chp_sn_chk_enbl {
-	struct {
-		u32 en : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_DEPTH(x) \
-	(0x40280000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-union dlb2_chp_dir_cq_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_dir_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-union dlb2_chp_dir_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40480000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_dir_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_dir_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-union dlb2_chp_dir_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WPTR(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-union dlb2_chp_dir_cq_wptr {
-	struct {
-		u32 write_pointer : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ2VAS(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-union dlb2_chp_dir_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_BASE(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-union dlb2_chp_hist_list_base {
-	struct {
-		u32 base : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_LIM(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-union dlb2_chp_hist_list_lim {
-	struct {
-		u32 limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-union dlb2_chp_hist_list_pop_ptr {
-	struct {
-		u32 pop_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-union dlb2_chp_hist_list_push_ptr {
-	struct {
-		u32 push_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_DEPTH(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-union dlb2_chp_ldb_cq_depth {
-	struct {
-		u32 depth : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40980000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_ldb_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
-	(0x40a00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-union dlb2_chp_ldb_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_ldb_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_ldb_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
-	(0x40c00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-union dlb2_chp_ldb_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WPTR(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-union dlb2_chp_ldb_cq_wptr {
-	struct {
-		u32 write_pointer : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ2VAS(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-union dlb2_chp_ldb_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-union dlb2_chp_cfg_chp_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 dlb_cor_alarm_enable : 1;
-		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
-		u32 cfg_64bytes_qe_dir_cq_mode : 1;
-		u32 pad_write_ldb : 1;
-		u32 pad_write_dir : 1;
-		u32 pad_first_write_ldb : 1;
-		u32 pad_first_write_dir : 1;
-		u32 rsvz0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_dir_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_dir_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_dir_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
-#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-union dlb2_chp_cfg_dir_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
-#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-union dlb2_chp_cfg_dir_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_dir_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_dir_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_ldb_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
-#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
-#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_ldb_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_ldb_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
-#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-union dlb2_chp_ctrl_diag_02 {
-	struct {
-		u32 egress_credit_status_empty : 1;
-		u32 egress_credit_status_afull : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
-		u32 chp_lsp_tok_pipe_credit_status_empty : 1;
-		u32 chp_lsp_tok_pipe_credit_status_afull : 1;
-		u32 chp_rop_pipe_credit_status_empty : 1;
-		u32 chp_rop_pipe_credit_status_afull : 1;
-		u32 qed_to_cq_pipe_credit_status_empty : 1;
-		u32 qed_to_cq_pipe_credit_status_afull : 1;
-		u32 egress_lsp_token_credit_status_empty : 1;
-		u32 egress_lsp_token_credit_status_afull : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_DIR_CSR_CTRL 0x54000010
-#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-union dlb2_dp_dir_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 rsvz0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
-	(0x96000000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_0_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
-	(0x96010000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_1_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
-#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
-union dlb2_ro_pipe_grp_sn_mode {
-	struct {
-		u32 sn_mode_0 : 3;
-		u32 rszv0 : 5;
-		u32 sn_mode_1 : 3;
-		u32 rszv1 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_ro_pipe_cfg_ctrl_general_0 {
-	struct {
-		u32 unit_single_step_mode : 1;
-		u32 rr_en : 1;
-		u32 rszv0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2PRIOV(x) \
-	(0xa0000000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2PRIOV_RST 0x0
-union dlb2_lsp_cq2priov {
-	struct {
-		u32 prio : 24;
-		u32 v : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID0(x) \
-	(0xa0080000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID0_RST 0x0
-union dlb2_lsp_cq2qid0 {
-	struct {
-		u32 qid_p0 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p1 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p2 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p3 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID1(x) \
-	(0xa0100000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID1_RST 0x0
-union dlb2_lsp_cq2qid1 {
-	struct {
-		u32 qid_p4 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p5 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p6 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p7 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_DSBL(x) \
-	(0xa0180000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-union dlb2_lsp_cq_dir_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
-	(0xa0200000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_dir_tkn_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0xa0280000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
-	struct {
-		u32 token_depth_select : 4;
-		u32 disable_wb_opt : 1;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0xa0300000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0xa0380000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_DSBL(x) \
-	(0xa0400000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-union dlb2_lsp_cq_ldb_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
-	(0xa0480000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
-	(0xa0500000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_cq_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
-	(0xa0580000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_cnt {
-	struct {
-		u32 token_count : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0xa0600000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0xa0680000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0xa0700000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
-	(0xa0780000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_dir_max_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0xa0800000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0xa0880000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0xa0900000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_dir_enqueue_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0xa0980000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_dir_depth_thrsh {
-	struct {
-		u32 thresh : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0xa0a00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-union dlb2_lsp_qid_aqed_active_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0xa0a80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-union dlb2_lsp_qid_aqed_active_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0xa0b00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0xa0b80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
-	(0xa0c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_atq_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0xa0c80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
-	(0xa0d00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
-	(0xa0d80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_qid_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX_00(x) \
-	(0xa0e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX_NUM 16
-union dlb2_lsp_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX2_00(x) \
-	(0xa1600000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX2_NUM 16
-union dlb2_lsp_qid2cqidix2_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
-	(0xa1e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_replay_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0xa1f00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_naldb_max_depth {
-	struct {
-		u32 depth : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0xa1f80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0xa2000000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0xa2080000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_atm_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0xa2100000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_naldb_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_ACTIVE(x) \
-	(0xa2180000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-union dlb2_lsp_qid_atm_active {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
-#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-union dlb2_lsp_ldb_sched_ctrl {
-	struct {
-		u32 cq : 8;
-		u32 qidix : 3;
-		u32 value : 1;
-		u32 nalb_haswork_v : 1;
-		u32 rlist_haswork_v : 1;
-		u32 slist_haswork_v : 1;
-		u32 inflight_ok_v : 1;
-		u32 aqed_nfull_v : 1;
-		u32 rsvz0 : 15;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
-#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-union dlb2_lsp_dir_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
-#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-union dlb2_lsp_dir_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
-#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
-#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
-#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-union dlb2_lsp_cfg_shdw_ctrl {
-	struct {
-		u32 transfer : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
-	(0xa4000074 + (x) * 4)
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-union dlb2_lsp_cfg_shdw_range_cos {
-	struct {
-		u32 bw_range : 9;
-		u32 rsvz0 : 22;
-		u32 no_extra_credit : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_lsp_cfg_ctrl_general_0 {
-	struct {
-		u32 disab_atq_empty_arb : 1;
-		u32 inc_tok_unit_idle : 1;
-		u32 disab_rlist_pri : 1;
-		u32 inc_cmp_unit_idle : 1;
-		u32 rsvz0 : 2;
-		u32 dir_single_op : 1;
-		u32 dir_half_bw : 1;
-		u32 dir_single_out : 1;
-		u32 dir_disab_multi : 1;
-		u32 atq_single_op : 1;
-		u32 atq_half_bw : 1;
-		u32 atq_single_out : 1;
-		u32 atq_disab_multi : 1;
-		u32 dirrpl_single_op : 1;
-		u32 dirrpl_half_bw : 1;
-		u32 dirrpl_single_out : 1;
-		u32 lbrpl_single_op : 1;
-		u32 lbrpl_half_bw : 1;
-		u32 lbrpl_single_out : 1;
-		u32 ldb_single_op : 1;
-		u32 ldb_half_bw : 1;
-		u32 ldb_disab_multi : 1;
-		u32 atm_single_sch : 1;
-		u32 atm_single_cmp : 1;
-		u32 ldb_ce_tog_arb : 1;
-		u32 rsvz1 : 1;
-		u32 smon0_valid_sel : 2;
-		u32 smon0_value_sel : 1;
-		u32 smon0_compare_sel : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
-#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
-union dlb2_cfg_mstr_diag_reset_sts {
-	struct {
-		u32 chp_pf_reset_done : 1;
-		u32 rop_pf_reset_done : 1;
-		u32 lsp_pf_reset_done : 1;
-		u32 nalb_pf_reset_done : 1;
-		u32 ap_pf_reset_done : 1;
-		u32 dp_pf_reset_done : 1;
-		u32 qed_pf_reset_done : 1;
-		u32 dqed_pf_reset_done : 1;
-		u32 aqed_pf_reset_done : 1;
-		u32 sys_pf_reset_done : 1;
-		u32 pf_reset_active : 1;
-		u32 flrsm_state : 7;
-		u32 rsvd0 : 13;
-		u32 dlb_proc_reset_done : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
-	struct {
-		u32 chp_pipeidle : 1;
-		u32 rop_pipeidle : 1;
-		u32 lsp_pipeidle : 1;
-		u32 nalb_pipeidle : 1;
-		u32 ap_pipeidle : 1;
-		u32 dp_pipeidle : 1;
-		u32 qed_pipeidle : 1;
-		u32 dqed_pipeidle : 1;
-		u32 aqed_pipeidle : 1;
-		u32 sys_pipeidle : 1;
-		u32 chp_unit_idle : 1;
-		u32 rop_unit_idle : 1;
-		u32 lsp_unit_idle : 1;
-		u32 nalb_unit_idle : 1;
-		u32 ap_unit_idle : 1;
-		u32 dp_unit_idle : 1;
-		u32 qed_unit_idle : 1;
-		u32 dqed_unit_idle : 1;
-		u32 aqed_unit_idle : 1;
-		u32 sys_unit_idle : 1;
-		u32 rsvd1 : 4;
-		u32 mstr_cfg_ring_idle : 1;
-		u32 mstr_cfg_mstr_idle : 1;
-		u32 mstr_flr_clkreq_b : 1;
-		u32 mstr_proc_idle : 1;
-		u32 mstr_proc_idle_masked : 1;
-		u32 rsvd0 : 2;
-		u32 dlb_func_idle : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
-#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
-union dlb2_cfg_mstr_cfg_pm_status {
-	struct {
-		u32 prochot : 1;
-		u32 pgcb_dlb_idle : 1;
-		u32 pgcb_dlb_pg_rdy_ack_b : 1;
-		u32 pmsm_pgcb_req_b : 1;
-		u32 pgbc_pmc_pg_req_b : 1;
-		u32 pmc_pgcb_pg_ack_b : 1;
-		u32 pmc_pgcb_fet_en_b : 1;
-		u32 pgcb_fet_en_b : 1;
-		u32 rsvz0 : 1;
-		u32 rsvz1 : 1;
-		u32 fuse_force_on : 1;
-		u32 fuse_proc_disable : 1;
-		u32 rsvz2 : 1;
-		u32 rsvz3 : 1;
-		u32 pm_fsm_d0tod3_ok : 1;
-		u32 pm_fsm_d3tod0_ok : 1;
-		u32 dlb_in_d3 : 1;
-		u32 rsvz4 : 7;
-		u32 pmsm : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
-union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
-	struct {
-		u32 disable : 1;
-		u32 rsvz0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
-	(0x1000 + (x) * 0x4)
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_vf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
-union dlb2_func_vf_vf2pf_mailbox_isr {
-	struct {
-		u32 isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
-	(0x2000 + (x) * 0x4)
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox_isr {
-	struct {
-		u32 pf_isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
-union dlb2_func_vf_vf_msi_isr_pend {
-	struct {
-		u32 isr_pend : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
-union dlb2_func_vf_vf_reset_in_progress {
-	struct {
-		u32 reset_in_progress : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
-#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
-union dlb2_func_vf_vf_msi_isr {
-	struct {
-		u32 vf_msi_isr : 32;
-	} field;
-	u32 val;
-};
-
-#endif /* __DLB2_REGS_H */
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 24/25] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (22 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 23/25] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 25/25] event/dlb2: update xstats for DLB v2.5 Timothy McDaniel
  2021-03-21 10:50 ` [dpdk-dev] [PATCH 00/25] Add Support " Jerin Jacob
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

All references to the old register map have been removed, so it is
safe to rename the new combined file that supports both DLB v2.0 and
DLB v2.5. All places where this file is included have also been
updated.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h                  | 2 +-
 drivers/event/dlb2/pf/base/{dlb2_regs_new.h => dlb2_regs.h} | 6 +++---
 drivers/event/dlb2/pf/base/dlb2_resource.c                  | 2 +-
 drivers/event/dlb2/pf/dlb2_main.c                           | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)
 rename drivers/event/dlb2/pf/base/{dlb2_regs_new.h => dlb2_regs.h} (99%)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 0f418ef5d..db9dfd240 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -10,7 +10,7 @@
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 
 #define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
 				 | (((val) << (mask##_LOC)) & (mask)))
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
similarity index 99%
rename from drivers/event/dlb2/pf/base/dlb2_regs_new.h
rename to drivers/event/dlb2/pf/base/dlb2_regs.h
index 593243d63..cdff5cb1f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_regs_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
@@ -2,8 +2,8 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#ifndef __DLB2_REGS_NEW_H
-#define __DLB2_REGS_NEW_H
+#ifndef __DLB2_REGS_H
+#define __DLB2_REGS_H
 
 #include "dlb2_osdep_types.h"
 
@@ -4409,4 +4409,4 @@
 #define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
 #define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
 
-#endif /* __DLB2_REGS_NEW_H */
+#endif /* __DLB2_REGS_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index e5fa0f047..d71adce16 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -9,7 +9,7 @@
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 1f6ccf8e4..b6ec85b47 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_regs_new.h"
+#include "base/dlb2_regs.h"
 #include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH 25/25] event/dlb2: update xstats for DLB v2.5
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (23 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 24/25] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
@ 2021-03-16 22:18 ` Timothy McDaniel
  2021-03-21 10:50 ` [dpdk-dev] [PATCH 00/25] Add Support " Jerin Jacob
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-16 22:18 UTC (permalink / raw)
  To: dev
  Cc: jerinj, harry.van.haaren, mdr, nhorman, nikhil.rao,
	erik.g.carrillo, abhinandan.gujjar, pbhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

Add DLB v2.5-specific information, such as credit metrics, to xstats.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_xstats.c | 41 ++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index b62e62060..d4c8d9903 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -9,6 +9,7 @@
 
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
+#include "pf/base/dlb2_regs.h"
 
 enum dlb2_xstats_type {
 	/* common to device and port */
@@ -21,6 +22,7 @@ enum dlb2_xstats_type {
 	zero_polls,			/**< Call dequeue burst and return 0 */
 	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
 	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
+	tx_nospc_hw_credits,		/**< Insufficient h/w credits */
 	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
 	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
 	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
@@ -29,6 +31,7 @@ enum dlb2_xstats_type {
 	inflight_events,
 	ldb_pool_size,
 	dir_pool_size,
+	pool_size,
 	/* port specific */
 	tx_new,				/**< Send an OP_NEW event */
 	tx_fwd,				/**< Send an OP_FORWARD event */
@@ -129,6 +132,9 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 		case tx_nospc_dir_hw_credits:
 			val += port->stats.traffic.tx_nospc_dir_hw_credits;
 			break;
+		case tx_nospc_hw_credits:
+			val += port->stats.traffic.tx_nospc_hw_credits;
+			break;
 		case tx_nospc_inflight_max:
 			val += port->stats.traffic.tx_nospc_inflight_max;
 			break;
@@ -159,6 +165,7 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 	case zero_polls:
 	case tx_nospc_ldb_hw_credits:
 	case tx_nospc_dir_hw_credits:
+	case tx_nospc_hw_credits:
 	case tx_nospc_inflight_max:
 	case tx_nospc_new_event_limit:
 	case tx_nospc_inflight_credits:
@@ -171,6 +178,8 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 		return dlb2->num_ldb_credits;
 	case dir_pool_size:
 		return dlb2->num_dir_credits;
+	case pool_size:
+		return dlb2->num_credits;
 	default: return -1;
 	}
 }
@@ -203,6 +212,9 @@ get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
 	case tx_nospc_dir_hw_credits:
 		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
 
+	case tx_nospc_hw_credits:
+		return ev_port->stats.traffic.tx_nospc_hw_credits;
+
 	case tx_nospc_inflight_max:
 		return ev_port->stats.traffic.tx_nospc_inflight_max;
 
@@ -357,6 +369,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -364,6 +377,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"inflight_events",
 		"ldb_pool_size",
 		"dir_pool_size",
+		"pool_size",
 	};
 	static const enum dlb2_xstats_type dev_types[] = {
 		rx_ok,
@@ -375,6 +389,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -382,6 +397,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		inflight_events,
 		ldb_pool_size,
 		dir_pool_size,
+		pool_size,
 	};
 	/* Note: generated device stats are not allowed to be reset. */
 	static const uint8_t dev_reset_allowed[] = {
@@ -394,6 +410,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* zero_polls */
 		0, /* tx_nospc_ldb_hw_credits */
 		0, /* tx_nospc_dir_hw_credits */
+		0, /* tx_nospc_hw_credits */
 		0, /* tx_nospc_inflight_max */
 		0, /* tx_nospc_new_event_limit */
 		0, /* tx_nospc_inflight_credits */
@@ -401,6 +418,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* inflight_events */
 		0, /* ldb_pool_size */
 		0, /* dir_pool_size */
+		0, /* pool_size */
 	};
 	static const char * const port_stats[] = {
 		"is_configured",
@@ -415,6 +433,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -448,6 +467,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -481,6 +501,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		1, /* zero_polls */
 		1, /* tx_nospc_ldb_hw_credits */
 		1, /* tx_nospc_dir_hw_credits */
+		1, /* tx_nospc_hw_credits */
 		1, /* tx_nospc_inflight_max */
 		1, /* tx_nospc_new_event_limit */
 		1, /* tx_nospc_inflight_credits */
@@ -935,8 +956,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
@@ -949,8 +970,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_QUEUES(dlb2->version); i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
@@ -1048,6 +1069,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 	fprintf(f, "\tnum_dir_credits = %u\n",
 		dlb2->hw_rsrc_query_results.num_dir_credits);
 
+	fprintf(f, "\tnum_credits = %u\n",
+		dlb2->hw_rsrc_query_results.num_credits);
+
 	/* Port level information */
 
 	for (i = 0; i < dlb2->num_ports; i++) {
@@ -1102,6 +1126,12 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\tdir_credits = %u\n",
 			p->qm_port.dir_credits);
 
+		fprintf(f, "\tcached_credits = %u\n",
+			p->qm_port.cached_credits);
+
+		fprintf(f, "\tdir_credits = %u\n",
+			p->qm_port.credits);
+
 		fprintf(f, "\tgenbit=%d, cq_idx=%d, cq_depth=%d\n",
 			p->qm_port.gen_bit,
 			p->qm_port.cq_idx,
@@ -1139,6 +1169,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_dir_hw_credits);
 
+		fprintf(f, "\t\ttx_nospc_hw_credits %" PRIu64 "\n",
+			p->stats.traffic.tx_nospc_hw_credits);
+
 		fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_inflight_max);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
@ 2021-03-21  9:48   ` Jerin Jacob
  2021-03-24 19:31     ` McDaniel, Timothy
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-03-21  9:48 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Nikhil Rao, Erik Gabriel Carrillo, Gujjar,
	Abhinandan S, Pavan Nikhilesh, Hemant Agrawal,
	Mattias Rönnblom, Peter Mccarthy

On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This commit adds dlb v2.5 probe support, and updates
> parameter parsing.
>
> The dlb v2.5 device differs from dlb v2, in that the
> number of resources (ports, queues, ...) is different,
> so macros have been added to take the device version
> into account.
>
> This commit also cleans up a few issues in the original
> dlb2 source:
> - eliminate duplicate constant definitions
> - removed unused constant definitions
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---

>
> -#define EVDEV_DLB2_NAME_PMD dlb2_event
> +#define EVDEV_DLB2_NAME_PMD dlb_event

Is this an intended change? Why change the driver's name?

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [EXT] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init Timothy McDaniel
@ 2021-03-21 10:30   ` Jerin Jacob Kollanukkaran
  2021-03-26 16:37     ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2021-03-21 10:30 UTC (permalink / raw)
  To: Timothy McDaniel, dev
  Cc: harry.van.haaren, mdr, nhorman, nikhil.rao, erik.g.carrillo,
	abhinandan.gujjar, Pavan Nikhilesh Bhagavatula, hemant.agrawal,
	mattias.ronnblom, peter.mccarthy

> -----Original Message-----
> From: Timothy McDaniel <timothy.mcdaniel@intel.com>
> Sent: Wednesday, March 17, 2021 3:49 AM
> To: dev@dpdk.org
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> harry.van.haaren@intel.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> nikhil.rao@intel.com; erik.g.carrillo@intel.com; abhinandan.gujjar@intel.com;
> Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
> hemant.agrawal@nxp.com; mattias.ronnblom@ericsson.com;
> peter.mccarthy@intel.com
> Subject: [EXT] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware
> init


Please simplify the subject lines in all the patches, e.g.:
event/dlb2: add v2.5 HW init
 
 
> ----------------------------------------------------------------------
> This commit adds support for DLB v2.5 probe-time hardware init,
> and sets up a framework for incorporating the remaining
> changes required to support DLB v2.5.
> 
> DLB v2.0 and DLB v2.5 are similar in many respects, but their
> register offsets and definitions are different. As a result of these,
> differences, the low level hardware functions must take the devicei


s/devicei/device

> version into consideration. This requires that the hardware version be
> passed to many of the low level functions, so that the PMD can
> take the appropriate action based on the device version.
> 
> To ease the transition and keep the individual patches small, three
> temporary files are added in this commit. These files have "new"
> in their names.  The files with "new" contain changes specific to a
> consolidated PMD that supports both DLB v2.0 and DLB 2.5. Their sister
> files of the same name (minus "new") contain the old DLB v2.0 specific
> code. The intent is to remove code from the original files as that code
> is ported to the combined DLB 2.0/2.5 PMD model and added to the "new"
> files in a series of commits. At end of the patch series, the old files
> will be empty and the "new" files will have the logic needed
> to implement a single PMD that supports both DLB v2.0 and DLB v2.5.
> At that time, the original DLB v2.0 specific files will be deleted,
> and the "new" files will be renamed and replace them.
> 
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---
>  drivers/event/dlb2/dlb2_priv.h                |    5 +
>  drivers/event/dlb2/meson.build                |    1 +
>  .../event/dlb2/pf/base/dlb2_hw_types_new.h    |  362 ++
>  drivers/event/dlb2/pf/base/dlb2_mbox.h        |    1 -
>  drivers/event/dlb2/pf/base/dlb2_osdep.h       |    4 +
>  drivers/event/dlb2/pf/base/dlb2_regs_new.h    | 4412 +++++++++++++++++
>  drivers/event/dlb2/pf/base/dlb2_resource.c    |  180 +-
>  drivers/event/dlb2/pf/base/dlb2_resource.h    |   36 -
>  .../event/dlb2/pf/base/dlb2_resource_new.c    |  271 +
>  .../event/dlb2/pf/base/dlb2_resource_new.h    |   73 +
>  drivers/event/dlb2/pf/dlb2_main.c             |   41 +-
>  drivers/event/dlb2/pf/dlb2_main.h             |    4 +
>  drivers/event/dlb2/pf/dlb2_pf.c               |    6 +-
>  13 files changed, 5165 insertions(+), 231 deletions(-)
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h
> 
> +#ifdef FPGA

Don't do this. Either detect the FPGA presence or make it devargs

> +#define DLB2_HZ					2000000
> +#else
> +#define DLB2_HZ					800000000
> +#endif
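
A minimal sketch of the devargs alternative, assuming a hypothetical
"clock_hz" key handled through rte_kvargs (illustrative only, not part
of this patch):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <rte_common.h>
#include <rte_kvargs.h>

/* Hypothetical handler: read the clock rate from a "clock_hz" devarg
 * instead of hard-coding it behind #ifdef FPGA.
 */
static int
set_clock_hz(const char *key __rte_unused, const char *value, void *opaque)
{
	uint64_t *clock_hz = opaque;

	if (value == NULL || opaque == NULL)
		return -EINVAL;

	*clock_hz = strtoull(value, NULL, 10);
	return 0;
}

/* In the existing devargs parsing path, something like:
 *	rte_kvargs_process(kvlist, "clock_hz", set_clock_hz, &clock_hz);
 */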
> +
> +
> +/* TEMPORARY inclusion of both headers for merge */


Please make sure to remove these comments in subsequent patches.

> b/drivers/event/dlb2/pf/dlb2_main.h
> index f3bee71fb..01a24e8a4 100644
> --- a/drivers/event/dlb2/pf/dlb2_main.h
> +++ b/drivers/event/dlb2/pf/dlb2_main.h
> @@ -15,7 +15,11 @@
>  #define PAGE_SIZE (sysconf(_SC_PAGESIZE))

Please use DPDK APIs for this.

>  #endif
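
A minimal sketch of the suggested replacement, assuming
rte_mem_page_size() from <rte_eal_paging.h> is the DPDK API intended
here (illustrative only):

#include <stddef.h>
#include <rte_eal_paging.h>

/* Query the page size through the EAL helper instead of the
 * POSIX-only sysconf(_SC_PAGESIZE).
 */
static inline size_t
dlb2_page_size(void)
{
	return rte_mem_page_size();
}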

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5
  2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
                   ` (24 preceding siblings ...)
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 25/25] event/dlb2: update xstats for DLB v2.5 Timothy McDaniel
@ 2021-03-21 10:50 ` Jerin Jacob
  25 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-03-21 10:50 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Nikhil Rao, Erik Gabriel Carrillo, Gujjar,
	Abhinandan S, Pavan Nikhilesh, Hemant Agrawal,
	Mattias Rönnblom, Peter Mccarthy

On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This patch series adds support for DLB v2.5 to
> the current DLB V2.0 PMD. The resulting PMD supports
> both hardware versions.
>
> The main differences between the DLB v2.5 and v2.0 hardware
> are:
> - Number of queues/ports
> - DLB v2.5 uses a combined credit pool, whereas DLB v2.0
>   splits credits into 2 pools, a directed credit pool and a
>   load balanced credit pool.
> - Different register maps, with different bit names and offsets
>
> In order to support both hardware versions with the same PMD,
> and avoid code duplication, the file dlb2_resource.c required a
> complete rewrite. This required some creative staging of the changes
> in order to keep the individual patches relatively small, while
> also meeting the requirement that all individual patches in the set
> compile cleanly.
>
> To accomplish this, a few temporary files are used:
>
> dlb2_hw_types_new.h
> dlb2_resources_new.h
> dlb2_resources_new.c
>
> As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
> low level logic, the corresponding old code is removed from
> dlb2_resource.c, thus allowing both the original and new code to
> continue to compile and link cleanly. Once all of the code has been
> migrated to the new model, the old versions of the files are removed,
> and the new versions are renamed, effectively replacing the old original
> files.


# Please make sure each patch compiles. It fails on the second
patch[1] now with clang.
# Please check each patch with ./devtools/test-meson-builds.sh.
# Also, update the release notes for 2.5 HW support.

[1]
FAILED: drivers/libtmp_rte_event_dlb2.a.p/event_dlb2_pf_base_dlb2_resource_new.c.o
ccache clang -Idrivers/libtmp_rte_event_dlb2.a.p -Idrivers -I../drivers
-Idrivers/event/dlb2 -I../drivers/event/dlb2 -Ilib/librte_eventdev
-I../lib/librte_eventdev -I. -I.. -Iconfig -I../config
-Ilib/librte_eal/include -I../lib/librte_eal/include
-Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
-Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
-I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
-Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
-I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
-Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
-I../lib/librte_net -Ilib/librte_mbuf -I../lib/librte_mbuf
-Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_meter
-I../lib/librte_meter -Ilib/librte_hash -I../lib/librte_hash
-Ilib/librte_rcu -I../lib/librte_rcu -Ilib/librte_timer
-I../lib/librte_timer -Ilib/librte_cryptodev -I../lib/librte_cryptodev
-Ilib/librte_pci -I../lib/librte_pci -Idrivers/bus/pci
-I../drivers/bus/pci -I../drivers/bus/pci/linux -Xclang
-fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
-Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
-Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wold-style-definition
-Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
-Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -MD
-MQ drivers/libtmp_rte_event_dlb2.a.p/event_dlb2_pf_base_dlb2_resource_new.c.o
-MF drivers/libtmp_rte_event_dlb2.a.p/event_dlb2_pf_base_dlb2_resource_new.c.o.d
-o drivers/libtmp_rte_event_dlb2.a.p/event_dlb2_pf_base_dlb2_resource_new.c.o
-c ../drivers/event/dlb2/pf/base/dlb2_resource_new.c
../drivers/event/dlb2/pf/base/dlb2_resource_new.c:44:20: error: unused function 'dlb2_flush_csr' [-Werror,-Wunused-function]
static inline void dlb2_flush_csr(struct dlb2_hw *hw)
                   ^
1 error generated.
[1976/2578] Compiling C ob


>
> As you review the code, you can ignore the code deletions from
> dlb2_resource.c, as that file continues to shrink as the new
> corresponding logic is added to dlb2_resource_new.c.
>
> Timothy McDaniel (25):
>   event/dlb2: add dlb v2.5 probe
>   event/dlb2: add DLB v2.5 probe-time hardware init
>   event/dlb2: add DLB v2.5 support to get_resources
>   event/dlb2: add DLB v2.5 support to create sched domain
>   event/dlb2: add DLB v2.5 support to domain reset
>   event/dlb2: add DLB V2.5 support to create ldb queue
>   event/dlb2: add DLB v2.5 support to create ldb port
>   event/dlb2: add DLB v2.5 support to create dir port
>   event/dlb2: add DLB v2.5 support to create dir queue
>   event/dlb2: add DLB v2.5 support to map qid
>   event/dlb2: add DLB v2.5 support to unmap queue
>   event/dlb2: add DLB v2.5 support to start domain
>   event/dlb2: add DLB v2.5 credit scheme
>   event/dlb2: Add DLB v2.5 support to get queue depth functions
>   event/dlb2: add DLB v2.5 finish map/unmap interfaces
>   event/dlb2: add DLB v2.5 sparse cq mode
>   event/dlb2: add DLB v2.5 support to sequence number management
>   event/dlb2: consolidate dlb resource header files into one file
>   event/dlb2: delete old dlb2_resource.c file
>   event/dlb2: move dlb_resource_new.c to dlb_resource.c
>   event/dlb2: remove temporary file, dlb_hw_types.h
>   event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
>   event/dlb2: delete old register map file, dlb2_regs.h
>   event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
>   event/dlb2: update xstats for DLB v2.5
>
>  drivers/event/dlb2/dlb2.c                  |  430 +-
>  drivers/event/dlb2/dlb2_priv.h             |  158 +-
>  drivers/event/dlb2/dlb2_user.h             |   27 +-
>  drivers/event/dlb2/dlb2_xstats.c           |   70 +-
>  drivers/event/dlb2/pf/base/dlb2_hw_types.h |  102 +-
>  drivers/event/dlb2/pf/base/dlb2_mbox.h     |    1 -
>  drivers/event/dlb2/pf/base/dlb2_osdep.h    |    3 +
>  drivers/event/dlb2/pf/base/dlb2_regs.h     | 6063 +++++++++++++-------
>  drivers/event/dlb2/pf/base/dlb2_resource.c | 3277 ++++++-----
>  drivers/event/dlb2/pf/base/dlb2_resource.h |   28 +-
>  drivers/event/dlb2/pf/dlb2_main.c          |   37 +-
>  drivers/event/dlb2/pf/dlb2_pf.c            |   62 +-
>  12 files changed, 6366 insertions(+), 3892 deletions(-)
>
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-21  9:48   ` Jerin Jacob
@ 2021-03-24 19:31     ` McDaniel, Timothy
  2021-03-26 11:01       ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: McDaniel, Timothy @ 2021-03-24 19:31 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Sunday, March 21, 2021 4:48 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> <Peter.Mccarthy@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> 
> On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > This commit adds dlb v2.5 probe support, and updates
> > parameter parsing.
> >
> > The dlb v2.5 device differs from dlb v2, in that the
> > number of resources (ports, queues, ...) is different,
> > so macros have been added to take the device version
> > into account.
> >
> > This commit also cleans up a few issues in the original
> > dlb2 source:
> > - eliminate duplicate constant definitions
> > - removed unused constant definitions
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > ---
> 
> >
> > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > +#define EVDEV_DLB2_NAME_PMD dlb_event
> 
> Is this an intended change? why change the driver's name.

Yes, This is an intentional change.  We will be using the same driver name going forward, regardless of the hardware version.
Internally, we know which version of the hardware is present.

Thanks,
Tim


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-24 19:31     ` McDaniel, Timothy
@ 2021-03-26 11:01       ` Jerin Jacob
  2021-03-26 14:03         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-03-26 11:01 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter

On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Sunday, March 21, 2021 4:48 AM
> > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > <Peter.Mccarthy@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> >
> > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > > This commit adds dlb v2.5 probe support, and updates
> > > parameter parsing.
> > >
> > > The dlb v2.5 device differs from dlb v2, in that the
> > > number of resources (ports, queues, ...) is different,
> > > so macros have been added to take the device version
> > > into account.
> > >
> > > This commit also cleans up a few issues in the original
> > > dlb2 source:
> > > - eliminate duplicate constant definitions
> > > - removed unused constant definitions
> > >
> > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > ---
> >
> > >
> > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> >
> > Is this an intended change? why change the driver's name.
>
> Yes, This is an intentional change.  We will be using the same driver name going forward, regardless of the hardware version.
> Internally, we know which version of the hardware is present.

Since the driver name is still driver/event/dlb2. Keep it as same
prefix scheme with other drivers.


>
> Thanks,
> Tim
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-26 11:01       ` Jerin Jacob
@ 2021-03-26 14:03         ` McDaniel, Timothy
  2021-03-26 14:33           ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: McDaniel, Timothy @ 2021-03-26 14:03 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Friday, March 26, 2021 6:01 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> <peter.mccarthy@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> 
> On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
> <timothy.mcdaniel@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Sunday, March 21, 2021 4:48 AM
> > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> Haaren,
> > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > <pbhagavatula@marvell.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>;
> > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > <Peter.Mccarthy@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > >
> > > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > > <timothy.mcdaniel@intel.com> wrote:
> > > >
> > > > This commit adds dlb v2.5 probe support, and updates
> > > > parameter parsing.
> > > >
> > > > The dlb v2.5 device differs from dlb v2, in that the
> > > > number of resources (ports, queues, ...) is different,
> > > > so macros have been added to take the device version
> > > > into account.
> > > >
> > > > This commit also cleans up a few issues in the original
> > > > dlb2 source:
> > > > - eliminate duplicate constant definitions
> > > > - removed unused constant definitions
> > > >
> > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > ---
> > >
> > > >
> > > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> > >
> > > Is this an intended change? why change the driver's name.
> >
> > Yes, This is an intentional change.  We will be using the same driver name
> going forward, regardless of the hardware version.
> > Internally, we know which version of the hardware is present.
> 
> Since the driver name is still driver/event/dlb2. Keep it as same
> prefix scheme with other drivers.
> 
> 
> >
> > Thanks,
> > Tim
> >

Would it be acceptable to rename drivers/event/dlb2 to drivers/event/dlb?
We may have additional dlb devices in the pipeline, such as v3, and we would really like
to have them all use a common name.


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-26 14:03         ` McDaniel, Timothy
@ 2021-03-26 14:33           ` Jerin Jacob
  2021-03-29 15:00             ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-03-26 14:33 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter

On Fri, Mar 26, 2021 at 7:33 PM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Friday, March 26, 2021 6:01 AM
> > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > <peter.mccarthy@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> >
> > On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Sunday, March 21, 2021 4:48 AM
> > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> > Haaren,
> > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>;
> > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > > <Peter.Mccarthy@intel.com>
> > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > >
> > > > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > > > <timothy.mcdaniel@intel.com> wrote:
> > > > >
> > > > > This commit adds dlb v2.5 probe support, and updates
> > > > > parameter parsing.
> > > > >
> > > > > The dlb v2.5 device differs from dlb v2, in that the
> > > > > number of resources (ports, queues, ...) is different,
> > > > > so macros have been added to take the device version
> > > > > into account.
> > > > >
> > > > > This commit also cleans up a few issues in the original
> > > > > dlb2 source:
> > > > > - eliminate duplicate constant definitions
> > > > > - removed unused constant definitions
> > > > >
> > > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > > ---
> > > >
> > > > >
> > > > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> > > >
> > > > Is this an intended change? why change the driver's name.
> > >
> > > Yes, This is an intentional change.  We will be using the same driver name
> > going forward, regardless of the hardware version.
> > > Internally, we know which version of the hardware is present.
> >
> > Since the driver name is still driver/event/dlb2. Keep it as same
> > prefix scheme with other drivers.
> >
> >
> > >
> > > Thanks,
> > > Tim
> > >
>
> Would it be acceptable to rename drivers/event/dlb2 to drivers/event/dlb?
> We may have additional dlb devices in the pipeline, such as v3, and we would really like
> to have them all use a common name.

Makes sense to change to drivers/event/dlb. I think we can move to
dlb when you add v3 support; there is no need now.


>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [EXT] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init
  2021-03-21 10:30   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
@ 2021-03-26 16:37     ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-03-26 16:37 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, dev
  Cc: Van Haaren, Harry, mdr, nhorman, Rao, Nikhil, Carrillo, Erik G,
	Gujjar, Abhinandan S, Pavan Nikhilesh Bhagavatula,
	hemant.agrawal, mattias.ronnblom, Mccarthy, Peter



> -----Original Message-----
> From: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Sent: Sunday, March 21, 2021 5:30 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>; dev@dpdk.org
> Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; mdr@ashroe.eu;
> nhorman@tuxdriver.com; Rao, Nikhil <nikhil.rao@intel.com>; Carrillo, Erik G
> <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; hemant.agrawal@nxp.com; mattias.ronnblom
> <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> <Peter.Mccarthy@intel.com>
> Subject: RE: [EXT] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware
> init
> 
> > -----Original Message-----
> > From: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > Sent: Wednesday, March 17, 2021 3:49 AM
> > To: dev@dpdk.org
> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> > harry.van.haaren@intel.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> > nikhil.rao@intel.com; erik.g.carrillo@intel.com; abhinandan.gujjar@intel.com;
> > Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
> > hemant.agrawal@nxp.com; mattias.ronnblom@ericsson.com;
> > peter.mccarthy@intel.com
> > Subject: [EXT] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware
> > init
> 
> 
> Please simplify the subject in all the patches, e.g.:
> event/dlb2: add v2.5 HW init
> 

Will do

> 
> > ----------------------------------------------------------------------
> > This commit adds support for DLB v2.5 probe-time hardware init,
> > and sets up a framework for incorporating the remaining
> > changes required to support DLB v2.5.
> >
> > DLB v2.0 and DLB v2.5 are similar in many respects, but their
> > register offsets and definitions are different. As a result of these,
> > differences, the low level hardware functions must take the devicei
> 
> 
> s/devicei/device
> 

fixed 

> > version into consideration. This requires that the hardware version be
> > passed to many of the low level functions, so that the PMD can
> > take the appropriate action based on the device version.
> >
> > To ease the transition and keep the individual patches small, three
> > temporary files are added in this commit. These files have "new"
> > in their names.  The files with "new" contain changes specific to a
> > consolidated PMD that supports both DLB v2.0 and DLB 2.5. Their sister
> > files of the same name (minus "new") contain the old DLB v2.0 specific
> > code. The intent is to remove code from the original files as that code
> > is ported to the combined DLB 2.0/2.5 PMD model and added to the "new"
> > files in a series of commits. At end of the patch series, the old files
> > will be empty and the "new" files will have the logic needed
> > to implement a single PMD that supports both DLB v2.0 and DLB v2.5.
> > At that time, the original DLB v2.0 specific files will be deleted,
> > and the "new" files will be renamed and replace them.
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > ---
> >  drivers/event/dlb2/dlb2_priv.h                |    5 +
> >  drivers/event/dlb2/meson.build                |    1 +
> >  .../event/dlb2/pf/base/dlb2_hw_types_new.h    |  362 ++
> >  drivers/event/dlb2/pf/base/dlb2_mbox.h        |    1 -
> >  drivers/event/dlb2/pf/base/dlb2_osdep.h       |    4 +
> >  drivers/event/dlb2/pf/base/dlb2_regs_new.h    | 4412 +++++++++++++++++
> >  drivers/event/dlb2/pf/base/dlb2_resource.c    |  180 +-
> >  drivers/event/dlb2/pf/base/dlb2_resource.h    |   36 -
> >  .../event/dlb2/pf/base/dlb2_resource_new.c    |  271 +
> >  .../event/dlb2/pf/base/dlb2_resource_new.h    |   73 +
> >  drivers/event/dlb2/pf/dlb2_main.c             |   41 +-
> >  drivers/event/dlb2/pf/dlb2_main.h             |    4 +
> >  drivers/event/dlb2/pf/dlb2_pf.c               |    6 +-
> >  13 files changed, 5165 insertions(+), 231 deletions(-)
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h
> >
> > +#ifdef FPGA
> 
> Don't do this. Either detect the FPGA presence or make it devargs
> 
> > +#define DLB2_HZ					2000000
> > +#else
> > +#define DLB2_HZ					800000000
> > +#endif
> > +
> > +
> > +/* TEMPORARY inclusion of both headers for merge */
> 

fixed

> 
> Please make sure to remove these comments in subsequent patches.
> 

will do

> > b/drivers/event/dlb2/pf/dlb2_main.h
> > index f3bee71fb..01a24e8a4 100644
> > --- a/drivers/event/dlb2/pf/dlb2_main.h
> > +++ b/drivers/event/dlb2/pf/dlb2_main.h
> > @@ -15,7 +15,11 @@
> >  #define PAGE_SIZE (sysconf(_SC_PAGESIZE))
> 
> Please use DPDK APIs for this.
> 

done

> >  #endif

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-26 14:33           ` Jerin Jacob
@ 2021-03-29 15:00             ` McDaniel, Timothy
  2021-03-29 15:51               ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: McDaniel, Timothy @ 2021-03-29 15:00 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Friday, March 26, 2021 9:33 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> <Peter.Mccarthy@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> 
> On Fri, Mar 26, 2021 at 7:33 PM McDaniel, Timothy
> <timothy.mcdaniel@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Friday, March 26, 2021 6:01 AM
> > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> Haaren,
> > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > <pbhagavatula@marvell.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>;
> > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > <peter.mccarthy@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > >
> > > On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
> > > <timothy.mcdaniel@intel.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Sunday, March 21, 2021 4:48 AM
> > > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> > > Haaren,
> > > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>;
> Neil
> > > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > > <hemant.agrawal@nxp.com>;
> > > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > > > <Peter.Mccarthy@intel.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > > >
> > > > > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > > > > <timothy.mcdaniel@intel.com> wrote:
> > > > > >
> > > > > > This commit adds dlb v2.5 probe support, and updates
> > > > > > parameter parsing.
> > > > > >
> > > > > > The dlb v2.5 device differs from dlb v2, in that the
> > > > > > number of resources (ports, queues, ...) is different,
> > > > > > so macros have been added to take the device version
> > > > > > into account.
> > > > > >
> > > > > > This commit also cleans up a few issues in the original
> > > > > > dlb2 source:
> > > > > > - eliminate duplicate constant definitions
> > > > > > - removed unused constant definitions
> > > > > >
> > > > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > > > ---
> > > > >
> > > > > >
> > > > > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > > > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> > > > >
> > > > > Is this an intended change? why change the driver's name.
> > > >
> > > > Yes, This is an intentional change.  We will be using the same driver name
> > > going forward, regardless of the hardware version.
> > > > Internally, we know which version of the hardware is present.
> > >
> > > Since the driver name is still driver/event/dlb2. Keep it as same
> > > prefix scheme with other drivers.
> > >
> > >
> > > >
> > > > Thanks,
> > > > Tim
> > > >
> >
> > Would it be acceptable to rename drivers/event/dlb2 to drivers/event/dlb?
> > We may have additional dlb devices in the pipeline, such as v3, and we would
> really like
> > to have them all use a common name.
> 
> Makes sense to change to drivers/event/dlb. I think we can move to
> dlb when you add v3 support; there is no need now.
> 
> 
> >

Hi Jerin,

I spoke to the team, and we would like to get this change in now. It happens that we have
several applications that use the eventdev API rte_event_dev_get_dev_id(const char *name).
Having a single name simplifies these applications, and also prevents customers from having to
update application source code every time a new dlb device is released.
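
For illustration, a minimal sketch of the lookup such applications
perform, assuming the proposed single PMD name "dlb_event" (not taken
from those applications):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_eventdev.h>

int
main(int argc, char **argv)
{
	int dev_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* With one stable PMD name, this lookup does not change when a
	 * newer DLB hardware revision ships.
	 */
	dev_id = rte_event_dev_get_dev_id("dlb_event");
	if (dev_id < 0) {
		printf("dlb_event device not found\n");
		return -1;
	}

	/* ... configure and use the event device via dev_id ... */
	return 0;
}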

Thanks,
Tim


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-29 15:00             ` McDaniel, Timothy
@ 2021-03-29 15:51               ` Jerin Jacob
  2021-03-29 15:55                 ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-03-29 15:51 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter

On Mon, Mar 29, 2021 at 8:30 PM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Friday, March 26, 2021 9:33 AM
> > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > <Peter.Mccarthy@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> >
> > On Fri, Mar 26, 2021 at 7:33 PM McDaniel, Timothy
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Friday, March 26, 2021 6:01 AM
> > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> > Haaren,
> > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > > Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>;
> > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > > <peter.mccarthy@intel.com>
> > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > >
> > > > On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
> > > > <timothy.mcdaniel@intel.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Sunday, March 21, 2021 4:48 AM
> > > > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> > > > Haaren,
> > > > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>;
> > Neil
> > > > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > > > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > > > <hemant.agrawal@nxp.com>;
> > > > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > > > > <Peter.Mccarthy@intel.com>
> > > > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > > > >
> > > > > > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > > > > > <timothy.mcdaniel@intel.com> wrote:
> > > > > > >
> > > > > > > This commit adds dlb v2.5 probe support, and updates
> > > > > > > parameter parsing.
> > > > > > >
> > > > > > > The dlb v2.5 device differs from dlb v2, in that the
> > > > > > > number of resources (ports, queues, ...) is different,
> > > > > > > so macros have been added to take the device version
> > > > > > > into account.
> > > > > > >
> > > > > > > This commit also cleans up a few issues in the original
> > > > > > > dlb2 source:
> > > > > > > - eliminate duplicate constant definitions
> > > > > > > - removed unused constant definitions
> > > > > > >
> > > > > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > > > > ---
> > > > > >
> > > > > > >
> > > > > > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > > > > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> > > > > >
> > > > > > Is this an intended change? why change the driver's name.
> > > > >
> > > > > Yes, This is an intentional change.  We will be using the same driver name
> > > > going forward, regardless of the hardware version.
> > > > > Internally, we know which version of the hardware is present.
> > > >
> > > > Since the driver name is still driver/event/dlb2. Keep it as same
> > > > prefix scheme with other drivers.
> > > >
> > > >
> > > > >
> > > > > Thanks,
> > > > > Tim
> > > > >
> > >
> > > Would it be acceptable to rename drivers/event/dlb2 to drivers/event/dlb?
> > > We may have additional dlb devices in the pipeline, such as v3, and we would
> > really like
> > > to have them all use a common name.
> >
> > Makes sense to change to drivers/event/dlb. I think we can move to
> > dlb when you add v3 support; there is no need now.
> >
> >
> > >
>
> Hi Jerin,
>
> I spoke to the team, and we would like to get this change in now. It happens that we have
> several applications that use the eventdev API rte_event_dev_get_dev_id(const char *name).
> Having a single name simplifies these applications, and also prevents customers from having to
> update application source code every time a new dlb device is released.


Now that we have removed drivers/event/dlb, please rename
drivers/event/dlb2 to drivers/event/dlb, change EVDEV_DLB_NAME_PMD
to dlb_event, and submit the new patches for v2.5.



>
> Thanks,
> Tim
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
  2021-03-29 15:51               ` Jerin Jacob
@ 2021-03-29 15:55                 ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-03-29 15:55 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Rao, Nikhil, Carrillo, Erik G, Gujjar, Abhinandan S,
	Pavan Nikhilesh, Hemant Agrawal, mattias.ronnblom, Mccarthy,
	Peter



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, March 29, 2021 10:51 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> <pbhagavatula@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> <peter.mccarthy@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> 
> On Mon, Mar 29, 2021 at 8:30 PM McDaniel, Timothy
> <timothy.mcdaniel@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Friday, March 26, 2021 9:33 AM
> > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> Haaren,
> > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil
> > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > <pbhagavatula@marvell.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>;
> > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > <Peter.Mccarthy@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > >
> > > On Fri, Mar 26, 2021 at 7:33 PM McDaniel, Timothy
> > > <timothy.mcdaniel@intel.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Friday, March 26, 2021 6:01 AM
> > > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Van
> > > Haaren,
> > > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>;
> Neil
> > > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > > > > Carrillo, Erik G <erik.g.carrillo@intel.com>; Gujjar, Abhinandan S
> > > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > > <hemant.agrawal@nxp.com>;
> > > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy, Peter
> > > > > <peter.mccarthy@intel.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > > >
> > > > > On Thu, Mar 25, 2021 at 1:01 AM McDaniel, Timothy
> > > > > <timothy.mcdaniel@intel.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Sunday, March 21, 2021 4:48 AM
> > > > > > > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>;
> Van
> > > > > Haaren,
> > > > > > > Harry <harry.van.haaren@intel.com>; Ray Kinsella
> <mdr@ashroe.eu>;
> > > Neil
> > > > > > > Horman <nhorman@tuxdriver.com>; Rao, Nikhil
> <nikhil.rao@intel.com>;
> > > > > > > Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Gujjar, Abhinandan S
> > > > > > > <abhinandan.gujjar@intel.com>; Pavan Nikhilesh
> > > > > > > <pbhagavatula@marvell.com>; Hemant Agrawal
> > > > > <hemant.agrawal@nxp.com>;
> > > > > > > mattias.ronnblom <mattias.ronnblom@ericsson.com>; Mccarthy,
> Peter
> > > > > > > <Peter.Mccarthy@intel.com>
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe
> > > > > > >
> > > > > > > On Wed, Mar 17, 2021 at 3:49 AM Timothy McDaniel
> > > > > > > <timothy.mcdaniel@intel.com> wrote:
> > > > > > > >
> > > > > > > > This commit adds dlb v2.5 probe support, and updates
> > > > > > > > parameter parsing.
> > > > > > > >
> > > > > > > > The dlb v2.5 device differs from dlb v2, in that the
> > > > > > > > number of resources (ports, queues, ...) is different,
> > > > > > > > so macros have been added to take the device version
> > > > > > > > into account.
> > > > > > > >
> > > > > > > > This commit also cleans up a few issues in the original
> > > > > > > > dlb2 source:
> > > > > > > > - eliminate duplicate constant definitions
> > > > > > > > - removed unused constant definitions
> > > > > > > >
> > > > > > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > > > > > ---
> > > > > > >
> > > > > > > >
> > > > > > > > -#define EVDEV_DLB2_NAME_PMD dlb2_event
> > > > > > > > +#define EVDEV_DLB2_NAME_PMD dlb_event
> > > > > > >
> > > > > > > Is this an intended change? why change the driver's name.
> > > > > >
> > > > > > Yes, This is an intentional change.  We will be using the same driver
> name
> > > > > going forward, regardless of the hardware version.
> > > > > > Internally, we know which version of the hardware is present.
> > > > >
> > > > > Since the driver name is still driver/event/dlb2. Keep it as same
> > > > > prefix scheme with other drivers.
> > > > >
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Tim
> > > > > >
> > > >
> > > > Would it be acceptable to rename drivers/event/dlb2 to drivers/event/dlb?
> > > > We may have additional dlb devices in the pipeline, such as v3, and we
> would
> > > really like
> > > > to have them all use a common name.
> > >
> > > Makes sense to change to drivers/event/dlb. I think we can move to
> > > dlb when you add v3 support; there is no need now.
> > >
> > >
> > > >
> >
> > Hi Jerin,
> >
> > I spoke to the team, and we would like to get this change in now. It happens
> that we have
> > several applications that use the eventdev API
> rte_event_dev_get_dev_id(const char *name).
> > Having a single name simplifies these applications, and also prevents
> customers from having to
> > update application source code every time a new dlb device is released.
> 
> 
> Now that we have removed drivers/event/dlb, please rename
> drivers/event/dlb2 to drivers/event/dlb, change EVDEV_DLB_NAME_PMD
> to dlb_event, and submit the new patches for v2.5.
> 
> 
> 
> >
> > Thanks,
> > Tim
> >

Thank you Jerin.  I will make those changes and submit a new patch set containing this and the other requested
changes.

Best Regards,
Tim


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
  2021-03-21  9:48   ` Jerin Jacob
@ 2021-03-30 19:35   ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 01/27] event/dlb2: add v2.5 probe Timothy McDaniel
                       ` (27 more replies)
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                     ` (2 subsequent siblings)
  4 siblings, 28 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This patch series adds support for DLB v2.5 to
the current DLB V2.0 PMD. The resulting PMD supports
both hardware versions.

The main differences between the DLB v2.5 and v2.0 hardware
are:
- Number of queues/ports
- DLB v2.5 uses a combined credit pool, whereas DLB v2.0
  splits credits into 2 pools, a directed credit pool and a
  load balanced credit pool.
- Different register maps, with different bit names and offsets

In order to support both hardware versions with the same PMD,
and avoid code duplication, the file dlb2_resource.c required a
complete rewrite. This required some creative staging of the changes
in order to keep the individual patches relatively small, while
also meeting the requirement that all individual patches in the set
compile cleanly.

To accomplish this, a few temporary files are used:

dlb2_hw_types_new.h
dlb2_resources_new.h
dlb2_resources_new.c

As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
low level logic, the corresponding old code is removed from
dlb2_resource.c, thus allowing both the original and new code to
continue to compile and link cleanly. Once all of the code has been
migrated to the new model, the old versions of the files are removed,
and the new versions are renamed, effectively replacing the old original
files.

As you review the code, you can ignore the code deletions from
dlb2_resource.c, as that file continues to shrink as the new
corresponding logic is added to dlb2_resource_new.c.

Changes since V1
1) Simplified subject text for all patches
2) corrected typos/spelling
3) removed FPGA references
4) removed stale sysconf() references
5) fixed patches that had compilation issues
6) updated release notes
7) renamed dlb device from dlb2_event to dlb_event
8) moved dlb2 directory to dlb, to match the name change
9) fixed other cases where "dlb2" was being used externally

Timothy McDaniel (27):
  event/dlb2: add v2.5 probe
  event/dlb2: add v2.5 HW init
  event/dlb2: add v2.5 get_resources
  event/dlb2: add v2.5 create sched domain
  event/dlb2: add v2.5 domain reset
  event/dlb2: add V2.5 create ldb queue
  event/dlb2: add v2.5 create ldb port
  event/dlb2: add v2.5 create dir port
  event/dlb2: add v2.5 create dir queue
  event/dlb2: add v2.5 map qid
  event/dlb2: add v2.5 unmap queue
  event/dlb2: add v2.5 start domain
  event/dlb2: add v2.5 credit scheme
  event/dlb2: add v2.5 queue depth functions
  event/dlb2: add v2.5 finish map/unmap
  event/dlb2: add v2.5 sparse cq mode
  event/dlb2: add v2.5 sequence number management
  event/dlb2: consolidate resource header files into one file
  event/dlb2: delete old dlb2_resource.c file
  event/dlb2: move dlb_resource_new.c to dlb_resource.c
  event/dlb2: remove temporary file, dlb_hw_types.h
  event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
  event/dlb2: delete old register map file, dlb2_regs.h
  event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
  event/dlb2: update xstats for v2.5
  doc/dlb2: update documentation for v2.5
  event/dlb2: Change device name to dlb_event

 MAINTAINERS                                   |    6 +-
 app/test/test_eventdev.c                      |    6 +-
 config/rte_config.h                           |   11 +-
 doc/api/doxy-api-index.md                     |    2 +-
 doc/api/doxy-api.conf.in                      |    2 +-
 doc/guides/eventdevs/dlb.rst                  |  390 ++
 doc/guides/eventdevs/dlb2.rst                 |   75 +-
 doc/guides/eventdevs/index.rst                |    2 +-
 doc/guides/rel_notes/release_21_05.rst        |    5 +
 drivers/event/{dlb2 => dlb}/dlb2.c            |  455 +-
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |    0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |    0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |    0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |    0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  163 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |    8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |   27 +-
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |   70 +-
 drivers/event/{dlb2 => dlb}/meson.build       |    4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  102 +-
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |    3 +
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |    0
 drivers/event/dlb/pf/base/dlb2_regs.h         | 4412 +++++++++++++++++
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     | 3278 ++++++------
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |   28 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |   37 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |    0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |   62 +-
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |    6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      |   12 +-
 drivers/event/{dlb2 => dlb}/version.map       |    2 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h        |  596 ---
 drivers/event/dlb2/pf/base/dlb2_regs.h        | 2527 ----------
 drivers/event/meson.build                     |    2 +-
 36 files changed, 7270 insertions(+), 5023 deletions(-)
 create mode 100644 doc/guides/eventdevs/dlb.rst
 rename drivers/event/{dlb2 => dlb}/dlb2.c (90%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (79%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (97%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (94%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (81%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 create mode 100644 drivers/event/dlb/pf/base/dlb2_regs.h
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (68%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (95%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (92%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h

-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 01/27] event/dlb2: add v2.5 probe
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init Timothy McDaniel
                       ` (26 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This commit adds dlb v2.5 probe support, and updates
parameter parsing.

The dlb v2.5 device differs from dlb v2, in that the
number of resources (ports, queues, ...) is different,
so macros have been added to take the device version
into account.

This commit also cleans up a few issues in the original
dlb2 source:
- eliminate duplicate constant definitions
- remove unused constant definitions
- remove #ifdef FPGA
- remove unused include file, dlb2_mbox.h

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                  |  99 +++-
 drivers/event/dlb2/dlb2_priv.h             | 151 ++++--
 drivers/event/dlb2/dlb2_xstats.c           |  37 +-
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  68 +--
 drivers/event/dlb2/pf/base/dlb2_mbox.h     | 596 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |  48 +-
 drivers/event/dlb2/pf/dlb2_pf.c            |  62 ++-
 7 files changed, 318 insertions(+), 743 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index fb5ff012a..7f5b9141b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -59,7 +59,8 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
-	.max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
+	.max_single_link_event_port_queue_pairs =
+		DLB2_MAX_NUM_DIR_PORTS(DLB2_HW_V2),
 	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
 			  RTE_EVENT_DEV_CAP_EVENT_QOS |
 			  RTE_EVENT_DEV_CAP_BURST_MODE |
@@ -69,7 +70,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 };
 
 struct process_local_port_data
-dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
+dlb2_port[DLB2_MAX_NUM_PORTS_ALL][DLB2_NUM_PORT_TYPES];
 
 static void
 dlb2_free_qe_mem(struct dlb2_port *qm_port)
@@ -97,7 +98,7 @@ dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
 {
 	int q;
 
-	for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
+	for (q = 0; q < DLB2_MAX_NUM_QUEUES(dlb2->version); q++) {
 		if (qid_depth_thresholds[q] != 0)
 			dlb2->ev_queues[q].depth_threshold =
 				qid_depth_thresholds[q];
@@ -247,9 +248,9 @@ set_num_dir_credits(const char *key __rte_unused,
 		return ret;
 
 	if (*num_dir_credits < 0 ||
-	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
+	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
 		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
-			     DLB2_MAX_NUM_DIR_CREDITS);
+			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
 
@@ -306,7 +307,6 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
-
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -327,7 +327,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	 */
 	if (sscanf(value, "all:%d", &thresh) == 1) {
 		first = 0;
-		last = DLB2_MAX_NUM_QUEUES - 1;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2) - 1;
 	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
 		/* we have everything we need */
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
@@ -337,7 +337,56 @@ set_qid_depth_thresh(const char *key __rte_unused,
 		return -EINVAL;
 	}
 
-	if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		return -EINVAL;
+	}
+
+	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
+		return -EINVAL;
+	}
+
+	for (i = first; i <= last; i++)
+		qid_thresh->val[i] = thresh; /* indexed by qid */
+
+	return 0;
+}
+
+static int
+set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
+	int first, last, thresh, i;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	/* command line override may take one of the following 3 forms:
+	 * qid_depth_thresh=all:<threshold_value> ... all queues
+	 * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
+	 * qid_depth_thresh=qid:<threshold_value> ... just one queue
+	 */
+	if (sscanf(value, "all:%d", &thresh) == 1) {
+		first = 0;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) - 1;
+	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
+		/* we have everything we need */
+	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
+		last = first;
+	} else {
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		return -EINVAL;
+	}
+
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
 		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
 		return -EINVAL;
 	}
@@ -521,7 +570,7 @@ dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
 	for (i = 0; i < dlb2->num_queues; i++)
 		dlb2->ev_queues[i].qm_queue.config_state = config_state;
 
-	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
+	for (i = 0; i < DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5); i++)
 		dlb2->ev_queues[i].setup_done = false;
 
 	dlb2->num_ports = 0;
@@ -1453,7 +1502,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 
 	dlb2 = dlb2_pmd_priv(dev);
 
-	if (ev_port_id >= DLB2_MAX_NUM_PORTS)
+	if (ev_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 		return -EINVAL;
 
 	if (port_conf->dequeue_depth >
@@ -3895,7 +3944,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	}
 
 	/* Initialize each port's token pop mode */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++)
 		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
 
 	rte_spinlock_init(&dlb2->qm_instance.resource_lock);
@@ -3945,7 +3994,8 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
 int
 dlb2_parse_params(const char *params,
 		  const char *name,
-		  struct dlb2_devargs *dlb2_args)
+		  struct dlb2_devargs *dlb2_args,
+		  uint8_t version)
 {
 	int ret = 0;
 	static const char * const args[] = { NUMA_NODE_ARG,
@@ -3984,17 +4034,18 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(kvlist,
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(kvlist,
 					DLB2_NUM_DIR_CREDITS,
 					set_num_dir_credits,
 					&dlb2_args->num_dir_credits_override);
-			if (ret != 0) {
-				DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
-					     name);
-				rte_kvargs_free(kvlist);
-				return ret;
+				if (ret != 0) {
+					DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
+						     name);
+					rte_kvargs_free(kvlist);
+					return ret;
+				}
 			}
-
 			ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
 						 set_dev_id,
 						 &dlb2_args->dev_id);
@@ -4005,11 +4056,19 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(
 					kvlist,
 					DLB2_QID_DEPTH_THRESH_ARG,
 					set_qid_depth_thresh,
 					&dlb2_args->qid_depth_thresholds);
+			} else {
+				ret = rte_kvargs_process(
+					kvlist,
+					DLB2_QID_DEPTH_THRESH_ARG,
+					set_qid_depth_thresh_v2_5,
+					&dlb2_args->qid_depth_thresholds);
+			}
 			if (ret != 0) {
 				DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
 					     name);
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index eb1a93239..1cd78ad94 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -33,19 +33,31 @@
 
 /* Begin HW related defines and structs */
 
+#define DLB2_HW_V2 0
+#define DLB2_HW_V2_5 1
 #define DLB2_MAX_NUM_DOMAINS 32
 #define DLB2_MAX_NUM_VFS 16
 #define DLB2_MAX_NUM_LDB_QUEUES 32
 #define DLB2_MAX_NUM_LDB_PORTS 64
-#define DLB2_MAX_NUM_DIR_PORTS 64
-#define DLB2_MAX_NUM_DIR_QUEUES 64
+#define DLB2_MAX_NUM_DIR_PORTS_V2		DLB2_MAX_NUM_DIR_QUEUES_V2
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5		DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_DIR_PORTS(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_PORTS_V2 : \
+						 DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_MAX_NUM_DIR_QUEUES_V2		64 /* DIR == directed */
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5		96
+/* When needed for array sizing, the DLB 2.5 macro is used */
+#define DLB2_MAX_NUM_DIR_QUEUES(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2 : \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2_5)
 #define DLB2_MAX_NUM_FLOWS (64 * 1024)
 #define DLB2_MAX_NUM_LDB_CREDITS (8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS (2 * 1024)
+#define DLB2_MAX_NUM_DIR_CREDITS(ver)		(ver == DLB2_HW_V2 ? 4096 : 0)
+#define DLB2_MAX_NUM_CREDITS(ver)		(ver == DLB2_HW_V2 ? \
+						 0 : DLB2_MAX_NUM_LDB_CREDITS)
 #define DLB2_MAX_NUM_LDB_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_DIR_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_HIST_LIST_ENTRIES 2048
-#define DLB2_MAX_NUM_AQOS_ENTRIES 2048
 #define DLB2_MAX_NUM_QIDS_PER_LDB_CQ 8
 #define DLB2_QID_PRIORITIES 8
 #define DLB2_MAX_DEVICE_PATH 32
@@ -68,6 +80,11 @@
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_MAX_CQ_DEPTH
 
+#define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
+	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) ||        \
+	  (_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_VF))   ?   \
+		DLB2_HW_V2_5 : DLB2_HW_V2)
+
 /*
  * Static per queue/port provisioning values
  */
@@ -109,6 +126,8 @@ enum dlb2_hw_queue_types {
 	DLB2_NUM_QUEUE_TYPES /* Must be last */
 };
 
+#define DLB2_COMBINED_POOL DLB2_LDB_QUEUE
+
 #define PORT_TYPE(p) ((p)->is_directed ? DLB2_DIR_PORT : DLB2_LDB_PORT)
 
 /* Do not change - must match hardware! */
@@ -127,8 +146,15 @@ struct dlb2_hw_rsrcs {
 	uint32_t num_ldb_queues;	/* Number of available ldb queues */
 	uint32_t num_ldb_ports;         /* Number of load balanced ports */
 	uint32_t num_dir_ports;         /* Number of directed ports */
-	uint32_t num_ldb_credits;       /* Number of load balanced credits */
-	uint32_t num_dir_credits;       /* Number of directed credits */
+	union {
+		struct {
+			uint32_t num_ldb_credits; /* Number of ldb credits */
+			uint32_t num_dir_credits; /* Number of dir credits */
+		};
+		struct {
+			uint32_t num_credits; /* Number of combined credits */
+		};
+	};
 	uint32_t reorder_window_size;   /* Size of reorder window */
 };
 
@@ -292,9 +318,17 @@ struct dlb2_port {
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
-	uint16_t cached_ldb_credits;
-	uint16_t ldb_credits;
-	uint16_t cached_dir_credits;
+	union {
+		struct {
+			uint16_t cached_ldb_credits;
+			uint16_t ldb_credits;
+			uint16_t cached_dir_credits;
+		};
+		struct {
+			uint16_t cached_credits;
+			uint16_t credits;
+		};
+	};
 	bool int_armed;
 	uint16_t owed_tokens;
 	int16_t issued_releases;
@@ -325,11 +359,22 @@ struct process_local_port_data {
 
 struct dlb2_eventdev;
 
+struct dlb2_port_low_level_io_functions {
+	void (*pp_enqueue_four)(void *qe4, void *pp_addr);
+};
+
 struct dlb2_config {
 	int configured;
 	int reserved;
-	uint32_t num_ldb_credits;
-	uint32_t num_dir_credits;
+	union {
+		struct {
+			uint32_t num_ldb_credits;
+			uint32_t num_dir_credits;
+		};
+		struct {
+			uint32_t num_credits;
+		};
+	};
 	struct dlb2_create_sched_domain_args resources;
 };
 
@@ -354,10 +399,18 @@ struct dlb2_hw_dev {
 
 /* Begin DLB2 PMD Eventdev related defines and structs */
 
-#define DLB2_MAX_NUM_QUEUES \
-	(DLB2_MAX_NUM_DIR_QUEUES + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_QUEUES(ver)                                \
+	(DLB2_MAX_NUM_DIR_QUEUES(ver) + DLB2_MAX_NUM_LDB_QUEUES)
 
-#define DLB2_MAX_NUM_PORTS (DLB2_MAX_NUM_DIR_PORTS + DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_MAX_NUM_PORTS(ver) \
+	(DLB2_MAX_NUM_DIR_PORTS(ver) + DLB2_MAX_NUM_LDB_PORTS)
+
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5 DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_QUEUES_ALL \
+	(DLB2_MAX_NUM_DIR_QUEUES_V2_5 + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_PORTS_ALL \
+	(DLB2_MAX_NUM_DIR_PORTS_V2_5 + DLB2_MAX_NUM_LDB_PORTS)
 #define DLB2_MAX_INPUT_QUEUE_DEPTH 256
 
 /** Structure to hold the queue to port link establishment attributes */
@@ -377,8 +430,15 @@ struct dlb2_traffic_stats {
 	uint64_t tx_ok;
 	uint64_t total_polls;
 	uint64_t zero_polls;
-	uint64_t tx_nospc_ldb_hw_credits;
-	uint64_t tx_nospc_dir_hw_credits;
+	union {
+		struct {
+			uint64_t tx_nospc_ldb_hw_credits;
+			uint64_t tx_nospc_dir_hw_credits;
+		};
+		struct {
+			uint64_t tx_nospc_hw_credits;
+		};
+	};
 	uint64_t tx_nospc_inflight_max;
 	uint64_t tx_nospc_new_event_limit;
 	uint64_t tx_nospc_inflight_credits;
@@ -411,7 +471,7 @@ struct dlb2_port_stats {
 	uint64_t tx_invalid;
 	uint64_t rx_sched_cnt[DLB2_NUM_HW_SCHED_TYPES];
 	uint64_t rx_sched_invalid;
-	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_eventdev_port {
@@ -462,16 +522,16 @@ enum dlb2_run_state {
 };
 
 struct dlb2_eventdev {
-	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS];
-	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS_ALL];
+	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each queue */
-	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES];
-	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES];
+	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES_ALL];
+	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each port */
-	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS];
-	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS];
+	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS_ALL];
+	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS_ALL];
 	struct dlb2_get_num_resources_args hw_rsrc_query_results;
 	uint32_t xstats_count_mode_queue;
 	struct dlb2_hw_dev qm_instance; /* strictly hw related */
@@ -487,8 +547,15 @@ struct dlb2_eventdev {
 	int num_dir_credits_override;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
-	uint16_t num_dir_credits;
-	uint16_t num_ldb_credits;
+	union {
+		struct {
+			uint16_t num_dir_credits;
+			uint16_t num_ldb_credits;
+		};
+		struct {
+			uint16_t num_credits;
+		};
+	};
 	uint16_t num_queues; /* total queues */
 	uint16_t num_ldb_queues; /* total num of evdev ldb queues requested */
 	uint16_t num_ports; /* total num of evdev ports requested */
@@ -499,21 +566,28 @@ struct dlb2_eventdev {
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
 	uint8_t revision;
+	uint8_t version;
 	bool configured;
-	uint16_t max_ldb_credits;
-	uint16_t max_dir_credits;
-
-	/* force hw credit pool counters into exclusive cache lines */
-
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t ldb_credit_pool __rte_cache_aligned;
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t dir_credit_pool __rte_cache_aligned;
+	union {
+		struct {
+			uint16_t max_ldb_credits;
+			uint16_t max_dir_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t ldb_credit_pool __rte_cache_aligned;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t dir_credit_pool __rte_cache_aligned;
+		};
+		struct {
+			uint16_t max_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t credit_pool __rte_cache_aligned;
+		};
+	};
 };
 
 /* used for collecting and passing around the dev args */
 struct dlb2_qid_depth_thresholds {
-	int val[DLB2_MAX_NUM_QUEUES];
+	int val[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_devargs {
@@ -568,7 +642,8 @@ uint32_t dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
 
 int dlb2_parse_params(const char *params,
 		      const char *name,
-		      struct dlb2_devargs *dlb2_args);
+		      struct dlb2_devargs *dlb2_args,
+		      uint8_t version);
 
 /* Extern globals */
 extern struct process_local_port_data dlb2_port[][DLB2_NUM_PORT_TYPES];
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda9..b62e62060 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -95,7 +95,7 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 	int i;
 	uint64_t val = 0;
 
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
 
 		if (!port->setup_done)
@@ -269,7 +269,7 @@ dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
 	int port = 0;
 	uint64_t tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
 
 	return tally;
@@ -281,7 +281,7 @@ dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
 	int port = 0;
 	uint64_t enq_ok_tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
 
 	return enq_ok_tally;
@@ -561,8 +561,8 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	/* other vars */
 	const unsigned int count = RTE_DIM(dev_stats) +
-			DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
-			DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
+		DLB2_MAX_NUM_PORTS(dlb2->version) * RTE_DIM(port_stats) +
+		DLB2_MAX_NUM_QUEUES(dlb2->version) * RTE_DIM(qid_stats);
 	unsigned int i, port, qid, stat_id = 0;
 
 	dlb2->xstats = rte_zmalloc_socket(NULL,
@@ -583,7 +583,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	}
 	dlb2->xstats_count_mode_dev = stat_id;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++) {
 		dlb2->xstats_offset_for_port[port] = stat_id;
 
 		uint32_t count_offset = stat_id;
@@ -605,7 +605,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
 
-	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
+	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES(dlb2->version); qid++) {
 		uint32_t count_offset = stat_id;
 
 		dlb2->xstats_offset_for_qid[qid] = stat_id;
@@ -658,16 +658,15 @@ dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			break;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version) &&
+		    (DLB2_MAX_NUM_QUEUES(dlb2->version) <= 255))
 			break;
-#endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
 		break;
@@ -709,13 +708,13 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			goto invalid_value;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+#if (DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) <= 255) /* max 8 bit value */
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version))
 			goto invalid_value;
 #endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
@@ -936,12 +935,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_PORTS) {
+		} else if (queue_port_id < DLB2_MAX_NUM_PORTS(dlb2->version)) {
 			if (dlb2_xstats_reset_port(dlb2, queue_port_id,
 						   ids, nb_ids))
 				return -EINVAL;
@@ -949,12 +949,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES) {
+		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES(dlb2->version)) {
 			if (dlb2_xstats_reset_queue(dlb2, queue_port_id,
 						    ids, nb_ids))
 				return -EINVAL;
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 1d99f1e01..b007e1674 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -5,54 +5,31 @@
 #ifndef __DLB2_HW_TYPES_H
 #define __DLB2_HW_TYPES_H
 
+#include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_DOMAINS			32
-#define DLB2_MAX_NUM_LDB_QUEUES			32 /* LDB == load-balanced */
-#define DLB2_MAX_NUM_DIR_QUEUES			64 /* DIR == directed */
-#define DLB2_MAX_NUM_LDB_PORTS			64
-#define DLB2_MAX_NUM_DIR_PORTS			64
-#define DLB2_MAX_NUM_LDB_CREDITS		(8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS		(2 * 1024)
-#define DLB2_MAX_NUM_HIST_LIST_ENTRIES		2048
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ		8
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_QID_PRIORITIES			8
 #define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-#ifdef FPGA
-#define DLB2_HZ					2000000
-#else
-#define DLB2_HZ					800000000
-#endif
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
 
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
-/* Interrupt related macros */
-#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
-#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
-#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
-#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
-	DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
-#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
-	DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
-
-/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
-#define DLB2_INT_NON_CQ 0
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
 
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
@@ -65,18 +42,6 @@
 #define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
 #define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
 
-#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
-#define DLB2_VF_BASE_CQ_VECTOR_ID	     0
-#define DLB2_VF_LAST_CQ_VECTOR_ID	     30
-#define DLB2_VF_MBOX_VECTOR_ID		     31
-#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
-
-#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
-					     DLB2_MAX_NUM_DIR_PORTS + 1)
-
 /*
  * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
  * the PF driver.
@@ -97,7 +62,8 @@
 #define DLB2_DIR_PP_BASE       0x2000000
 #define DLB2_DIR_PP_STRIDE     0x1000
 #define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
 #define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
 
 struct dlb2_resource_id {
@@ -225,7 +191,7 @@ struct dlb2_sn_group {
 
 static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 {
-	u32 mask[] = {
+	const u32 mask[] = {
 		0x0000ffff,  /* 64 SNs per queue */
 		0x000000ff,  /* 128 SNs per queue */
 		0x0000000f,  /* 256 SNs per queue */
@@ -237,7 +203,7 @@ static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 
 static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
 {
-	u32 bound[6] = {16, 8, 4, 2, 1};
+	const u32 bound[] = {16, 8, 4, 2, 1};
 	u32 i;
 
 	for (i = 0; i < bound[group->mode]; i++) {
@@ -327,7 +293,7 @@ struct dlb2_function_resources {
 struct dlb2_hw_resources {
 	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
 	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
 	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
 };
 
@@ -344,11 +310,13 @@ struct dlb2_sw_mbox {
 };
 
 struct dlb2_hw {
+	uint8_t ver;
+
 	/* BAR 0 address */
-	void  *csr_kva;
+	void *csr_kva;
 	unsigned long csr_phys_addr;
 	/* BAR 2 address */
-	void  *func_kva;
+	void *func_kva;
 	unsigned long func_phys_addr;
 
 	/* Resource tracking */
diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
deleted file mode 100644
index ce462c089..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_mbox.h
+++ /dev/null
@@ -1,596 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_BASE_DLB2_MBOX_H
-#define __DLB2_BASE_DLB2_MBOX_H
-
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-
-#define DLB2_MBOX_INTERFACE_VERSION 1
-
-/*
- * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
- * to send requests of its own (e.g. notifying a VF of an impending FLR).
- * To avoid communication race conditions, e.g. the PF sends a response and then
- * sends a request before the VF reads the response, the PF->VF mailbox is
- * divided into two sections:
- * - Bytes 0-47: PF responses
- * - Bytes 48-63: PF requests
- *
- * Partitioning the PF->VF mailbox allows responses and requests to occupy the
- * mailbox simultaneously.
- */
-#define DLB2_PF2VF_RESP_BYTES	  48
-#define DLB2_PF2VF_RESP_BASE	  0
-#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
-
-#define DLB2_PF2VF_REQ_BYTES	  16
-#define DLB2_PF2VF_REQ_BASE	  (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
-#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
-
-/*
- * Similarly, the VF->PF mailbox is divided into two sections:
- * - Bytes 0-239: VF requests
- * -- (Bytes 0-3 are unused due to a hardware errata)
- * - Bytes 240-255: VF responses
- */
-#define DLB2_VF2PF_REQ_BYTES	 236
-#define DLB2_VF2PF_REQ_BASE	 4
-#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
-
-#define DLB2_VF2PF_RESP_BYTES	  16
-#define DLB2_VF2PF_RESP_BASE	  (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
-#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
-
-/* VF-initiated commands */
-enum dlb2_mbox_cmd_type {
-	DLB2_MBOX_CMD_REGISTER,
-	DLB2_MBOX_CMD_UNREGISTER,
-	DLB2_MBOX_CMD_GET_NUM_RESOURCES,
-	DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
-	DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
-	DLB2_MBOX_CMD_CREATE_LDB_PORT,
-	DLB2_MBOX_CMD_CREATE_DIR_PORT,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT,
-	DLB2_MBOX_CMD_DISABLE_LDB_PORT,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT,
-	DLB2_MBOX_CMD_DISABLE_DIR_PORT,
-	DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_MAP_QID,
-	DLB2_MBOX_CMD_UNMAP_QID,
-	DLB2_MBOX_CMD_START_DOMAIN,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
-	DLB2_MBOX_CMD_ARM_CQ_INTR,
-	DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
-	DLB2_MBOX_CMD_GET_SN_ALLOCATION,
-	DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
-	DLB2_MBOX_CMD_GET_COS_BW,
-	DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
-	DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
-
-	/* NUM_QE_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_CMD_TYPES,
-};
-
-static const char dlb2_mbox_cmd_type_strings[][128] = {
-	"DLB2_MBOX_CMD_REGISTER",
-	"DLB2_MBOX_CMD_UNREGISTER",
-	"DLB2_MBOX_CMD_GET_NUM_RESOURCES",
-	"DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_LDB_PORT",
-	"DLB2_MBOX_CMD_CREATE_DIR_PORT",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_DISABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_DISABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_MAP_QID",
-	"DLB2_MBOX_CMD_UNMAP_QID",
-	"DLB2_MBOX_CMD_START_DOMAIN",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
-	"DLB2_MBOX_CMD_ARM_CQ_INTR",
-	"DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
-	"DLB2_MBOX_CMD_GET_SN_ALLOCATION",
-	"DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
-	"DLB2_MBOX_CMD_GET_COS_BW",
-	"DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
-	"DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
-};
-
-/* PF-initiated commands */
-enum dlb2_mbox_vf_cmd_type {
-	DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
-	DLB2_MBOX_VF_CMD_NOTIFICATION,
-	DLB2_MBOX_VF_CMD_IN_USE,
-
-	/* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_VF_CMD_TYPES,
-};
-
-static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
-	"DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
-	"DLB2_MBOX_VF_CMD_NOTIFICATION",
-	"DLB2_MBOX_VF_CMD_IN_USE",
-};
-
-#define DLB2_MBOX_CMD_TYPE(hdr) \
-	(((struct dlb2_mbox_req_hdr *)hdr)->type)
-#define DLB2_MBOX_CMD_STRING(hdr) \
-	dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
-
-enum dlb2_mbox_status_type {
-	DLB2_MBOX_ST_SUCCESS,
-	DLB2_MBOX_ST_INVALID_CMD_TYPE,
-	DLB2_MBOX_ST_VERSION_MISMATCH,
-	DLB2_MBOX_ST_INVALID_OWNER_VF,
-};
-
-static const char dlb2_mbox_status_type_strings[][128] = {
-	"DLB2_MBOX_ST_SUCCESS",
-	"DLB2_MBOX_ST_INVALID_CMD_TYPE",
-	"DLB2_MBOX_ST_VERSION_MISMATCH",
-	"DLB2_MBOX_ST_INVALID_OWNER_VF",
-};
-
-#define DLB2_MBOX_ST_TYPE(hdr) \
-	(((struct dlb2_mbox_resp_hdr *)hdr)->status)
-#define DLB2_MBOX_ST_STRING(hdr) \
-	dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
-
-/* This structure is always the first field in a request structure */
-struct dlb2_mbox_req_hdr {
-	u32 type;
-};
-
-/* This structure is always the first field in a response structure */
-struct dlb2_mbox_resp_hdr {
-	u32 status;
-};
-
-struct dlb2_mbox_register_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 min_interface_version;
-	u16 max_interface_version;
-};
-
-struct dlb2_mbox_register_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 interface_version;
-	u8 pf_id;
-	u8 vf_id;
-	u8 is_auxiliary_vf;
-	u8 primary_vf_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u16 num_sched_domains;
-	u16 num_ldb_queues;
-	u16 num_ldb_ports;
-	u16 num_cos_ldb_ports[4];
-	u16 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 max_contiguous_hist_list_entries;
-	u16 num_ldb_credits;
-	u16 num_dir_credits;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 num_ldb_queues;
-	u32 num_ldb_ports;
-	u32 num_cos_ldb_ports[4];
-	u32 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u8 cos_strict;
-	u8 padding0[3];
-	u32 padding1;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 num_sequence_numbers;
-	u32 num_qid_inflights;
-	u32 num_atomic_inflights;
-	u32 lock_id_comp_level;
-	u32 depth_threshold;
-	u32 padding;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 depth_threshold;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u16 cq_depth;
-	u16 cq_history_list_size;
-	u8 cos_id;
-	u8 cos_strict;
-	u16 padding1;
-	u64 cq_base_address;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u64 cq_base_address;
-	u16 cq_depth;
-	u16 padding0;
-	s32 queue_id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_map_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-	u32 priority;
-	u32 padding0;
-};
-
-struct dlb2_mbox_map_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_start_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-};
-
-struct dlb2_mbox_start_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 is_ldb;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding0;
-};
-
-/*
- * The alert_id and aux_alert_data follows the format of the alerts defined in
- * dlb2_types.h. The alert id contains an enum dlb2_domain_alert_id value, and
- * the aux_alert_data value varies depending on the alert.
- */
-struct dlb2_mbox_vf_alert_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 alert_id;
-	u32 aux_alert_data;
-};
-
-enum dlb2_mbox_vf_notification_type {
-	DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
-	DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
-
-	/* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
-	NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
-};
-
-struct dlb2_mbox_vf_notification_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 notification;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 in_use;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 num;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 cos_id;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 mode;
-};
-
-#endif /* __DLB2_BASE_DLB2_MBOX_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ae5ef2fc3..1cb0b9f50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -5,7 +5,6 @@
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
@@ -212,7 +211,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 			      &port->func_list);
 	}
 
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
 		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
 
@@ -220,7 +219,9 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 	}
 
 	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
+	hw->pf.num_avail_dqed_entries =
+		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+
 	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
 
 	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
@@ -259,7 +260,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
 	}
 
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
 		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
 		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
 	}
@@ -2373,7 +2374,7 @@ static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
 	}
@@ -2506,7 +2507,8 @@ static void
 dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS;
+	int domain_offset = domain->id.phys_id *
+		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	struct dlb2_list_entry *iter;
 	struct dlb2_dir_pq_pair *queue;
 	RTE_SET_USED(iter);
@@ -2522,7 +2524,8 @@ dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
 
 		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS +
+			idx = queue->id.vdev_id *
+				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 				queue->id.virt_id;
 
 			DLB2_CSR_WR(hw,
@@ -2961,7 +2964,8 @@ __dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
+			+ virt_id;
 
 		DLB2_CSR_WR(hw,
 			    DLB2_SYS_VF_DIR_VPP2PP(offs),
@@ -4484,7 +4488,8 @@ dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 }
 
 static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(u32 id,
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
 			    bool vdev_req,
 			    struct dlb2_hw_domain *domain)
 {
@@ -4492,7 +4497,7 @@ dlb2_get_domain_used_dir_pq(u32 id,
 	struct dlb2_dir_pq_pair *port;
 	RTE_SET_USED(iter);
 
-	if (id >= DLB2_MAX_NUM_DIR_PORTS)
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
 		return NULL;
 
 	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
@@ -4538,7 +4543,8 @@ dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
 	if (args->queue_id != -1) {
 		struct dlb2_dir_pq_pair *queue;
 
-		queue = dlb2_get_domain_used_dir_pq(args->queue_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->queue_id,
 						    vdev_req,
 						    domain);
 
@@ -4618,7 +4624,7 @@ static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
 
 		r1.field.pp = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
 
@@ -4857,7 +4863,8 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
 
 	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(args->queue_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->queue_id,
 						   vdev_req,
 						   domain);
 	else
@@ -4913,7 +4920,7 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 	/* QID write permissions are turned on when the domain is started */
 	r0.field.vasqid_v = 0;
 
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES +
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
 		queue->id.phys_id;
 
 	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -4935,7 +4942,8 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
 		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES + queue->id.virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
+			+ queue->id.virt_id;
 
 		r3.field.vqid_v = 1;
 
@@ -5001,7 +5009,8 @@ dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
 	if (args->port_id != -1) {
 		struct dlb2_dir_pq_pair *port;
 
-		port = dlb2_get_domain_used_dir_pq(args->port_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->port_id,
 						   vdev_req,
 						   domain);
 
@@ -5072,7 +5081,8 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	}
 
 	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(args->port_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->port_id,
 						    vdev_req,
 						    domain);
 	else
@@ -5920,7 +5930,7 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 		r0.field.vasqid_v = 1;
 
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS +
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 			dir_queue->id.phys_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -5972,7 +5982,7 @@ int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
 
 	id = args->queue_id;
 
-	queue = dlb2_get_domain_used_dir_pq(id, vdev_req, domain);
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
 	if (queue == NULL) {
 		resp->status = DLB2_ST_INVALID_QID;
 		return -EINVAL;
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index cfb22efe8..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -47,7 +47,7 @@ dlb2_pf_low_level_io_init(void)
 {
 	int i;
 	/* Addresses will be initialized at port create */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(DLB2_HW_V2_5); i++) {
 		/* First directed ports */
 		dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
 		dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
@@ -628,6 +628,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 
 		/* Probe the DLB2 PF layer */
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
@@ -643,7 +644,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		if (pci_dev->device.devargs) {
 			ret = dlb2_parse_params(pci_dev->device.devargs->args,
 						pci_dev->device.devargs->name,
-						&dlb2_args);
+						&dlb2_args,
+						dlb2->version);
 			if (ret) {
 				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
 					     ret, rte_errno);
@@ -655,6 +657,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						  event_dlb2_pf_name,
 						  &dlb2_args);
 	} else {
+		dlb2 = dlb2_pmd_priv(eventdev);
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 		ret = dlb2_secondary_eventdev_probe(eventdev,
 						    event_dlb2_pf_name);
 	}
@@ -684,6 +688,16 @@ static const struct rte_pci_id pci_id_dlb2_map[] = {
 	},
 };
 
+static const struct rte_pci_id pci_id_dlb2_5_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
+			       PCI_DEVICE_ID_INTEL_DLB2_5_PF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
 static int
 event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 		     struct rte_pci_device *pci_dev)
@@ -718,6 +732,40 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
 
 }
 
+static int
+event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
+					    sizeof(struct dlb2_eventdev),
+					    dlb2_eventdev_pci_init,
+					    event_dlb2_pf_name);
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+}
+
+static int
+event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_remove(pci_dev, NULL);
+
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+
+}
+
 static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.id_table = pci_id_dlb2_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
@@ -725,5 +773,15 @@ static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.remove = event_dlb2_pci_remove,
 };
 
+static struct rte_pci_driver pci_eventdev_dlb2_5_pmd = {
+	.id_table = pci_id_dlb2_5_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = event_dlb2_5_pci_probe,
+	.remove = event_dlb2_5_pci_remove,
+};
+
 RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
+
+RTE_PMD_REGISTER_PCI(event_dlb2_5_pf, pci_eventdev_dlb2_5_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_5_pf, pci_id_dlb2_5_map);
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 01/27] event/dlb2: add v2.5 probe Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-03 10:18       ` Jerin Jacob
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 03/27] event/dlb2: add v2.5 get_resources Timothy McDaniel
                       ` (25 subsequent siblings)
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This commit adds support for DLB v2.5 probe-time hardware init
and sets up a framework for incorporating the remaining
changes required to support DLB v2.5.

DLB v2.0 and DLB v2.5 are similar in many respects, but their
register offsets and definitions are different. As a result of these
differences, the low-level hardware functions must take the device
version into consideration. This requires that the hardware version be
passed to many of the low-level functions so that the PMD can
take the appropriate action based on the device version.
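
As a rough illustration of the pattern (a sketch only, not the driver's
actual code): resource limits that differ between the two devices become
function-like macros taking a hardware version, and the version recorded
at probe time selects the correct value at run time. The macro names
below mirror the ones added in this series; the struct and helper are
hypothetical stand-ins.

#include <stdint.h>

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

#define DLB2_MAX_NUM_DIR_PORTS_V2	64
#define DLB2_MAX_NUM_DIR_PORTS_V2_5	96
#define DLB2_MAX_NUM_DIR_PORTS(ver)	((ver) == DLB2_HW_V2 ? \
					 DLB2_MAX_NUM_DIR_PORTS_V2 : \
					 DLB2_MAX_NUM_DIR_PORTS_V2_5)

/* Stand-in for the real struct dlb2_hw; only the version field matters here. */
struct dlb2_hw_sketch {
	uint8_t ver; /* set once at probe from the PCI device ID */
};

/* Hypothetical low-level helper: the stored version picks the per-device
 * limit, so a single function body serves both DLB v2.0 and v2.5.
 */
static inline int
dlb2_example_dir_port_id_ok(const struct dlb2_hw_sketch *hw, unsigned int id)
{
	return id < DLB2_MAX_NUM_DIR_PORTS(hw->ver);
}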

To ease the transition and keep the individual patches small, three
temporary files are added in this commit. These files have "new"
in their names. The files with "new" contain changes specific to a
consolidated PMD that supports both DLB v2.0 and DLB v2.5. Their sister
files of the same name (minus "new") contain the old DLB v2.0-specific
code. The intent is to remove code from the original files as that code
is ported to the combined DLB v2.0/v2.5 PMD model and added to the "new"
files in a series of commits. At the end of the patch series, the old
files will be empty and the "new" files will have the logic needed
to implement a single PMD that supports both DLB v2.0 and DLB v2.5.
At that time, the original DLB v2.0-specific files will be deleted,
and the "new" files will be renamed to replace them.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_priv.h                |    5 +
 drivers/event/dlb2/meson.build                |    1 +
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |  362 ++
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |    4 +
 drivers/event/dlb2/pf/base/dlb2_regs_new.h    | 4412 +++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  180 +-
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   36 -
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  259 +
 .../event/dlb2/pf/base/dlb2_resource_new.h    |   73 +
 drivers/event/dlb2/pf/dlb2_main.c             |   41 +-
 drivers/event/dlb2/pf/dlb2_main.h             |    4 +
 drivers/event/dlb2/pf/dlb2_pf.c               |    6 +-
 12 files changed, 5153 insertions(+), 230 deletions(-)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 1cd78ad94..f3a9fe0aa 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -114,6 +114,11 @@
 #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
 #define DLB2_TO_EV_PRIO(x) ((x) << 5)
 
+enum dlb2_hw_ver {
+	DLB2_HW_VER_2,
+	DLB2_HW_VER_2_5,
+};
+
 enum dlb2_hw_port_types {
 	DLB2_LDB_PORT,
 	DLB2_DIR_PORT,
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index f22638b8e..bded07e06 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,6 +14,7 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
+		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
new file mode 100644
index 000000000..d58aa94ad
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -0,0 +1,362 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
+
+#include "../../dlb2_priv.h"
+#include "dlb2_user.h"
+
+#include "dlb2_osdep_list.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
+
+#define DLB2_MAX_NUM_VDEVS			16
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
+#define DLB2_MAX_WEIGHT				255
+#define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
+#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
+#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
+#ifdef FPGA
+#define DLB2_HZ					2000000
+#else
+#define DLB2_HZ					800000000
+#endif
+
+#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
+#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
+
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
+#define DLB2_ALARM_HW_SOURCE_SYS 0
+#define DLB2_ALARM_HW_SOURCE_DLB 1
+
+#define DLB2_ALARM_HW_UNIT_CHP 4
+
+#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
+#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
+#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
+#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
+#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
+
+/*
+ * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
+ * the PF driver.
+ */
+#define DLB2_DRV_LDB_PP_BASE   0x2300000
+#define DLB2_DRV_LDB_PP_STRIDE 0x1000
+#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
+				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_DRV_DIR_PP_BASE   0x2200000
+#define DLB2_DRV_DIR_PP_STRIDE 0x1000
+#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
+				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+#define DLB2_LDB_PP_BASE       0x2100000
+#define DLB2_LDB_PP_STRIDE     0x1000
+#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
+				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
+#define DLB2_DIR_PP_BASE       0x2000000
+#define DLB2_DIR_PP_STRIDE     0x1000
+#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
+
+struct dlb2_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vdev_owned;
+	u8 vdev_id;
+};
+
+struct dlb2_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
+{
+	return list->bound - list->base - list->offset;
+}
+
+struct dlb2_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 ts_flag:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb2_ldb_queue {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 num_qid_inflights;
+	u32 aqed_limit;
+	u32 sn_group; /* sn == sequence number */
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 sn_cfg_valid;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/*
+ * Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb2_dir_pq_pair {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+};
+
+enum dlb2_qid_map_state {
+	/* The slot does not contain a valid queue mapping */
+	DLB2_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB2_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB2_QUEUE_MAP_IN_PROG,
+	/* The driver is unmapping a queue from this slot */
+	DLB2_QUEUE_UNMAP_IN_PROG,
+	/*
+	 * The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
+};
+
+struct dlb2_ldb_port_qid_map {
+	enum dlb2_qid_map_state state;
+	u16 qid;
+	u16 pending_qid;
+	u8 priority;
+	u8 pending_priority;
+};
+
+struct dlb2_ldb_port {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb2_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
+{
+	const u32 mask[] = {
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
+{
+	const u32 bound[] = {16, 8, 4, 2, 1};
+	u32 i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void
+dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb2_hw_domain {
+	struct dlb2_function_resources *parent_func;
+	struct dlb2_list_entry func_list;
+	struct dlb2_list_head used_ldb_queues;
+	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head used_dir_pq_pairs;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u32 num_used_aqed_entries;
+	struct dlb2_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb2_bitmap;
+
+struct dlb2_function_resources {
+	struct dlb2_list_head avail_domains;
+	struct dlb2_list_head used_domains;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	struct dlb2_bitmap *avail_hist_list_entries;
+	u32 num_avail_domains;
+	u32 num_avail_ldb_queues;
+	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	u32 num_avail_dir_pq_pairs;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u8 locked; /* (VDEV only) */
+};
+
+/*
+ * After initialization, each resource in dlb2_hw_resources is located in one
+ * of the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a dlb2 scheduling domain.
+ * -- A VDEV's available resources list. These are VDEV-owned unconfigured
+ *	resources not allocated to a dlb2 scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VDEV or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb2_hw_resources {
+	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
+	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
+	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
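
To make the lifecycle above concrete, here is a sketch (not part of the diff) of the list movement when, for example, a domain-owned load-balanced port is configured; the dlb2_list_del()/dlb2_list_add() signatures are assumed from dlb2_osdep_list.h.

static void example_mark_ldb_port_configured(struct dlb2_hw_domain *domain,
					     struct dlb2_ldb_port *port,
					     int cos_id)
{
	/* Unconfigured -> configured: move from the "avail" to the "used" list. */
	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);

	port->configured = 1;

	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
}
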
+
+struct dlb2_mbox {
+	u32 *mbox;
+	u32 *isr_in_progress;
+};
+
+struct dlb2_sw_mbox {
+	struct dlb2_mbox vdev_to_pf;
+	struct dlb2_mbox pf_to_vdev;
+	void (*pf_to_vdev_inject)(void *arg);
+	void *pf_to_vdev_inject_arg;
+};
+
+struct dlb2_hw {
+	uint8_t ver;
+
+	/* BAR 0 address */
+	void *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb2_hw_resources rsrcs;
+	struct dlb2_function_resources pf;
+	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
+	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
+	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
+
+	/* Virtualization */
+	int virt_mode;
+	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
+	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
+};
+
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index aa101a49a..3b0ca84ba 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -16,7 +16,11 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
+
+/* TEMPORARY: include both the old and new resource headers until the
+ * migration to dlb2_resource_new.c is complete.
+ */
+#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
+
 #include "../../dlb2_log.h"
 #include "../../dlb2_user.h"
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
new file mode 100644
index 000000000..593243d63
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
@@ -0,0 +1,4412 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_REGS_NEW_H
+#define __DLB2_REGS_NEW_H
+
+#include "dlb2_osdep_types.h"
+
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
+
+#define DLB2_IOSF_SMON_COMP_MASK1(x) \
+	(0x8002024 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_IOSF_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_IOSF_SMON_COMP_MASK0(x) \
+	(0x8002020 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_IOSF_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_IOSF_SMON_MAX_TMR(x) \
+	(0x800201c + (x) * 0x40)
+#define DLB2_IOSF_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_IOSF_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_IOSF_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_IOSF_SMON_TMR(x) \
+	(0x8002018 + (x) * 0x40)
+#define DLB2_IOSF_SMON_TMR_RST 0x0
+
+#define DLB2_IOSF_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_IOSF_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1(x) \
+	(0x8002014 + (x) * 0x40)
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0(x) \
+	(0x8002010 + (x) * 0x40)
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_IOSF_SMON_COMPARE1(x) \
+	(0x800200c + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMPARE1_RST 0x0
+
+#define DLB2_IOSF_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_IOSF_SMON_COMPARE0(x) \
+	(0x8002008 + (x) * 0x40)
+#define DLB2_IOSF_SMON_COMPARE0_RST 0x0
+
+#define DLB2_IOSF_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_IOSF_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_IOSF_SMON_CFG1(x) \
+	(0x8002004 + (x) * 0x40)
+#define DLB2_IOSF_SMON_CFG1_RST 0x0
+
+#define DLB2_IOSF_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_IOSF_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_IOSF_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_IOSF_SMON_CFG1_MODE0_LOC	0
+#define DLB2_IOSF_SMON_CFG1_MODE1_LOC	8
+#define DLB2_IOSF_SMON_CFG1_RSVD_LOC		16
+
+#define DLB2_IOSF_SMON_CFG0(x) \
+	(0x8002000 + (x) * 0x40)
+#define DLB2_IOSF_SMON_CFG0_RST 0x40000000
+
+#define DLB2_IOSF_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_IOSF_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_IOSF_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_IOSF_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_IOSF_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_IOSF_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_IOSF_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_IOSF_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_IOSF_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_IOSF_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_IOSF_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_IOSF_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_IOSF_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_IOSF_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_IOSF_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_IOSF_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_IOSF_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_IOSF_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_IOSF_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_IOSF_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_IOSF_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_IOSF_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_IOSF_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_IOSF_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_IOSF_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_IOSF_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_IOSF_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
+	(0x20 + (x) * 0x4)
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
+#define DLB2_SYS_TOTAL_VAS_RST 0x20
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
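
The (ver) macros above, combined with the MASK/_LOC pairs used throughout this file, are meant to be used roughly as follows (illustrative only; DLB2_CSR_RD() is assumed to be the register read accessor from dlb2_osdep.h):

static u32 example_read_total_vas(struct dlb2_hw *hw)
{
	/*
	 * Resolve the address for this hardware version, then extract the
	 * field using its mask and bit offset.
	 */
	u32 val = DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));

	return (val & DLB2_SYS_TOTAL_VAS_TOTAL_VAS) >>
		DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC;
}
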
+
+#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
+#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
+
+#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
+#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
+
+#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
+#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
+
+#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
+#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
+#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
+
+#define DLB2_SYS_VF_LDB_VPP_V(x) \
+	(0x10000f00 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VPP2PP(x) \
+	(0x10000f04 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_DIR_VPP_V(x) \
+	(0x10000f08 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VPP2PP(x) \
+	(0x10000f0c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_LDB_VQID_V(x) \
+	(0x10000f10 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VQID2QID(x) \
+	(0x10000f14 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_QID2VQID(x) \
+	(0x10000f18 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID2VQID_RST 0x0
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
+
+#define DLB2_SYS_VF_DIR_VQID_V(x) \
+	(0x10000f1c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VQID2QID(x) \
+	(0x10000f20 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_VASQID_V(x) \
+	(0x10000f24 + (x) * 0x1000)
+#define DLB2_SYS_LDB_VASQID_V_RST 0x0
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_VASQID_V(x) \
+	(0x10000f28 + (x) * 0x1000)
+#define DLB2_SYS_DIR_VASQID_V_RST 0x0
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_ALARM_VF_SYND2(x) \
+	(0x10000f48 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
+
+#define DLB2_SYS_ALARM_VF_SYND1(x) \
+	(0x10000f44 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_VF_SYND0(x) \
+	(0x10000f40 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
+
+#define DLB2_SYS_LDB_QID_CFG_V(x) \
+	(0x10000f58 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_QID_ITS(x) \
+	(0x10000f54 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_ITS_RST 0x0
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_QID_V(x) \
+	(0x10000f50 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_ITS(x) \
+	(0x10000f64 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_ITS_RST 0x0
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_V(x) \
+	(0x10000f60 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_V_RST 0x0
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
+	(0x10000fa8 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
+#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_LDB_CQ_AT(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AT_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_CQ_ISR(x) \
+	(0x10000f98 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB2_CQ_ISR_MODE_DIS  0
+#define DLB2_CQ_ISR_MODE_MSI  1
+#define DLB2_CQ_ISR_MODE_MSIX 2
+#define DLB2_CQ_ISR_MODE_ADI  3
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
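
Writing such a register is the mirror image: each field is shifted to its _LOC and masked before the composed value is written back (illustrative only; DLB2_CSR_WR() is assumed to be the register write accessor from dlb2_osdep.h, and the function name is made up):

static void example_program_ldb_cq_isr(struct dlb2_hw *hw, int port_id,
				       u32 vector, u32 vf, u32 en_code)
{
	u32 reg = 0;

	reg |= (vector << DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC) &
	       DLB2_SYS_LDB_CQ_ISR_VECTOR;
	reg |= (vf << DLB2_SYS_LDB_CQ_ISR_VF_LOC) &
	       DLB2_SYS_LDB_CQ_ISR_VF;
	reg |= (en_code << DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC) &
	       DLB2_SYS_LDB_CQ_ISR_EN_CODE;

	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ISR(port_id), reg);
}

Here en_code would typically be one of the DLB2_CQ_ISR_MODE_* values defined above.
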
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
+	(0x10000f94 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_PP_V(x) \
+	(0x10000f90 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP_V_RST 0x0
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_PP2VDEV(x) \
+	(0x10000f8c + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_LDB_PP2VAS(x) \
+	(0x10000f88 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VAS_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
+	(0x10000f84 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
+	(0x10000f80 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_DIR_CQ_FMT(x) \
+	(0x10000fec + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
+	(0x10000fe8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
+#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_DIR_CQ_AT(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_DIR_CQ_ISR(x) \
+	(0x10000fd8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
+	(0x10000fd4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_DIR_PP_V(x) \
+	(0x10000fd0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP_V_RST 0x0
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_PP2VDEV(x) \
+	(0x10000fcc + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_DIR_PP2VAS(x) \
+	(0x10000fc8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VAS_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
+	(0x10000fc4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
+	(0x10000fc0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
+
+#define DLB2_SYS_MSIX_ACK 0x10000400
+#define DLB2_SYS_MSIX_ACK_RST 0x0
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
+#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_MODE 0x10000408
+#define DLB2_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB2_MSIX_MODE_PACKED     0
+#define DLB2_MSIX_MODE_COMPRESSED 1
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
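
Some registers keep the same address across hardware versions but change their field layout, so the version check moves into the field selection instead (illustrative only; DLB2_CSR_WR() assumed as above, function name made up):

static void example_enable_compressed_msix(struct dlb2_hw *hw)
{
	u32 reg = 0;

	if (hw->ver == DLB2_HW_V2)
		reg |= (DLB2_MSIX_MODE_COMPRESSED
			<< DLB2_SYS_MSIX_MODE_MODE_V2_LOC) &
		       DLB2_SYS_MSIX_MODE_MODE_V2;
	else
		reg |= (DLB2_MSIX_MODE_COMPRESSED
			<< DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC) &
		       DLB2_SYS_MSIX_MODE_MODE_V2_5;

	DLB2_CSR_WR(hw, DLB2_SYS_MSIX_MODE, reg);
}
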
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
+
+#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
+#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
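+/*
+ * Note (illustrative, not hardware documentation): each field mask above has
+ * a matching *_LOC macro giving the bit offset of that field, so an alarm
+ * syndrome field can be decoded with a shift-and-mask, e.g.:
+ *
+ *	syndrome = (alarm_reg & DLB2_SYS_ALARM_HW_SYND_SYNDROME) >>
+ *		   DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC;
+ *
+ * where alarm_reg is assumed to be the 32-bit value read from
+ * DLB2_SYS_ALARM_HW_SYND.
+ */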
+
+#define DLB2_AQED_QID_FID_LIM(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
+	(0x20080000 + (x) * 0x1000)
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_ATM_QID2CQIDIX_00(x) \
+	(0x30080000 + (x) * 0x1000)
+#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
+#define DLB2_ATM_QID2CQIDIX(x, y) \
+	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_ATM_QID2CQIDIX_NUM 16
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
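+/*
+ * Layout note (derived from the macros above; usage is illustrative only):
+ * DLB2_ATM_QID2CQIDIX(x, y) selects one of DLB2_ATM_QID2CQIDIX_NUM (16)
+ * per-queue tables spaced 0x80000 apart, and each register packs four 8-bit
+ * fields (CQ_P0..CQ_P3). A hypothetical lookup for queue qid and table index
+ * idx would therefore access the register at DLB2_ATM_QID2CQIDIX(qid, idx).
+ */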
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
+#define DLB2_CHP_ORD_QID_SN_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
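+/*
+ * Convention note: CHP registers whose offsets moved between DLB v2.0 and
+ * v2.5 are defined three times: a DLB2_V2* address, a DLB2_V2_5* address,
+ * and a version-selecting macro that takes the hardware version as its first
+ * argument. A minimal usage sketch, assuming a CSR read helper such as
+ * DLB2_CSR_RD() and a stored hardware version hw->ver:
+ *
+ *	sn = DLB2_CSR_RD(hw, DLB2_CHP_ORD_QID_SN(hw->ver, queue_id));
+ */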
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
+#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
+	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
+#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
+	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
+#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
+#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
+#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
+#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
+#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
+#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
+#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
+#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
+#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
+	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
+#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
+	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
+#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
+#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
+#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
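+/*
+ * Example (sketch only, assuming DLB2_CSR_RD()/DLB2_CSR_WR() helpers):
+ * enabling 64-byte QE mode for load-balanced CQs is a read-modify-write of
+ * this register:
+ *
+ *	reg = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+ *	reg |= DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE;
+ *	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, reg);
+ */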
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
+#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
+#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
+#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
+#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
+#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_DP_DIR_CSR_CTRL 0x54000010
+#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
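
Every register in this generated map follows the same shape: per-version address
constants, a (ver)-parameterized selector macro, a *_RST reset value, and
mask/*_LOC pairs for each field. As a rough illustration of how such a header is
typically consumed (not code from this series; field_get() is an ad-hoc helper
and the header is assumed to be included), a field is recovered by masking and
shifting with one of those pairs:

#include <stdint.h>

/* Illustrative helper, not part of the patch: isolate the field selected by
 * 'mask' and shift it down to bit 0 using its *_LOC offset.
 */
static inline uint32_t
field_get(uint32_t reg_val, uint32_t mask, unsigned int loc)
{
	return (reg_val & mask) >> loc;
}

/*
 * With the reset value above (0xfefcfaf8):
 *   field_get(0xfefcfaf8,
 *             DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1,
 *             DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC) == 0xfa
 * while the register address itself is version dependent (0x84000000 on
 * v2.0, 0x74000000 on v2.5), which is what the (ver) selector resolves.
 */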
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
+	(0x96000000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
+	(0x96010000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
+#define DLB2_LSP_CQ2PRIOV_RST 0x0
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
+#define DLB2_V2LSP_CQ2QID0(x) \
+	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
+#define DLB2_LSP_CQ2QID0_RST 0x0
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
+	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
+#define DLB2_LSP_CQ2QID1_RST 0x0
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
+	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
+#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
+	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
+	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
+#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
+	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
+	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
+	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
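
Note that v2.0 and v2.5 share this register but not its layout: v2.0 carries an
IGNORE_DEPTH bit at bit 4, while on v2.5 that bit is reserved and the
reserved-zero mask widens. Callers therefore select the mask set by version as
well as the address. A minimal sketch, assuming the header is included and that
'ver' is the same value the selector macros compare against DLB2_HW_V2 (the
helper name and bool flag are illustrative, not from this series):

#include <stdbool.h>
#include <stdint.h>

static inline uint32_t
make_ldb_tkn_depth_sel(int ver, uint32_t depth_select, bool ignore_depth)
{
	uint32_t val;

	if (ver == DLB2_HW_V2) {
		val = (depth_select <<
		       DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC) &
		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2;
		if (ignore_depth)
			val |= DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2;
	} else {
		/* v2.5: no IGNORE_DEPTH bit; bits 4-31 are reserved-zero. */
		val = (depth_select <<
		       DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC) &
		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5;
	}

	return val;
}

The resulting value would then be written to the per-version address produced by
DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x).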
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
+	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
+	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
+#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
+	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
+#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
+	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
+#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
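
The QID2CQIDIX family is indexed two ways: x selects the queue (stride 0x1000)
and y selects one of DLB2_LSP_QID2CQIDIX_NUM (16) consecutive registers (stride
0x80000), each packing four byte-wide CQ fields. A hedged sketch of walking that
table (read_csr() is a placeholder for the driver's real accessor, and the loop
is illustrative rather than taken from this series):

#include <stdint.h>

uint32_t read_csr(uint32_t reg); /* placeholder register accessor */

static void
dump_qid2cqidix(int ver, unsigned int qid,
		uint8_t cq[DLB2_LSP_QID2CQIDIX_NUM * 4])
{
	unsigned int y;

	for (y = 0; y < DLB2_LSP_QID2CQIDIX_NUM; y++) {
		uint32_t val = read_csr(DLB2_LSP_QID2CQIDIX(ver, qid, y));

		/* Each register holds four byte-wide CQ fields, P0..P3. */
		cq[y * 4 + 0] = (val & DLB2_LSP_QID2CQIDIX_00_CQ_P0) >>
				DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC;
		cq[y * 4 + 1] = (val & DLB2_LSP_QID2CQIDIX_00_CQ_P1) >>
				DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC;
		cq[y * 4 + 2] = (val & DLB2_LSP_QID2CQIDIX_00_CQ_P2) >>
				DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC;
		cq[y * 4 + 3] = (val & DLB2_LSP_QID2CQIDIX_00_CQ_P3) >>
				DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC;
	}
}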
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
+	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
+#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
+	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
+#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
+#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
+#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
+#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
+#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
+#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
+#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
+	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
+
+#endif /* __DLB2_REGS_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1cb0b9f50..7ba6521ef 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -47,19 +47,6 @@ static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
 }
 
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -130,171 +117,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-int dlb2_resource_init(struct dlb2_hw *hw)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. This is application
-	 * dependent, but the driver interleaves port IDs as much as possible
-	 * to reduce the likelihood of this. This initial allocation maximizes
-	 * the average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	/* Zero-out resource tracking data structures */
-	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
-	memset(&hw->pf, 0, sizeof(hw->pf));
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries =
-		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
-{
-	union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
-
-	r0.field.disable = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
-}
-
 static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
@@ -5876,7 +5698,7 @@ static void dlb2_log_start_domain(struct dlb2_hw *hw,
 int
 dlb2_hw_start_domain(struct dlb2_hw *hw,
 		     u32 domain_id,
-		     __attribute((unused)) struct dlb2_start_domain_args *arg,
+		     struct dlb2_start_domain_args *arg,
 		     struct dlb2_cmd_response *resp,
 		     bool vdev_req,
 		     unsigned int vdev_id)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 503fdf317..2e13193bb 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -6,35 +6,8 @@
 #define __DLB2_RESOURCE_H
 
 #include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
 #include "dlb2_osdep_types.h"
 
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
@@ -1485,15 +1458,6 @@ int dlb2_notify_vf(struct dlb2_hw *hw,
  */
 int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
 
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
-
 /**
  * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
new file mode 100644
index 000000000..175b0799e
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "dlb2_user.h"
+
+#include "dlb2_hw_types_new.h"
+#include "dlb2_osdep.h"
+#include "dlb2_osdep_bitmap.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+
+#include "../../dlb2_priv.h"
+#include "../../dlb2_inline_fns.h"
+
+#define DLB2_DOM_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, domain_list)
+
+#define DLB2_FUNC_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, func_list)
+
+#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
new file mode 100644
index 000000000..51f31543c
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_RESOURCE_NEW_H
+#define __DLB2_RESOURCE_NEW_H
+
+#include "dlb2_user.h"
+#include "dlb2_osdep_types.h"
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
+#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a9d407f2f..5c0640b3c 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,9 +13,12 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_resource.h"
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "base/dlb2_regs_new.h"
+#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_resource_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_regs.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
 #include "../dlb2_priv.h"
@@ -103,25 +106,34 @@ dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
 
 static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
 {
-	dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
+	int version;
+	version = DLB2_HW_DEVICE_FROM_PCI_ID(dlb2_dev->pdev);
+
+	dlb2_clr_pmcsr_disable(&dlb2_dev->hw, version);
 }
 
 #define DLB2_READY_RETRY_LIMIT 1000
-static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
+static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
+					 int dlb_version)
 {
 	u32 retries = 0;
 
 	/* Allow at least 1s for the device to become active after power-on */
 	for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
-		union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
-		union dlb2_cfg_mstr_cfg_pm_status pm_st;
+		u32 idle_val;
+		u32 idle_dlb_func_idle;
+		u32 pm_st_val;
+		u32 pm_st_pmsm;
 		u32 addr;
 
-		addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
-		pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
-		idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
+		addr = DLB2_CM_CFG_PM_STATUS(dlb_version);
+		pm_st_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		addr = DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(dlb_version);
+		idle_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		idle_dlb_func_idle = idle_val &
+			DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE;
+		pm_st_pmsm = pm_st_val & DLB2_CM_CFG_PM_STATUS_PMSM;
+		if (pm_st_pmsm && idle_dlb_func_idle)
 			break;
 
 		rte_delay_ms(1);
@@ -141,6 +153,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 {
 	struct dlb2_dev *dlb2_dev;
 	int ret = 0;
+	int dlb_version = 0;
 
 	DLB2_INFO(dlb2_dev, "probe\n");
 
@@ -152,6 +165,8 @@ dlb2_probe(struct rte_pci_device *pdev)
 		goto dlb2_dev_malloc_fail;
 	}
 
+	dlb_version = DLB2_HW_DEVICE_FROM_PCI_ID(pdev);
+
 	/* PCI Bus driver has already mapped bar space into process.
 	 * Save off our IO register and FUNC addresses.
 	 */
@@ -191,7 +206,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	 */
 	dlb2_pf_enable_pm(dlb2_dev);
 
-	ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
+	ret = dlb2_pf_wait_for_device_ready(dlb2_dev, dlb_version);
 	if (ret)
 		goto wait_for_device_ready_fail;
 
@@ -203,7 +218,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
-	ret = dlb2_resource_init(&dlb2_dev->hw);
+	ret = dlb2_resource_init(&dlb2_dev->hw, dlb_version);
 	if (ret)
 		goto resource_init_fail;
 
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a..892298d7a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,7 +12,11 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
+#ifdef DLB2_USE_NEW_HEADERS
+#include "base/dlb2_hw_types_new.h"
+#else
 #include "base/dlb2_hw_types.h"
+#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index f57dc1584..1e815f20d 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,15 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types.h"
+#include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource.h"
+#include "base/dlb2_resource_new.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 03/27] event/dlb2: add v2.5 get_resources
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 01/27] event/dlb2: add v2.5 probe Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 04/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
                       ` (24 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a new credit scheme, where directed and load balanced
credits are unified, instead of having separate directed and load
balanced credit pools.

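As a rough illustration only (not part of the diff below), a caller of
dlb2_hw_get_num_resources() could read the credit counts out of the
version-dependent union along these lines. The helper name is invented
for the example; the structure, fields, and version enum values come
from this patch:

static u32 sketch_total_credits(struct dlb2_get_num_resources_args *args,
                                enum dlb2_hw_ver ver)
{
        /* DLB v2.5 reports a single combined credit pool... */
        if (ver == DLB2_HW_V2_5)
                return args->num_credits;

        /* ...while DLB v2.0 still reports the two separate pools. */
        return args->num_ldb_credits + args->num_dir_credits;
}
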
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                     | 20 ++++--
 drivers/event/dlb2/dlb2_user.h                | 14 +++-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 48 --------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 66 +++++++++++++++++++
 4 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 7f5b9141b..0048f6a1b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -132,17 +132,25 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	evdev_dlb2_default_info.max_event_ports =
 		dlb2->hw_rsrc_query_results.num_ldb_ports;
 
-	evdev_dlb2_default_info.max_num_events =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	/* Save off values used when creating the scheduling domain. */
 
 	handle->info.num_sched_domains =
 		dlb2->hw_rsrc_query_results.num_sched_domains;
 
-	handle->info.hw_rsrc_max.nb_events_limit =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	handle->info.hw_rsrc_max.num_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues +
 		dlb2->hw_rsrc_query_results.num_dir_ports;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index f4bda7822..b7d125dec 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -195,9 +195,12 @@ struct dlb2_create_sched_domain_args {
  *	contiguous range of history list entries.
  * - num_ldb_credits: Amount of available load-balanced QE storage.
  * - num_dir_credits: Amount of available directed QE storage.
+ * - response.status: Detailed error code. In certain cases, such as if the
+ *	ioctl request arg is invalid, the driver won't set status.
  */
 struct dlb2_get_num_resources_args {
 	/* Output parameters */
+	struct dlb2_cmd_response response;
 	__u32 num_sched_domains;
 	__u32 num_ldb_queues;
 	__u32 num_ldb_ports;
@@ -206,8 +209,15 @@ struct dlb2_get_num_resources_args {
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
 	__u32 max_contiguous_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 };
 
 /*
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 7ba6521ef..eda983d85 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -58,54 +58,6 @@ void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-
-	arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-
-	return 0;
-}
-
 void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 175b0799e..14b97dbf9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -257,3 +257,69 @@ void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
 	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
 }
 
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 04/27] event/dlb2: add v2.5 create sched domain
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (2 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 03/27] event/dlb2: add v2.5 get_resources Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 05/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
                       ` (23 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the domain creation logic to account for the DLB v2.5
credit scheme, the new register map, and the new register access
macros.

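For reference only (not part of the diff below), the v2.5 path programs
the combined credit pool through the flat mask/_LOC macros from
dlb2_regs_new.h instead of the old union bitfields. A minimal sketch,
assuming a unified domain->num_credits field for v2.5; the helper name
and that field name are illustrative, not taken verbatim from the
patch:

static void sketch_configure_domain_credits_v2_5(struct dlb2_hw *hw,
                                                 struct dlb2_hw_domain *domain)
{
        u32 reg = 0;

        /* Build the register value with the mask and bit-position
         * macros rather than a union bitfield, then write it out
         * with DLB2_CSR_WR.
         */
        reg |= (domain->num_credits << DLB2_CHP_CFG_VAS_CRD_COUNT_LOC) &
                DLB2_CHP_CFG_VAS_CRD_COUNT;

        DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
}
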
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_user.h                |  13 +-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++
 3 files changed, 707 insertions(+), 647 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index b7d125dec..9760e9bda 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -18,6 +18,7 @@ enum dlb2_error {
 	DLB2_ST_LDB_QUEUES_UNAVAILABLE,
 	DLB2_ST_LDB_CREDITS_UNAVAILABLE,
 	DLB2_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB2_ST_CREDITS_UNAVAILABLE,
 	DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
 	DLB2_ST_INVALID_DOMAIN_ID,
 	DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
@@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
 	"DLB2_ST_LDB_QUEUES_UNAVAILABLE",
 	"DLB2_ST_LDB_CREDITS_UNAVAILABLE",
 	"DLB2_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB2_ST_CREDITS_UNAVAILABLE",
 	"DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
 	"DLB2_ST_INVALID_DOMAIN_ID",
 	"DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
@@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
 	__u32 num_dir_ports;
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 	__u8 cos_strict;
 	__u8 padding1[3];
 };
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index eda983d85..99c3d031d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,21 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -69,636 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
-	union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
-
-	r0.field.count = domain->num_ldb_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
-
-	r1.field.count = domain->num_dir_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret < 0)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_credits(rsrcs,
-				      domain,
-				      args->num_ldb_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_credits(rsrcs,
-				      domain,
-				      args->num_dir_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret < 0)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-		    args->num_ldb_credits);
-	DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-		    args->num_dir_credits);
-}
-
-/**
- * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
- *	domain and its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
-	if (ret)
-		return ret;
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available domains\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (domain->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_domains contains configured domains.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
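The dlb2_get_next_ldb_port() logic removed above (and re-added to
dlb2_resource_new.c below) walks the candidate ports as a ring and prefers
a port whose immediate neighbors already belong to other domains, to reduce
the odds of consecutive load-balanced ports mapping to the same queues. A
rough, self-contained sketch of that first-pass selection (the port count,
types and sample layout are illustrative stand-ins, not the driver's):

#include <stdbool.h>
#include <stdio.h>

#define NUM_PORTS 8

struct port {
	bool owned;
	int domain;	/* valid only when owned */
};

/*
 * First pass: pick a free port whose ring neighbors are both owned by
 * domains other than the requesting one. Returns -1 if none qualifies,
 * in which case the driver falls back to weaker criteria.
 */
static int pick_port(const struct port p[NUM_PORTS], int domain)
{
	int i;

	for (i = 0; i < NUM_PORTS; i++) {
		int next = (i + 1) % NUM_PORTS;
		int prev = (i + NUM_PORTS - 1) % NUM_PORTS;

		if (p[i].owned)
			continue;

		if (p[next].owned && p[next].domain != domain &&
		    p[prev].owned && p[prev].domain != domain)
			return i;
	}

	return -1;
}

int main(void)
{
	struct port ports[NUM_PORTS] = {
		[0] = {true, 2}, [2] = {true, 2}, [3] = {true, 3},
	};

	/* Port 1 sits between two ports owned by another domain */
	printf("picked port %d for domain 1\n", pick_port(ports, 1));
	return 0;
}

If no candidate passes this test, the driver relaxes the criteria in two
further passes (one neighbor free, then both neighbors free) and finally
falls back to the head of the available list, exactly as in the hunks
above and below.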
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 14b97dbf9..8f97dd865 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -323,3 +323,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
 	}
 	return 0;
 }
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from specific class-of-service, but
+		 * fallback to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to verify args.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
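The rewritten dlb2_hw_create_sched_domain() keeps the original
verify-then-attach split (all resource checks happen before anything is
allocated, so no error unwinding is needed) and folds the v2.0/v2.5 credit
difference into a single dispatch on hw->ver. A compact, compilable sketch
of that dispatch, using stand-in types rather than the driver's structures:

#include <stdint.h>
#include <stdio.h>

enum hw_ver { HW_V2, HW_V2_5 };

struct domain {
	uint32_t num_ldb_credits;	/* v2.0 split pools */
	uint32_t num_dir_credits;
	uint32_t num_credits;		/* v2.5 combined pool */
};

/*
 * Mirrors the shape of dlb2_domain_attach_resources() above: split pools
 * are attached on v2.0, a single pool on v2.5. All names are stand-ins.
 */
static void attach_credits(enum hw_ver ver, struct domain *dom,
			   uint32_t ldb, uint32_t dir, uint32_t combined)
{
	if (ver == HW_V2) {
		dom->num_ldb_credits += ldb;
		dom->num_dir_credits += dir;
	} else {
		dom->num_credits += combined;
	}
}

int main(void)
{
	struct domain d = { 0 };

	attach_credits(HW_V2_5, &d, 0, 0, 8192);
	printf("combined credits attached: %u\n", (unsigned)d.num_credits);
	return 0;
}

On the real device the attached totals are then written to the CHP VAS
credit registers by dlb2_configure_domain_credits(), which performs the
same version dispatch.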
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 05/27] event/dlb2: add v2.5 domain reset
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (3 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 04/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
                       ` (22 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Convert the domain reset code to use the new register map and the
new register access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |    1 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1494 ----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 2562 +++++++++++++++++
 3 files changed, 2563 insertions(+), 1494 deletions(-)
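The register-access conversion mentioned in the commit message replaces
the old per-register unions (r0.field.x = val; write r0.val) with plain
u32 values built through masked writes such as DLB2_BITS_SET(), as already
seen in dlb2_resource_new.c in the previous patch. A self-contained sketch
of that masked-write pattern (BITS_SET and the mask value here are
simplified stand-ins, not the actual DLB2 macros or register map):

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified stand-in for DLB2_BITS_SET(): clear the field described by a
 * contiguous mask, then OR in the new value shifted to the mask's offset,
 * with the offset derived from the mask's lowest set bit.
 */
#define BITS_SET(reg, val, mask) \
	((reg) = ((reg) & ~(uint32_t)(mask)) | \
		 (((uint32_t)(val) << __builtin_ctz(mask)) & (mask)))

#define CFG_VAS_CRD_COUNT 0x00003FFF	/* illustrative 14-bit count field */

int main(void)
{
	uint32_t reg = 0;

	/* old style: r0.field.count = credits; then write r0.val
	 * new style: build the raw 32-bit value directly */
	BITS_SET(reg, 2048, CFG_VAS_CRD_COUNT);

	printf("reg = 0x%08x\n", (unsigned)reg);
	return 0;
}

In the driver the field masks come from the new register map header; the
sketch derives the field offset from the mask with __builtin_ctz() (a
GCC/Clang builtin), which the real macro may or may not do.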

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
index d58aa94ad..0f418ef5d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -187,6 +187,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 99c3d031d..041aeaeee 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,69 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			     struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
 static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_dir_pq_pair *port)
 {
@@ -140,37 +77,6 @@ static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	int ret;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		ret = dlb2_drain_dir_cq(hw, port);
-		if (ret < 0)
-			return ret;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -182,63 +88,6 @@ static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count;
 }
 
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -271,105 +120,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-
-	return r0.field.count;
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.token_count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
-static int dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			ret = dlb2_drain_ldb_cq(hw, port);
-			if (ret < 0)
-				return ret;
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-
-	return 0;
-}
-
 static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_ldb_queue *queue)
 {
@@ -388,90 +138,6 @@ static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count + r1.field.count + r2.field.count;
 }
 
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1455,1166 +1121,6 @@ dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
 	return domain->num_pending_removals;
 }
 
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_dir_vpp_v r1;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_ldb_vpp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_ldb_cq_int_enb r0 = { {0} };
-	union dlb2_chp_ldb_cq_wd_enb r1 = { {0} };
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-				    r0.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
-				    r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_dir_cq_int_enb r0 = { {0} };
-	union dlb2_chp_dir_cq_wd_enb r1 = { {0} };
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-			    r0.val);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
-			    r1.val);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		union dlb2_sys_ldb_qid2vqid r1 = { {0} };
-		union dlb2_sys_vf_ldb_vqid_v r2 = { {0} };
-		union dlb2_sys_vf_ldb_vqid2qid r3 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    r1.val);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID_V(idx),
-				    r2.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID2QID(idx),
-				    r3.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id *
-		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		union dlb2_sys_vf_dir_vqid_v r1 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r2 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id *
-				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID_V(idx),
-				    r1.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID2QID(idx),
-				    r2.val);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_sn_chk_enbl r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.en = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int i;
-
-			for (i = 0; i < DLB2_MAX_CQ_COMP_CHECK_LOOPS; i++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (i == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	union dlb2_sys_dir_pp_v r1;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    r1.val);
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_ldb_pp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queue's inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
-			+ virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_PIPE_GRP_0_SLT_SHFT(queue->sn_slot);
-			offs[1] = DLB2_RO_PIPE_GRP_1_SLT_SHFT(queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-	domain->num_ldb_credits = 0;
-
-	rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-	domain->num_dir_credits = 0;
-
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (!dlb2_list_empty(&domain->used_ldb_ports[i]))
-			break;
-	}
-
-	if (i == DLB2_NUM_COS_DOMAINS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i], typeof(*port));
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - Reset a DLB scheduling domain and its associated
- *	hardware resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Note: User software *must* stop sending to this domain's producer ports
- * before invoking this function, otherwise undefined behavior will result.
- *
- * Return: returns < 0 on error, 0 otherwise.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain  == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, false);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	ret = dlb2_domain_reset_software_state(hw, domain);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
 unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
 {
 	int i, num = 0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8f97dd865..641812412 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -34,6 +34,17 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function only needs to be called for configuration that can occur after the
+ * domain has started; prior to starting, applications cannot send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
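+
+/*
+ * Usage sketch (illustrative only; FOO_POST_START_CSR is a placeholder, not a
+ * real register): a configuration CSR written after the domain has started is
+ * expected to pair the write with a flush, e.g.:
+ *
+ *	DLB2_CSR_WR(hw, FOO_POST_START_CSR, val);
+ *	dlb2_flush_csr(hw);
+ */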
+
 static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 {
 	int i;
@@ -1019,3 +1030,2554 @@ int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, they
+	 * must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, they
+	 * must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply: they only disable/enable the CQs of enabled
+ * ports to which the queue is currently mapped. They should only be called
+ * by the dynamic CQ mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
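+
+/*
+ * Worked example of the CQ2QID packing above (illustrative only): the eight
+ * QID slots of a CQ are split across CQ2QID0 and CQ2QID1, four QID fields
+ * each with identical layouts, so slot 1 is written to the QID_P1 field of
+ * CQ2QID0 while slot 5 is written to the QID_P1 field of CQ2QID1.
+ */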
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules from increasing the queue's inflight
+	 * count.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
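+
+/*
+ * Caller sketch (illustrative only, not part of the driver): since a return
+ * of 1 means the mapping completes later via the scheduled worker, a caller
+ * that only needs the request to be accepted might treat both 0 and 1 as
+ * success:
+ *
+ *	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+ *	if (ret < 0)
+ *		return ret;
+ */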
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+	}
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0 scheduling domain and its associated
+ * resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if software does not meet the
+ *	    pre-condition above.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
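
For reference, a minimal sketch of how a PF-side caller might drive this
reset path. Only dlb2_reset_domain() itself is part of this patch; the
wrapper function below and its name are illustrative assumptions.

static int example_teardown_domain(struct dlb2_hw *hw, u32 domain_id)
{
	int ret;

	/* PF-originated request: vdev_req is false and vdev_id is unused */
	ret = dlb2_reset_domain(hw, domain_id, false, 0);
	if (ret)
		return ret; /* -EINVAL, -EFAULT or -ETIMEDOUT, see above */

	return 0;
}

The caller must have stopped all enqueues to the domain's producer ports
before this point, per the pre-condition in the function comment.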
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (4 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 05/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-14 19:20       ` Jerin Jacob
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 07/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
                       ` (21 subsequent siblings)
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions to add DLB 2.5 support
for creating load-balanced queues.
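
The mechanical change running through this patch is the move from the
per-register bitfield unions to plain u32 values built with the
DLB2_BIT_SET()/DLB2_BITS_SET() helpers, plus register macros parameterized
by hw->ver. A before/after sketch of the queue-valid write, lifted from the
removed and added hunks below (illustrative only, not an additional change):

	/* old v2.0-only style (removed below): per-register union */
	union dlb2_sys_ldb_qid_v r13 = { {0} };

	r13.field.qid_v = 1;
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);

	/* new combined v2.0/v2.5 style (added below): u32 + field macros */
	u32 reg = 0;

	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);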

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 397 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 391 +++++++++++++++++
 2 files changed, 391 insertions(+), 397 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 041aeaeee..f8b85bc57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1149,403 +1149,6 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 	return num;
 }
 
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_vf_ldb_vqid_v r0 = { {0} };
-	union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} };
-	union dlb2_sys_ldb_qid2vqid r2 = { {0} };
-	union dlb2_sys_ldb_vasqid_v r3 = { {0} };
-	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };
-	union dlb2_lsp_qid_aqed_active_lim r5 = { {0} };
-	union dlb2_aqed_pipe_qid_hid_width r6 = { {0} };
-	union dlb2_sys_ldb_qid_its r7 = { {0} };
-	union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} };
-	union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} };
-	union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} };
-	union dlb2_chp_ord_qid_sn_map r11 = { {0} };
-	union dlb2_sys_ldb_qid_cfg_v r12 = { {0} };
-	union dlb2_sys_ldb_qid_v r13 = { {0} };
-
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r3.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	r4.field.limit = args->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
-
-	r5.field.limit = queue->aqed_limit;
-
-	if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES)
-		r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
-		    r5.val);
-
-	switch (args->lock_id_comp_level) {
-	case 64:
-		r6.field.compress_code = 1;
-		break;
-	case 128:
-		r6.field.compress_code = 2;
-		break;
-	case 256:
-		r6.field.compress_code = 3;
-		break;
-	case 512:
-		r6.field.compress_code = 4;
-		break;
-	case 1024:
-		r6.field.compress_code = 5;
-		break;
-	case 2048:
-		r6.field.compress_code = 6;
-		break;
-	case 4096:
-		r6.field.compress_code = 7;
-		break;
-	case 0:
-	case 65536:
-		r6.field.compress_code = 0;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id),
-		    r6.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r7.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_QID_ITS(queue->id.phys_id),
-		    r7.val);
-
-	r8.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id),
-		    r8.val);
-
-	r9.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id),
-		    r9.val);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue doesn't use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	r10.field.qid_fid_limit = 512;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
-		    r10.val);
-
-	/* Configure SNs */
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	r11.field.mode = sn_group->mode;
-	r11.field.slot = queue->sn_slot;
-	r11.field.grp  = sn_group->id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
-
-	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
-	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		r0.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val);
-
-		r1.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val);
-
-		r2.field.vqid = queue->id.virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-			    r2.val);
-	}
-
-	r13.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (dlb2_list_empty(&domain->avail_ldb_queues)) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-/**
- * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 641812412..b52d2becd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3581,3 +3581,394 @@ int dlb2_reset_domain(struct dlb2_hw *hw,
 	/* Hardware reset complete. Reset the domain's software state */
 	return dlb2_domain_reset_software_state(hw, domain);
 }
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or the requested queue name is
+ *	    already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
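
For reference, a minimal sketch of how a caller might request an ordered
load-balanced queue through this interface. Only dlb2_hw_create_ldb_queue()
and its argument/response structures belong to the driver; the wrapper below
and the particular argument values are illustrative assumptions.

static int example_create_ordered_ldb_queue(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_create_ldb_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_sequence_numbers = 64;	/* ordered queue */
	args.num_qid_inflights = 64;	/* must not exceed sequence numbers */
	args.num_atomic_inflights = 0;	/* no atomic scheduling on this queue */
	args.depth_threshold = 256;

	/* PF-originated request: vdev_req is false and vdev_id is unused */
	ret = dlb2_hw_create_ldb_queue(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		return ret; /* resp.status holds the detailed dlb2_error code */

	return resp.id; /* physical queue ID for a PF request */
}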
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 07/27] event/dlb2: add v2.5 create ldb port
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (5 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 08/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
                       ` (20 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level code for creating ldb ports to account for the
new register map and hardware access macros.
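
One concrete piece of this rework is the CQ token depth encoding in
dlb2_ldb_port_configure_cq(): CQ depths are powers of two from 8 to 1024 and
map to token_depth_select codes 1 through 8, as the removed code below shows.
A small standalone helper capturing that mapping (the helper name is
hypothetical, for illustration only):

static int example_ldb_cq_token_depth_select(u32 cq_depth)
{
	/* Depths of 8 or less share code 1; each doubling adds one */
	if (cq_depth <= 8)
		return 1;
	if (cq_depth == 16)
		return 2;
	if (cq_depth == 32)
		return 3;
	if (cq_depth == 64)
		return 4;
	if (cq_depth == 128)
		return 5;
	if (cq_depth == 256)
		return 6;
	if (cq_depth == 512)
		return 7;
	if (cq_depth == 1024)
		return 8;

	return -EFAULT; /* invalid CQ depth */
}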

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 490 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 471 +++++++++++++++++
 2 files changed, 471 insertions(+), 490 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f8b85bc57..45d096eec 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1216,496 +1216,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_pp2vas r0 = { {0} };
-	union dlb2_sys_ldb_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_ldb_vpp2pp r1 = { {0} };
-		union dlb2_sys_ldb_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_ldb_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_cq_addr_l r0 = { {0} };
-	union dlb2_sys_ldb_cq_addr_u r1 = { {0} };
-	union dlb2_sys_ldb_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_ldb_tkn_depth_sel r4 = { {0} };
-	union dlb2_chp_hist_list_lim r5 = { {0} };
-	union dlb2_chp_hist_list_base r6 = { {0} };
-	union dlb2_lsp_cq_ldb_infl_lim r7 = { {0} };
-	union dlb2_chp_hist_list_push_ptr r8 = { {0} };
-	union dlb2_chp_hist_list_pop_ptr r9 = { {0} };
-	union dlb2_sys_ldb_cq_at r10 = { {0} };
-	union dlb2_sys_ldb_cq_pasid r11 = { {0} };
-	union dlb2_chp_ldb_cq2vas r12 = { {0} };
-	union dlb2_lsp_cq2priov r13 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_ldb_tkn_cnt r14 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r14.field.token_count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    r14.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	r5.field.limit = port->hist_list_entry_limit - 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(port->id.phys_id), r5.val);
-
-	r6.field.base = port->hist_list_entry_base;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_BASE(port->id.phys_id), r6.val);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	r7.field.limit = args->cq_history_list_size;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r7.val);
-
-	r8.field.push_ptr = r6.field.base;
-	r8.field.generation = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    r8.val);
-
-	r9.field.pop_ptr = r6.field.base;
-	r9.field.generation = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(port->id.phys_id), r12.val);
-
-	/* Disable the port's QID mappings */
-	r13.field.v = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r13.val);
-
-	return 0;
-}
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret < 0)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		if (dlb2_list_empty(&domain->avail_ldb_ports[args->cos_id])) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			if (!dlb2_list_empty(&domain->avail_ldb_ports[i]))
-				break;
-		}
-
-		if (i == DLB2_NUM_COS_DOMAINS) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-/**
- * dlb2_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
- *	its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id, i;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->cos_strict) {
-		cos_id = args->cos_id;
-
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[cos_id],
-					  typeof(*port));
-	} else {
-		int idx;
-
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			idx = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[idx],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-
-		cos_id = idx;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (port->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void
 dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 			      u32 domain_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index b52d2becd..2eb39e23d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3972,3 +3972,474 @@ int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 08/27] event/dlb2: add v2.5 create dir port
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (6 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 07/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
                       ` (19 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions to account for the new
register map and hardware access macros.
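
Note that the combined register map also parameterizes register addresses by
hardware version, e.g. DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id) in the
new code versus DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id) in the old one, so a
single code path can target either the v2.0 or the v2.5 offsets at run time.
A rough sketch of that dispatch follows; the enum only mirrors the driver's
version names and the offsets are invented for illustration (the real
per-version offsets live in dlb2_regs_new.h).

  #include <stdint.h>
  #include <stdio.h>

  /* Mirrors the driver's hardware-version enum; values are illustrative. */
  enum hw_ver_example { HW_V2 = 0, HW_V2_5 = 1 };

  /* Invented per-version offsets for a hypothetical register */
  #define EXAMPLE_DIR_CQ_WPTR_V2(id)   (0x40000000u + (id) * 0x1000u)
  #define EXAMPLE_DIR_CQ_WPTR_V2_5(id) (0x48000000u + (id) * 0x1000u)

  /* Version-dispatching wrapper in the DLB2_<REG>(ver, id) style */
  #define EXAMPLE_DIR_CQ_WPTR(ver, id) \
      ((ver) == HW_V2 ? EXAMPLE_DIR_CQ_WPTR_V2(id) : \
                        EXAMPLE_DIR_CQ_WPTR_V2_5(id))

  int main(void)
  {
      uint32_t port_id = 3;

      printf("v2.0 WPTR offset: 0x%x\n",
             (unsigned int)EXAMPLE_DIR_CQ_WPTR(HW_V2, port_id));
      printf("v2.5 WPTR offset: 0x%x\n",
             (unsigned int)EXAMPLE_DIR_CQ_WPTR(HW_V2_5, port_id));
      return 0;
  }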

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 426 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 414 +++++++++++++++++
 2 files changed, 414 insertions(+), 426 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 45d096eec..70c52e908 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,18 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -1216,25 +1204,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
 static struct dlb2_dir_pq_pair *
 dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 			    u32 id,
@@ -1256,401 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the queue is already configured, validate
-	 * the queue ID, its domain, and whether the queue is configured.
-	 */
-	if (args->queue_id != -1) {
-		struct dlb2_dir_pq_pair *queue;
-
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->queue_id,
-						    vdev_req,
-						    domain);
-
-		if (queue == NULL || queue->domain_id.phys_id !=
-				domain->id.phys_id ||
-				!queue->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the port's queue is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->queue_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_dir_pp2vas r0 = { {0} };
-	union dlb2_sys_dir_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vpp2pp r1 = { {0} };
-		union dlb2_sys_dir_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_dir_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_dir_cq_addr_l r0 = { {0} };
-	union dlb2_sys_dir_cq_addr_u r1 = { {0} };
-	union dlb2_sys_dir_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_dir_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
-	union dlb2_sys_dir_cq_fmt r9 = { {0} };
-	union dlb2_sys_dir_cq_at r10 = { {0} };
-	union dlb2_sys_dir_cq_pasid r11 = { {0} };
-	union dlb2_chp_dir_cq2vas r12 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_dir_tkn_cnt r13 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r13.field.count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    r13.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.disable_wb_opt = 0;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	r9.field.keep_pf_ppid = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(port->id.phys_id), r12.val);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret < 0)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - Allocate and initialize a DLB directed port
- *	and queue. The port/queue pair have the same ID and name.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->queue_id,
-						   vdev_req,
-						   domain);
-	else
-		port = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					  typeof(*port));
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 				     struct dlb2_hw_domain *domain,
 				     struct dlb2_dir_pq_pair *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 2eb39e23d..4e4b390dd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4443,3 +4443,417 @@ int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (7 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 08/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-03 10:26       ` Jerin Jacob
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 10/27] event/dlb2: add v2.5 map qid Timothy McDaniel
                       ` (18 subsequent siblings)
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions to account for the new
register map and hardware access macros.
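
One detail worth calling out: when the request comes from a vdev, the queue's
virtual ID is entered into per-vdev translation tables
(DLB2_SYS_VF_DIR_VQID_V and DLB2_SYS_VF_DIR_VQID2QID), and the table slot is
computed as vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) + virt_id. A trivial
sketch of that indexing, with an invented stand-in for the version-dependent
queue count:

  #include <stdio.h>

  /*
   * Invented stand-in for DLB2_MAX_NUM_DIR_QUEUES(hw->ver); the real count
   * comes from the hardware version and differs between v2.0 and v2.5.
   */
  #define EXAMPLE_MAX_DIR_QUEUES 64u

  /* One translation-table slot per (vdev, virtual queue ID) pair */
  static unsigned int example_vqid_slot(unsigned int vdev_id,
                                        unsigned int virt_id)
  {
      return vdev_id * EXAMPLE_MAX_DIR_QUEUES + virt_id;
  }

  int main(void)
  {
      /* vdev 2, virtual queue 5 -> slot 133 with a 64-entry stride */
      printf("VQID table slot: %u\n", example_vqid_slot(2, 5));
      return 0;
  }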

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 70c52e908..362deadfe 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,219 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-				domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 4e4b390dd..d4b401250 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4857,3 +4857,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 10/27] event/dlb2: add v2.5 map qid
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (8 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 11/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
                       ` (17 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions to account for the new
register map and hardware access macros.
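
As an illustration of the macro change (not driver code), the
standalone sketch below contrasts the old union-with-bitfields
register access with the new mask/shift style used throughout this
patch. The field masks and the BITS_SET() helper are simplified
stand-ins for the real register definitions and DLB2_BITS_SET().

/*
 * Standalone sketch only: hypothetical field masks, not the driver's
 * generated register header.
 */
#include <stdint.h>
#include <stdio.h>

#define CQ2PRIOV_PRIO      0x00ffffff  /* hypothetical priority field */
#define CQ2PRIOV_PRIO_LOC  0
#define CQ2PRIOV_V         0xff000000  /* hypothetical valid-bit field */
#define CQ2PRIOV_V_LOC     24

#define BITS_SET(x, val, mask) \
    ((x) |= (((val) << mask##_LOC) & (mask)))

int main(void)
{
    uint32_t cq2priov = 0;
    int slot = 3, prio = 5;

    /*
     * v2.0-only style (for reference):
     *     union dlb2_lsp_cq2priov r0;
     *     r0.field.v |= 1 << slot;
     *     r0.field.prio |= (prio & 0x7) << slot * 3;
     *
     * Combined v2.0/v2.5 style: mask/shift on a plain u32.
     */
    BITS_SET(cq2priov, 1 << slot, CQ2PRIOV_V);
    BITS_SET(cq2priov, (prio & 0x7) << slot * 3, CQ2PRIOV_PRIO);

    printf("CQ2PRIOV = 0x%08x\n", cq2priov);
    return 0;
}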

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 355 ---------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 418 ++++++++++++++++++
 2 files changed, 418 insertions(+), 355 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 362deadfe..d59df5e39 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,68 +1245,6 @@ dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
 }
 
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	union dlb2_lsp_cq2priov r0;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id));
-
-	r0.field.v |= 1 << slot;
-	r0.field.prio |= (args->priority & 0x7) << slot * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1355,299 +1293,6 @@ dlb2_get_domain_used_ldb_port(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	struct dlb2_ldb_queue *queue;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i, id;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state st;
-
-			if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-				DLB2_HW_ERR(hw,
-					    "[%s():%d] Internal error: port slot tracking failed\n",
-					    __func__, __LINE__);
-				return -EFAULT;
-			}
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
 			       u32 domain_id,
 			       struct dlb2_unmap_qid_args *args,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index d4b401250..5277a2643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5058,3 +5058,421 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	return 0;
 }
 
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 11/27] event/dlb2: add v2.5 unmap queue
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (9 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 10/27] event/dlb2: add v2.5 map qid Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain Timothy McDaniel
                       ` (16 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level functions to account for the new register map
and hardware access macros.
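
Because QID->CQ unmapping is asynchronous (see the dlb2_hw_unmap_qid()
comment below), a caller can poll dlb2_hw_pending_port_unmaps() until
resp->id drops to zero. The sketch below is only a usage illustration:
the wrapper name is made up, it assumes the driver's base headers for
u32 and the argument structures, and it issues a PF (non-vdev) request.

static int unmap_qid_and_wait(struct dlb2_hw *hw, u32 domain_id,
                              int port_id, int qid)
{
    struct dlb2_pending_port_unmaps_args poll_args = {0};
    struct dlb2_unmap_qid_args unmap_args = {0};
    struct dlb2_cmd_response resp = {0};
    int ret;

    unmap_args.port_id = port_id;
    unmap_args.qid = qid;

    ret = dlb2_hw_unmap_qid(hw, domain_id, &unmap_args, &resp, false, 0);
    if (ret)
        return ret;

    /* The unmap may finish later; poll the pending-removal count.
     * A real caller would bound the loop or back off between reads.
     */
    poll_args.port_id = port_id;
    do {
        ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &poll_args,
                                          &resp, false, 0);
        if (ret)
            return ret;
    } while (resp.id != 0);

    return 0;
}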

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 331 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 298 ++++++++++++++++
 2 files changed, 298 insertions(+), 331 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d59df5e39..ab5b080c1 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,26 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1265,317 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-	}
-
-	return NULL;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		return 0;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-}
-
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret, id;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
 static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 struct dlb2_cmd_response *resp,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 5277a2643..181922fe3 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5476,3 +5476,301 @@ int dlb2_hw_map_qid(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (10 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 11/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-14 19:23       ` Jerin Jacob
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 13/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
                       ` (15 subsequent siblings)
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level functions to account for the new register map
and hardware access macros.
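
A recurring refactor in this series, visible again in this patch, is
that the argument-verification helpers now hand back the looked-up
object through an out parameter, which removes the caller's second
lookup and its "internal error: not found" path. The standalone
sketch below illustrates the pattern with hypothetical types; it is
not driver code.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct domain {
    unsigned int id;
    bool configured;
    bool started;
};

static struct domain domains[4] = { { 0, true, false } };

static struct domain *get_domain_from_id(unsigned int id)
{
    return (id < 4) ? &domains[id] : NULL;
}

/* Old flow: verify, then look the domain up again in the caller.
 * New flow: verify once and return the domain via 'out_domain'.
 */
static int verify_start_domain_args(unsigned int id,
                                    struct domain **out_domain)
{
    struct domain *d = get_domain_from_id(id);

    if (!d || !d->configured || d->started)
        return -EINVAL;

    *out_domain = d;
    return 0;
}

int main(void)
{
    struct domain *d;

    if (verify_start_domain_args(0, &d) == 0) {
        d->started = true;   /* no second lookup needed */
        printf("domain %u started\n", d->id);
    }
    return 0;
}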

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 123 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 130 ++++++++++++++++++
 2 files changed, 130 insertions(+), 123 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ab5b080c1..1e66ebf50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,129 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - Lock the domain configuration
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @arg: User-provided arguments (unused, here for ioctl callback template).
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *arg,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(arg);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 u32 queue_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 181922fe3..e806a60ac 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5774,3 +5774,133 @@ int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @arg: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 13/27] event/dlb2: add v2.5 credit scheme
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (11 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 14/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
                       ` (14 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a different credit scheme than DLB v2.0. Specifically,
DLB v2.5 provides a single credit pool shared by both load-balanced
and directed traffic, whereas DLB v2.0 maintains a separate pool for
each traffic class.
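
The following is a minimal, self-contained sketch (not the PMD's code)
of how a combined pool versus split LDB/DIR pools changes the
enqueue-time credit check. The struct, enum, and helper are simplified
stand-ins for illustration only:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum hw_version { HW_V2, HW_V2_5 };

struct credit_state {
	enum hw_version version;
	uint32_t ldb_credits;	/* v2.0 only: load-balanced pool */
	uint32_t dir_credits;	/* v2.0 only: directed pool */
	uint32_t credits;	/* v2.5: single combined pool */
};

/* Try to consume one credit for the given traffic type. */
static bool take_credit(struct credit_state *s, bool is_directed)
{
	if (s->version == HW_V2_5) {
		/* One pool serves both LDB and DIR traffic. */
		if (s->credits == 0)
			return false;
		s->credits--;
		return true;
	}

	/* v2.0: separate pools per traffic class. */
	uint32_t *pool = is_directed ? &s->dir_credits : &s->ldb_credits;

	if (*pool == 0)
		return false;
	(*pool)--;
	return true;
}

int main(void)
{
	struct credit_state s = { .version = HW_V2_5, .credits = 2 };

	printf("dir ok: %d\n", take_credit(&s, true));	/* draws from the one pool */
	printf("ldb ok: %d\n", take_credit(&s, false));	/* same pool */
	printf("ldb ok: %d\n", take_credit(&s, false));	/* pool exhausted -> 0 */
	return 0;
}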

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c | 311 ++++++++++++++++++++++++++------------
 1 file changed, 212 insertions(+), 99 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 0048f6a1b..cc6495b76 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -436,8 +436,13 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	 */
 	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
 	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
-	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_ldb_credits;
+	}
 	evdev_dlb2_default_info.max_event_queues =
 		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
 			RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -451,7 +456,8 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 
 static int
 dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
-			    const struct dlb2_hw_rsrcs *resources_asked)
+			    const struct dlb2_hw_rsrcs *resources_asked,
+			    uint8_t device_version)
 {
 	int ret = 0;
 	struct dlb2_create_sched_domain_args *cfg;
@@ -468,8 +474,10 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	/* DIR ports and queues */
 
 	cfg->num_dir_ports = resources_asked->num_dir_ports;
-
-	cfg->num_dir_credits = resources_asked->num_dir_credits;
+	if (device_version == DLB2_HW_V2_5)
+		cfg->num_credits = resources_asked->num_credits;
+	else
+		cfg->num_dir_credits = resources_asked->num_dir_credits;
 
 	/* LDB queues */
 
@@ -509,8 +517,8 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 		break;
 	}
 
-	cfg->num_ldb_credits =
-		resources_asked->num_ldb_credits;
+	if (device_version == DLB2_HW_V2)
+		cfg->num_ldb_credits = resources_asked->num_ldb_credits;
 
 	cfg->num_atomic_inflights =
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
@@ -519,14 +527,24 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
 		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
 
-	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
-		     cfg->num_ldb_queues,
-		     resources_asked->num_ldb_ports,
-		     cfg->num_dir_ports,
-		     cfg->num_atomic_inflights,
-		     cfg->num_hist_list_entries,
-		     cfg->num_ldb_credits,
-		     cfg->num_dir_credits);
+	if (device_version == DLB2_HW_V2_5) {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_credits);
+	} else {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_ldb_credits,
+			     cfg->num_dir_credits);
+	}
 
 	/* Configure the QM */
 
@@ -606,7 +624,6 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	 */
 	if (dlb2->configured) {
 		dlb2_hw_reset_sched_domain(dev, true);
-
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
 			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
@@ -665,20 +682,26 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	/* 1 dir queue per dir port */
 	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;
 
-	/* Scale down nb_events_limit by 4 for directed credits, since there
-	 * are 4x as many load-balanced credits.
-	 */
-	rsrcs->num_ldb_credits = 0;
-	rsrcs->num_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		rsrcs->num_credits = 0;
+		if (rsrcs->num_ldb_queues || rsrcs->num_dir_ports)
+			rsrcs->num_credits = config->nb_events_limit;
+	} else {
+		/* Scale down nb_events_limit by 4 for directed credits,
+		 * since there are 4x as many load-balanced credits.
+		 */
+		rsrcs->num_ldb_credits = 0;
+		rsrcs->num_dir_credits = 0;
 
-	if (rsrcs->num_ldb_queues)
-		rsrcs->num_ldb_credits = config->nb_events_limit;
-	if (rsrcs->num_dir_ports)
-		rsrcs->num_dir_credits = config->nb_events_limit / 4;
-	if (dlb2->num_dir_credits_override != -1)
-		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+		if (rsrcs->num_ldb_queues)
+			rsrcs->num_ldb_credits = config->nb_events_limit;
+		if (rsrcs->num_dir_ports)
+			rsrcs->num_dir_credits = config->nb_events_limit / 4;
+		if (dlb2->num_dir_credits_override != -1)
+			rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+	}
 
-	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
+	if (dlb2_hw_create_sched_domain(handle, rsrcs, dlb2->version) < 0) {
 		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
 		return -ENODEV;
 	}
@@ -693,10 +716,15 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
 	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
 	dlb2->num_dir_queues = dlb2->num_dir_ports;
-	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
-	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
-	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
-	dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		dlb2->credit_pool = rsrcs->num_credits;
+		dlb2->max_credits = rsrcs->num_credits;
+	} else {
+		dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
+		dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
+		dlb2->dir_credit_pool = rsrcs->num_dir_credits;
+		dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	}
 
 	dlb2->configured = true;
 
@@ -1170,8 +1198,9 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (handle == NULL)
 		return -EINVAL;
@@ -1206,15 +1235,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* If there are no directed ports, the kernel driver will ignore this
-	 * port's directed credit settings. Don't use enqueue_depth if it would
-	 * require more directed credits than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* If there are no directed ports, the kernel driver will
+		 * ignore this port's directed credit settings. Don't use
+		 * enqueue_depth if it would require more directed credits
+		 * than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1249,8 +1281,12 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1298,17 +1334,26 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     qm_port->ldb_credits,
-		     qm_port->dir_credits);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->ldb_credits,
+			     qm_port->dir_credits);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->credits);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -1356,8 +1401,9 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (dlb2 == NULL || handle == NULL)
 		return -EINVAL;
@@ -1386,14 +1432,16 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* Don't use enqueue_depth if it would require more directed credits
-	 * than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* Don't use enqueue_depth if it would require more directed
+		 * credits than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1430,8 +1478,12 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1467,17 +1519,26 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     dir_credit_high_watermark,
-		     ldb_credit_high_watermark);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     dir_credit_high_watermark,
+			     ldb_credit_high_watermark);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     credit_high_watermark);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -2297,6 +2358,24 @@ dlb2_check_enqueue_hw_dir_credits(struct dlb2_port *qm_port)
 	return 0;
 }
 
+static inline int
+dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port)
+{
+	if (unlikely(qm_port->cached_credits == 0)) {
+		qm_port->cached_credits =
+			dlb2_port_credits_get(qm_port,
+					      DLB2_COMBINED_POOL);
+		if (unlikely(qm_port->cached_credits == 0)) {
+			DLB2_INC_STAT(
+			qm_port->ev_port->stats.traffic.tx_nospc_hw_credits, 1);
+			DLB2_LOG_DBG("credits exhausted\n");
+			return 1; /* credits exhausted */
+		}
+	}
+
+	return 0;
+}
+
 static __rte_always_inline void
 dlb2_pp_write(struct dlb2_enqueue_qe *qe4,
 	      struct process_local_port_data *port_data)
@@ -2565,12 +2644,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	if (!qm_queue->is_directed) {
 		/* Load balanced destination queue */
 
-		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_ldb_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_ldb_credits;
-
 		switch (ev->sched_type) {
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -2602,12 +2688,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	} else {
 		/* Directed destination queue */
 
-		if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_dir_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_dir_credits;
-
 		DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_DIRECTED\n");
 
 		*sched_type = DLB2_SCHED_DIRECTED;
@@ -2891,20 +2984,40 @@ dlb2_port_credits_inc(struct dlb2_port *qm_port, int num)
 
 	/* increment port credits, and return to pool if exceeds threshold */
 	if (!qm_port->is_directed) {
-		qm_port->cached_ldb_credits += num;
-		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_ldb_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_ldb_credits += num;
+			if (qm_port->cached_ldb_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_LDB_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_ldb_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	} else {
-		qm_port->cached_dir_credits += num;
-		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_dir_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_dir_credits += num;
+			if (qm_port->cached_dir_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_DIR_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_dir_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	}
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 14/27] event/dlb2: add v2.5 queue depth functions
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (12 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 13/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 15/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
                       ` (13 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the get queue depth functions for DLB v2.5, accounting for the
combined register map and the new hardware access macros.
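
As a rough illustration of the calculation being moved (not the
driver's code), the load-balanced queue depth is the sum of three
hardware counters. read_counter() below is a placeholder for the
driver's CSR read plus field-extraction macros; the names and values
are assumptions made for the sketch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder: in the driver this is a CSR read plus a field mask. */
static uint32_t read_counter(const char *name, uint32_t simulated_value)
{
	printf("reading %s\n", name);
	return simulated_value;
}

static uint32_t ldb_queue_depth(void)
{
	uint32_t aqed_active = read_counter("QID_AQED_ACTIVE_CNT", 3);
	uint32_t atm_active  = read_counter("QID_ATM_ACTIVE", 1);
	uint32_t enqueue_cnt = read_counter("QID_LDB_ENQUEUE_CNT", 4);

	/* Depth = atomic QEs held + atomic active + non-atomic enqueued. */
	return aqed_active + atm_active + enqueue_cnt;
}

int main(void)
{
	printf("ldb queue depth: %" PRIu32 "\n", ldb_queue_depth());
	return 0;
}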

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 160 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 +++++++++++++++
 2 files changed, 135 insertions(+), 160 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1e66ebf50..8c1d8c782 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,17 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -108,24 +97,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1204,134 +1175,3 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index e806a60ac..6a5af0c1e 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5904,3 +5904,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 15/27] event/dlb2: add v2.5 finish map/unmap
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (13 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 14/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 16/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
                       ` (12 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions with the map/unmap interfaces,
accounting for the new combined register file and hardware access macros.
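
A minimal sketch of the "finish map/unmap" pattern this patch moves
into dlb2_resource_new.c: walk every domain, let a per-domain helper
retire whatever work it can, and report how many procedures remain
pending. The types and the per-domain helper here are simplified
placeholders, not the PMD's structures:

#include <stdio.h>

#define MAX_DOMAINS 4

struct domain {
	int configured;
	unsigned int pending_unmaps;	/* stand-in for num_pending_removals */
};

/* Placeholder per-domain helper: retire one pending unmap per call and
 * return how many remain for this domain.
 */
static unsigned int domain_finish_unmaps(struct domain *d)
{
	if (!d->configured || d->pending_unmaps == 0)
		return 0;
	d->pending_unmaps--;
	return d->pending_unmaps;
}

static unsigned int finish_unmap_procedures(struct domain *domains)
{
	unsigned int i, num = 0;

	for (i = 0; i < MAX_DOMAINS; i++)
		num += domain_finish_unmaps(&domains[i]);

	return num;	/* caller reschedules itself while this is non-zero */
}

int main(void)
{
	struct domain domains[MAX_DOMAINS] = {
		{ .configured = 1, .pending_unmaps = 2 },
		{ .configured = 1, .pending_unmaps = 0 },
	};

	printf("still pending: %u\n", finish_unmap_procedures(domains));
	return 0;
}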

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1054 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    |   50 +
 2 files changed, 50 insertions(+), 1054 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 8c1d8c782..f05f750f5 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -54,1060 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
-			if (queue->id.virt_id == id)
-				return queue;
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
-		if (queue->id.virt_id == id)
-			return queue;
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration)
-		if (domain->id.virt_id == id)
-			return domain;
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 0;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 1;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_lsp_cq2qid0 r1;
-	union dlb2_atm_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix_00 r3;
-	union dlb2_lsp_qid2cqidix2_00 r4;
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id));
-
-	r0.field.v |= 1 << i;
-	r0.field.prio |= (priority & 0x7) << i * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(p->id.phys_id));
-	else
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		r1.field.qid_p0 = q->id.phys_id;
-	if (i == 1 || i == 5)
-		r1.field.qid_p1 = q->id.phys_id;
-	if (i == 2 || i == 6)
-		r1.field.qid_p2 = q->id.phys_id;
-	if (i == 3 || i == 7)
-		r1.field.qid_p3 = q->id.phys_id;
-
-	if (i < 4)
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID0(p->id.phys_id), r1.val);
-	else
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID1(p->id.phys_id), r1.val);
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r4.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		r2.field.cq_p0 |= 1 << i;
-		r3.field.cq_p0 |= 1 << i;
-		r4.field.cq_p0 |= 1 << i;
-		break;
-
-	case 1:
-		r2.field.cq_p1 |= 1 << i;
-		r3.field.cq_p1 |= 1 << i;
-		r4.field.cq_p1 |= 1 << i;
-		break;
-
-	case 2:
-		r2.field.cq_p2 |= 1 << i;
-		r3.field.cq_p2 |= 1 << i;
-		r4.field.cq_p2 |= 1 << i;
-		break;
-
-	case 3:
-		r2.field.cq_p3 |= 1 << i;
-		r3.field.cq_p3 |= 1 << i;
-		r4.field.cq_p3 |= 1 << i;
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r3.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(q->id.phys_id, p->id.phys_id / 4),
-		    r4.val);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r1;
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	/* Set the atomic scheduling haswork bit */
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.rlist_haswork_v = r0.field.count > 0;
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.nalb_haswork_v = (r1.field.count > 0);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.rlist_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.nalb_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_ldb_infl_lim r0 = { {0} };
-
-	r0.field.limit = queue->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_lsp_qid_ldb_infl_cnt r0;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules to cause the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	union dlb2_lsp_qid_ldb_infl_cnt r0 = { {0} };
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		union dlb2_lsp_qid_ldb_infl_cnt r0;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count)
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_atm_qid2cqidix_00 r1;
-	union dlb2_lsp_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix2_00 r3;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port_id));
-
-	r0.field.v &= ~(1 << i);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port_id), r0.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		r1.field.cq_p0 &= ~(1 << i);
-		r2.field.cq_p0 &= ~(1 << i);
-		r3.field.cq_p0 &= ~(1 << i);
-		break;
-
-	case 1:
-		r1.field.cq_p1 &= ~(1 << i);
-		r2.field.cq_p1 &= ~(1 << i);
-		r3.field.cq_p1 &= ~(1 << i);
-		break;
-
-	case 2:
-		r1.field.cq_p2 &= ~(1 << i);
-		r2.field.cq_p2 &= ~(1 << i);
-		r3.field.cq_p2 &= ~(1 << i);
-		break;
-
-	case 3:
-		r1.field.cq_p3 &= ~(1 << i);
-		r2.field.cq_p3 &= ~(1 << i);
-		r3.field.cq_p3 &= ~(1 << i);
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4),
-		    r1.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4),
-		    r3.val);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it wasn't manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-	if (r0.field.count > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 6a5af0c1e..8cd1762cf 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6039,3 +6039,53 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 16/27] event/dlb2: add v2.5 sparse cq mode
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (14 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 15/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 17/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
                       ` (11 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the sparse cq mode functions for DLB v2.5, accounting for the
new combined register map and hardware access macros.
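
For reference, the access-pattern change looks like this (an
illustrative sketch, not part of the patch; the register, field, and
macro names are taken from the diff below and assume the dlb2 base
headers are included):

    /* v2.0-only style: each CSR modeled as a union with named bit-fields */
    union dlb2_chp_cfg_chp_csr_ctrl r0;

    r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
    r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
    DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);

    /* combined v2.0/v2.5 style: plain u32 value plus bit-mask macros */
    u32 ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);

    DLB2_BIT_SET(ctrl, DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
    DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);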

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 17/27] event/dlb2: add v2.5 sequence number management
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (15 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 16/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 18/27] event/dlb2: consolidate resource header files into one file Timothy McDaniel
                       ` (10 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update sequence number management functions for DLB v2.5,
accounting for new combined register map and hardware access macros.
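
As a usage sketch (illustrative only, not part of the patch; it
assumes the dlb2 base headers are included and uses the reworked u32
prototype shown below), a caller picks one of the supported per-queue
allocations before the first ordered queue is configured:

    int ret;

    /* Request 256 sequence numbers per queue for SN group 0. */
    ret = dlb2_set_group_sequence_numbers(hw, 0, 256);
    if (ret == -EPERM) {
        /* Group already in use by an ordered queue; allocation is locked. */
    } else if (ret == -EINVAL) {
        /* Bad group ID, or 256 not one of {64, 128, 256, 512, 1024}. */
    }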

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  67 -----------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   4 +-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 105 ++++++++++++++++++
 3 files changed, 107 insertions(+), 69 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d53cce643..e8a9d52f6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,70 +32,3 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 2e13193bb..00a0b6b57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -792,8 +792,8 @@ int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
  * ordered queue is configured.
  */
 int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val);
+				    u32 group_id,
+				    u32 val);
 
 /**
  * dlb2_reset_domain() - reset a scheduling domain
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 0f18bfeff..927b65568 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6128,3 +6128,108 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
 }
 
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's occupancy count.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 18/27] event/dlb2: consolidate resource header files into one file
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (16 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 17/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 19/27] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
                       ` (9 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

A temporary version of dlb2_resource.h (dlb2_resource_new.h) was used
by the previous commits in this patch series. Merge the two files now
that DLB v2.5 support has been fully added to dlb2_resource_new.c.
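
With the consolidation, dlb2_resource.h becomes the single header
exporting the init/teardown entry points. Below is a hedged sketch of
the call order implied by the kernel-doc comments in this patch
(illustrative only; the DLB2_HW_V2_5 enumerator name and the exact
ordering are assumptions, not taken from the patch itself):

    #include "base/dlb2_resource.h"

    static int dlb2_probe_sketch(struct dlb2_hw *hw)
    {
        /* Power on the bulk of the device logic at initialization. */
        dlb2_clr_pmcsr_disable(hw, DLB2_HW_V2_5);

        /* Set up software state and global scheduling QoS registers. */
        return dlb2_resource_init(hw, DLB2_HW_V2_5);
    }

    static void dlb2_remove_sketch(struct dlb2_hw *hw)
    {
        /* Free software state on device reset or driver unload. */
        dlb2_resource_free(hw);
    }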

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |  1 -
 drivers/event/dlb2/pf/base/dlb2_resource.h    | 36 +++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  2 +-
 .../event/dlb2/pf/base/dlb2_resource_new.h    | 73 -------------------
 drivers/event/dlb2/pf/dlb2_main.c             |  2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |  2 +-
 6 files changed, 39 insertions(+), 77 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index 3b0ca84ba..d2ad85a89 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -18,7 +18,6 @@
 #include "../dlb2_main.h"
 
 /* TEMPORARY inclusion of both headers for merge */
-#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_log.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 00a0b6b57..684049cd6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -8,6 +8,42 @@
 #include "dlb2_user.h"
 #include "dlb2_osdep_types.h"
 
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 927b65568..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -11,7 +11,7 @@
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
 #include "dlb2_regs_new.h"
-#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+#include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
 #include "../../dlb2_inline_fns.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
deleted file mode 100644
index 51f31543c..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_RESOURCE_NEW_H
-#define __DLB2_RESOURCE_NEW_H
-
-#include "dlb2_user.h"
-#include "dlb2_osdep_types.h"
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
-#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 5c0640b3c..bac07f097 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -17,7 +17,7 @@
 
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 1e815f20d..880964a29 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -40,7 +40,7 @@
 #include "dlb2_main.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 19/27] event/dlb2: delete old dlb2_resource.c file
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (17 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 18/27] event/dlb2: consolidate resource header files into one file Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
                       ` (8 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, so delete
the temporary "old" file (dlb2_resource.c) and stop building it. The
new file (dlb2_resource_new.c) will be renamed to dlb2_resource.c in
the next commit.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build             |  1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c | 34 ----------------------
 2 files changed, 35 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index bded07e06..d8cfd377f 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -13,7 +13,6 @@ sources = files('dlb2.c',
 		'dlb2_xstats.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
-		'pf/base/dlb2_resource.c',
 		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
deleted file mode 100644
index e8a9d52f6..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (18 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 19/27] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-03 10:29       ` Jerin Jacob
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 21/27] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
                       ` (7 subsequent siblings)
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, and the
original file (dlb2_resource.c) was removed in the previous commit.
Rename dlb2_resource_new.c to dlb2_resource.c and update the meson
build file so that the renamed file is built.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build                                  | 2 +-
 .../event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename drivers/event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} (100%)

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index d8cfd377f..f22638b8e 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -13,7 +13,7 @@ sources = files('dlb2.c',
 		'dlb2_xstats.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
-		'pf/base/dlb2_resource_new.c',
+		'pf/base/dlb2_resource.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource_new.c
rename to drivers/event/dlb2/pf/base/dlb2_resource.c
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 21/27] event/dlb2: remove temporary file, dlb_hw_types.h
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (19 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 22/27] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
                       ` (6 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

As support for DLB v2.5 was added, modifications were made to
dlb2_hw_types_new.h, but the old file had to be preserved during the
port to meet the requirement that every individual patch in the series
compile cleanly. Now that DLB v2.5 support is completely integrated,
it is safe to remove the old (original) file, along with the
DLB2_USE_NEW_HEADERS define that selected which version of the file
was included in certain source files. The next commit will rename
dlb2_hw_types_new.h to dlb2_hw_types.h.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h | 335 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |   2 -
 drivers/event/dlb2/pf/dlb2_main.c          |   2 -
 drivers/event/dlb2/pf/dlb2_main.h          |   4 -
 drivers/event/dlb2/pf/dlb2_pf.c            |   2 -
 5 files changed, 345 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
deleted file mode 100644
index b007e1674..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ /dev/null
@@ -1,335 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_HW_TYPES_H
-#define __DLB2_HW_TYPES_H
-
-#include "../../dlb2_priv.h"
-#include "dlb2_user.h"
-
-#include "dlb2_osdep_list.h"
-#include "dlb2_osdep_types.h"
-
-#define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_NUM_ARB_WEIGHTS			8
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_WEIGHT				255
-#define DLB2_NUM_COS_DOMAINS			4
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
-#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-
-#define DLB2_FUNC_BAR				0
-#define DLB2_CSR_BAR				2
-
-#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
-#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
-
-#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
-#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
-
-#define DLB2_ALARM_HW_SOURCE_SYS 0
-#define DLB2_ALARM_HW_SOURCE_DLB 1
-
-#define DLB2_ALARM_HW_UNIT_CHP 4
-
-#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
-#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
-#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
-#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
-#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
-
-/*
- * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
- * the PF driver.
- */
-#define DLB2_DRV_LDB_PP_BASE   0x2300000
-#define DLB2_DRV_LDB_PP_STRIDE 0x1000
-#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
-				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_DRV_DIR_PP_BASE   0x2200000
-#define DLB2_DRV_DIR_PP_STRIDE 0x1000
-#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
-				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
-#define DLB2_LDB_PP_BASE       0x2100000
-#define DLB2_LDB_PP_STRIDE     0x1000
-#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
-				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
-#define DLB2_DIR_PP_BASE       0x2000000
-#define DLB2_DIR_PP_STRIDE     0x1000
-#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * \
-				DLB2_MAX_NUM_DIR_PORTS_V2_5)
-#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
-
-struct dlb2_resource_id {
-	u32 phys_id;
-	u32 virt_id;
-	u8 vdev_owned;
-	u8 vdev_id;
-};
-
-struct dlb2_freelist {
-	u32 base;
-	u32 bound;
-	u32 offset;
-};
-
-static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
-{
-	return list->bound - list->base - list->offset;
-}
-
-struct dlb2_hcw {
-	u64 data;
-	/* Word 3 */
-	u16 opaque;
-	u8 qid;
-	u8 sched_type:2;
-	u8 priority:3;
-	u8 msg_type:3;
-	/* Word 4 */
-	u16 lock_id;
-	u8 ts_flag:1;
-	u8 rsvd1:2;
-	u8 no_dec:1;
-	u8 cmp_id:4;
-	u8 cq_token:1;
-	u8 qe_comp:1;
-	u8 qe_frag:1;
-	u8 qe_valid:1;
-	u8 int_arm:1;
-	u8 error:1;
-	u8 rsvd:2;
-};
-
-struct dlb2_ldb_queue {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 num_qid_inflights;
-	u32 aqed_limit;
-	u32 sn_group; /* sn == sequence number */
-	u32 sn_slot;
-	u32 num_mappings;
-	u8 sn_cfg_valid;
-	u8 num_pending_additions;
-	u8 owned;
-	u8 configured;
-};
-
-/*
- * Directed ports and queues are paired by nature, so the driver tracks them
- * with a single data structure.
- */
-struct dlb2_dir_pq_pair {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 queue_configured;
-	u8 port_configured;
-	u8 owned;
-	u8 enabled;
-};
-
-enum dlb2_qid_map_state {
-	/* The slot doesn't contain a valid queue mapping */
-	DLB2_QUEUE_UNMAPPED,
-	/* The slot contains a valid queue mapping */
-	DLB2_QUEUE_MAPPED,
-	/* The driver is mapping a queue into this slot */
-	DLB2_QUEUE_MAP_IN_PROG,
-	/* The driver is unmapping a queue from this slot */
-	DLB2_QUEUE_UNMAP_IN_PROG,
-	/*
-	 * The driver is unmapping a queue from this slot, and once complete
-	 * will replace it with another mapping.
-	 */
-	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
-};
-
-struct dlb2_ldb_port_qid_map {
-	enum dlb2_qid_map_state state;
-	u16 qid;
-	u16 pending_qid;
-	u8 priority;
-	u8 pending_priority;
-};
-
-struct dlb2_ldb_port {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	/* The qid_map represents the hardware QID mapping state. */
-	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_limit;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 num_pending_removals;
-	u8 num_mappings;
-	u8 owned;
-	u8 enabled;
-	u8 configured;
-};
-
-struct dlb2_sn_group {
-	u32 mode;
-	u32 sequence_numbers_per_queue;
-	u32 slot_use_bitmap;
-	u32 id;
-};
-
-static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
-{
-	const u32 mask[] = {
-		0x0000ffff,  /* 64 SNs per queue */
-		0x000000ff,  /* 128 SNs per queue */
-		0x0000000f,  /* 256 SNs per queue */
-		0x00000003,  /* 512 SNs per queue */
-		0x00000001}; /* 1024 SNs per queue */
-
-	return group->slot_use_bitmap == mask[group->mode];
-}
-
-static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
-{
-	const u32 bound[] = {16, 8, 4, 2, 1};
-	u32 i;
-
-	for (i = 0; i < bound[group->mode]; i++) {
-		if (!(group->slot_use_bitmap & (1 << i))) {
-			group->slot_use_bitmap |= 1 << i;
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static inline void
-dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
-{
-	group->slot_use_bitmap &= ~(1 << slot);
-}
-
-static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
-{
-	int i, cnt = 0;
-
-	for (i = 0; i < 32; i++)
-		cnt += !!(group->slot_use_bitmap & (1 << i));
-
-	return cnt;
-}
-
-struct dlb2_hw_domain {
-	struct dlb2_function_resources *parent_func;
-	struct dlb2_list_entry func_list;
-	struct dlb2_list_head used_ldb_queues;
-	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head used_dir_pq_pairs;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	u32 total_hist_list_entries;
-	u32 avail_hist_list_entries;
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_offset;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u32 num_avail_aqed_entries;
-	u32 num_used_aqed_entries;
-	struct dlb2_resource_id id;
-	int num_pending_removals;
-	int num_pending_additions;
-	u8 configured;
-	u8 started;
-};
-
-struct dlb2_bitmap;
-
-struct dlb2_function_resources {
-	struct dlb2_list_head avail_domains;
-	struct dlb2_list_head used_domains;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	struct dlb2_bitmap *avail_hist_list_entries;
-	u32 num_avail_domains;
-	u32 num_avail_ldb_queues;
-	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	u32 num_avail_dir_pq_pairs;
-	u32 num_avail_qed_entries;
-	u32 num_avail_dqed_entries;
-	u32 num_avail_aqed_entries;
-	u8 locked; /* (VDEV only) */
-};
-
-/*
- * After initialization, each resource in dlb2_hw_resources is located in one
- * of the following lists:
- * -- The PF's available resources list. These are unconfigured resources owned
- *	by the PF and not allocated to a dlb2 scheduling domain.
- * -- A VDEV's available resources list. These are VDEV-owned unconfigured
- *	resources not allocated to a dlb2 scheduling domain.
- * -- A domain's available resources list. These are domain-owned unconfigured
- *	resources.
- * -- A domain's used resources list. These are domain-owned configured
- *	resources.
- *
- * A resource moves to a new list when a VDEV or domain is created or destroyed,
- * or when the resource is configured.
- */
-struct dlb2_hw_resources {
-	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
-	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
-	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
-};
-
-struct dlb2_mbox {
-	u32 *mbox;
-	u32 *isr_in_progress;
-};
-
-struct dlb2_sw_mbox {
-	struct dlb2_mbox vdev_to_pf;
-	struct dlb2_mbox pf_to_vdev;
-	void (*pf_to_vdev_inject)(void *arg);
-	void *pf_to_vdev_inject_arg;
-};
-
-struct dlb2_hw {
-	uint8_t ver;
-
-	/* BAR 0 address */
-	void *csr_kva;
-	unsigned long csr_phys_addr;
-	/* BAR 2 address */
-	void *func_kva;
-	unsigned long func_phys_addr;
-
-	/* Resource tracking */
-	struct dlb2_hw_resources rsrcs;
-	struct dlb2_function_resources pf;
-	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
-	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
-	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
-
-	/* Virtualization */
-	int virt_mode;
-	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
-	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
-};
-
-#endif /* __DLB2_HW_TYPES_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2f66b2c71..76b8b71db 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,8 +2,6 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types_new.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index bac07f097..3ab0c3ef5 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,8 +13,6 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_resource.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 892298d7a..a1fab7c43 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,11 +12,7 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
-#ifdef DLB2_USE_NEW_HEADERS
 #include "base/dlb2_hw_types_new.h"
-#else
-#include "base/dlb2_hw_types.h"
-#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 880964a29..b475ff0b1 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,8 +32,6 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 22/27] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (20 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 21/27] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 23/27] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
                       ` (5 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

The original and a "new" file were maintained during the
early portions of the patch series in order to ensure that
all individual patches compiled cleanly. It is now safe to
rename the new file, and use it unconditionally in all DLB
source files.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/{dlb2_hw_types_new.h => dlb2_hw_types.h} | 0
 drivers/event/dlb2/pf/base/dlb2_resource.c                      | 2 +-
 drivers/event/dlb2/pf/dlb2_main.c                               | 2 +-
 drivers/event/dlb2/pf/dlb2_main.h                               | 2 +-
 drivers/event/dlb2/pf/dlb2_pf.c                                 | 2 +-
 5 files changed, 4 insertions(+), 4 deletions(-)
 rename drivers/event/dlb2/pf/base/{dlb2_hw_types_new.h => dlb2_hw_types.h} (100%)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
rename to drivers/event/dlb2/pf/base/dlb2_hw_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 76b8b71db..54b0207db 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -4,7 +4,7 @@
 
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types_new.h"
+#include "dlb2_hw_types.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 3ab0c3ef5..1f6ccf8e4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -14,7 +14,7 @@
 #include <rte_errno.h>
 
 #include "base/dlb2_regs_new.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index a1fab7c43..9eeda482a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,7 +12,7 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index b475ff0b1..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -36,7 +36,7 @@
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_osdep.h"
 #include "base/dlb2_resource.h"
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 23/27] event/dlb2: delete old register map file, dlb2_regs.h
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (21 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 22/27] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 24/27] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
                       ` (4 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

All dependencies on the old register map have been removed, so
it can now be deleted.  The next commit will rename dlb2_regs_new.h
to dlb2_regs.h.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_regs.h | 2527 ------------------------
 1 file changed, 2527 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
deleted file mode 100644
index 43ecad4f8..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_regs.h
+++ /dev/null
@@ -1,2527 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_REGS_H
-#define __DLB2_REGS_H
-
-#include "dlb2_osdep_types.h"
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
-	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
-	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
-	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_flr_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
-	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
-union dlb2_func_pf_vf2pf_isr_pend {
-	struct {
-		u32 isr_pend : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
-	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
-	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
-	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-union dlb2_func_pf_vf_reset_in_progress {
-	struct {
-		u32 vf0_reset_in_progress : 1;
-		u32 vf1_reset_in_progress : 1;
-		u32 vf2_reset_in_progress : 1;
-		u32 vf3_reset_in_progress : 1;
-		u32 vf4_reset_in_progress : 1;
-		u32 vf5_reset_in_progress : 1;
-		u32 vf6_reset_in_progress : 1;
-		u32 vf7_reset_in_progress : 1;
-		u32 vf8_reset_in_progress : 1;
-		u32 vf9_reset_in_progress : 1;
-		u32 vf10_reset_in_progress : 1;
-		u32 vf11_reset_in_progress : 1;
-		u32 vf12_reset_in_progress : 1;
-		u32 vf13_reset_in_progress : 1;
-		u32 vf14_reset_in_progress : 1;
-		u32 vf15_reset_in_progress : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
-	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
-union dlb2_msix_mem_vector_ctrl {
-	struct {
-		u32 vec_mask : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
-	(0x20 + (x) * 0x4)
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-union dlb2_iosf_func_vf_bar_dsbl {
-	struct {
-		u32 func_vf_bar_dis : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_VAS 0x1000011c
-#define DLB2_SYS_TOTAL_VAS_RST 0x20
-union dlb2_sys_total_vas {
-	struct {
-		u32 total_vas : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
-#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
-union dlb2_sys_total_dir_ports {
-	struct {
-		u32 total_dir_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
-#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
-union dlb2_sys_total_ldb_ports {
-	struct {
-		u32 total_ldb_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
-#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
-union dlb2_sys_total_dir_qid {
-	struct {
-		u32 total_dir_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
-#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
-union dlb2_sys_total_ldb_qid {
-	struct {
-		u32 total_ldb_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
-#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-union dlb2_sys_total_dir_crds {
-	struct {
-		u32 total_dir_credits : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
-#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-union dlb2_sys_total_ldb_crds {
-	struct {
-		u32 total_ldb_credits : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
-#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-union dlb2_sys_alarm_pf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 meas : 1;
-		u32 debug : 7;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 cq_int_rearm : 1;
-		u32 dsi_error : 1;
-		u32 rsvd0 : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
-#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-union dlb2_sys_alarm_pf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
-#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-union dlb2_sys_alarm_pf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 rsvd0 : 3;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VPP_V(x) \
-	(0x10000f00 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-union dlb2_sys_vf_ldb_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VPP2PP(x) \
-	(0x10000f04 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-union dlb2_sys_vf_ldb_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VPP_V(x) \
-	(0x10000f08 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-union dlb2_sys_vf_dir_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VPP2PP(x) \
-	(0x10000f0c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-union dlb2_sys_vf_dir_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VQID_V(x) \
-	(0x10000f10 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-union dlb2_sys_vf_ldb_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_LDB_VQID2QID(x) \
-	(0x10000f14 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-union dlb2_sys_vf_ldb_vqid2qid {
-	struct {
-		u32 qid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID2VQID(x) \
-	(0x10000f18 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID2VQID_RST 0x0
-union dlb2_sys_ldb_qid2vqid {
-	struct {
-		u32 vqid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VQID_V(x) \
-	(0x10000f1c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-union dlb2_sys_vf_dir_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_VF_DIR_VQID2QID(x) \
-	(0x10000f20 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-union dlb2_sys_vf_dir_vqid2qid {
-	struct {
-		u32 qid : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_VASQID_V(x) \
-	(0x10000f24 + (x) * 0x1000)
-#define DLB2_SYS_LDB_VASQID_V_RST 0x0
-union dlb2_sys_ldb_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_VASQID_V(x) \
-	(0x10000f28 + (x) * 0x1000)
-#define DLB2_SYS_DIR_VASQID_V_RST 0x0
-union dlb2_sys_dir_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND2(x) \
-	(0x10000f48 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-union dlb2_sys_alarm_vf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 debug : 8;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 isz : 1;
-		u32 dsi_error : 1;
-		u32 dlbrsvd : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND1(x) \
-	(0x10000f44 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-union dlb2_sys_alarm_vf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_VF_SYND0(x) \
-	(0x10000f40 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-union dlb2_sys_alarm_vf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 vf_synd0_parity : 1;
-		u32 vf_synd1_parity : 1;
-		u32 vf_synd2_parity : 1;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_CFG_V(x) \
-	(0x10000f58 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-union dlb2_sys_ldb_qid_cfg_v {
-	struct {
-		u32 sn_cfg_v : 1;
-		u32 fid_cfg_v : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_ITS(x) \
-	(0x10000f54 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_ITS_RST 0x0
-union dlb2_sys_ldb_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_QID_V(x) \
-	(0x10000f50 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_V_RST 0x0
-union dlb2_sys_ldb_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_QID_ITS(x) \
-	(0x10000f64 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_ITS_RST 0x0
-union dlb2_sys_dir_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_QID_V(x) \
-	(0x10000f60 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_V_RST 0x0
-union dlb2_sys_dir_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
-	(0x10000fa8 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-union dlb2_sys_ldb_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_ldb_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_PASID(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-union dlb2_sys_ldb_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_AT(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AT_RST 0x0
-union dlb2_sys_ldb_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ISR(x) \
-	(0x10000f98 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
-/* CQ Interrupt Modes */
-#define DLB2_CQ_ISR_MODE_DIS  0
-#define DLB2_CQ_ISR_MODE_MSI  1
-#define DLB2_CQ_ISR_MODE_MSIX 2
-#define DLB2_CQ_ISR_MODE_ADI  3
-union dlb2_sys_ldb_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
-	(0x10000f94 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_ldb_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP_V(x) \
-	(0x10000f90 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP_V_RST 0x0
-union dlb2_sys_ldb_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP2VDEV(x) \
-	(0x10000f8c + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-union dlb2_sys_ldb_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_PP2VAS(x) \
-	(0x10000f88 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VAS_RST 0x0
-union dlb2_sys_ldb_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
-	(0x10000f84 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-union dlb2_sys_ldb_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
-	(0x10000f80 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-union dlb2_sys_ldb_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_FMT(x) \
-	(0x10000fec + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-union dlb2_sys_dir_cq_fmt {
-	struct {
-		u32 keep_pf_ppid : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
-	(0x10000fe8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-union dlb2_sys_dir_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_dir_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_PASID(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-union dlb2_sys_dir_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_AT(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AT_RST 0x0
-union dlb2_sys_dir_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ISR(x) \
-	(0x10000fd8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-union dlb2_sys_dir_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
-	(0x10000fd4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_dir_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP_V(x) \
-	(0x10000fd0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP_V_RST 0x0
-union dlb2_sys_dir_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP2VDEV(x) \
-	(0x10000fcc + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-union dlb2_sys_dir_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_PP2VAS(x) \
-	(0x10000fc8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VAS_RST 0x0
-union dlb2_sys_dir_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
-	(0x10000fc4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-union dlb2_sys_dir_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
-	(0x10000fc0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-union dlb2_sys_dir_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-union dlb2_sys_ingress_alarm_enbl {
-	struct {
-		u32 illegal_hcw : 1;
-		u32 illegal_pp : 1;
-		u32 illegal_pasid : 1;
-		u32 illegal_qid : 1;
-		u32 disabled_qid : 1;
-		u32 illegal_ldb_qid_cfg : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_ACK 0x10000400
-#define DLB2_SYS_MSIX_ACK_RST 0x0
-union dlb2_sys_msix_ack {
-	struct {
-		u32 msix_0_ack : 1;
-		u32 msix_1_ack : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
-#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-union dlb2_sys_msix_passthru {
-	struct {
-		u32 msix_0_passthru : 1;
-		u32 msix_1_passthru : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_MSIX_MODE 0x10000408
-#define DLB2_SYS_MSIX_MODE_RST 0x0
-/* MSI-X Modes */
-#define DLB2_MSIX_MODE_PACKED     0
-#define DLB2_MSIX_MODE_COMPRESSED 1
-union dlb2_sys_msix_mode {
-	struct {
-		u32 mode : 1;
-		u32 poll_mode : 1;
-		u32 poll_mask : 1;
-		u32 poll_lock : 1;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-union dlb2_sys_dir_cq_opt_clr {
-	struct {
-		u32 cq : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
-#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-union dlb2_sys_alarm_hw_synd {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 alarm : 1;
-		u32 cwd : 1;
-		u32 vf_pf_mb : 1;
-		u32 rsvd0 : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
-	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
-union dlb2_aqed_pipe_qid_fid_lim {
-	struct {
-		u32 qid_fid_limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
-	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
-union dlb2_aqed_pipe_qid_hid_width {
-	struct {
-		u32 compress_code : 3;
-		u32 rsvd0 : 29;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_QID2CQIDIX_00(x) \
-	(0x30080000 + (x) * 0x1000)
-#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
-#define DLB2_ATM_QID2CQIDIX(x, y) \
-	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_ATM_QID2CQIDIX_NUM 16
-union dlb2_atm_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_rdy_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_sched_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_dir_vas_crd {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_ldb_vas_crd {
-	struct {
-		u32 count : 15;
-		u32 rsvd0 : 17;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_RST 0x0
-union dlb2_chp_ord_qid_sn {
-	struct {
-		u32 sn : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN_MAP(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-union dlb2_chp_ord_qid_sn_map {
-	struct {
-		u32 mode : 3;
-		u32 slot : 4;
-		u32 rsvz0 : 1;
-		u32 grp : 1;
-		u32 rsvz1 : 1;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_SN_CHK_ENBL(x) \
-	(0x40200000 + (x) * 0x1000)
-#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-union dlb2_chp_sn_chk_enbl {
-	struct {
-		u32 en : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_DEPTH(x) \
-	(0x40280000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-union dlb2_chp_dir_cq_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_dir_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-union dlb2_chp_dir_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40480000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_dir_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_dir_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-union dlb2_chp_dir_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WPTR(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-union dlb2_chp_dir_cq_wptr {
-	struct {
-		u32 write_pointer : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ2VAS(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-union dlb2_chp_dir_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_BASE(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-union dlb2_chp_hist_list_base {
-	struct {
-		u32 base : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_LIM(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-union dlb2_chp_hist_list_lim {
-	struct {
-		u32 limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-union dlb2_chp_hist_list_pop_ptr {
-	struct {
-		u32 pop_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-union dlb2_chp_hist_list_push_ptr {
-	struct {
-		u32 push_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_DEPTH(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-union dlb2_chp_ldb_cq_depth {
-	struct {
-		u32 depth : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40980000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_ldb_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
-	(0x40a00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-union dlb2_chp_ldb_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_ldb_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_ldb_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
-	(0x40c00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-union dlb2_chp_ldb_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WPTR(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-union dlb2_chp_ldb_cq_wptr {
-	struct {
-		u32 write_pointer : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ2VAS(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-union dlb2_chp_ldb_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-union dlb2_chp_cfg_chp_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 dlb_cor_alarm_enable : 1;
-		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
-		u32 cfg_64bytes_qe_dir_cq_mode : 1;
-		u32 pad_write_ldb : 1;
-		u32 pad_write_dir : 1;
-		u32 pad_first_write_ldb : 1;
-		u32 pad_first_write_dir : 1;
-		u32 rsvz0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_dir_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_dir_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_dir_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
-#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-union dlb2_chp_cfg_dir_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
-#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-union dlb2_chp_cfg_dir_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_dir_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_dir_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_ldb_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
-#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
-#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_ldb_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_ldb_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
-#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-union dlb2_chp_ctrl_diag_02 {
-	struct {
-		u32 egress_credit_status_empty : 1;
-		u32 egress_credit_status_afull : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
-		u32 chp_lsp_tok_pipe_credit_status_empty : 1;
-		u32 chp_lsp_tok_pipe_credit_status_afull : 1;
-		u32 chp_rop_pipe_credit_status_empty : 1;
-		u32 chp_rop_pipe_credit_status_afull : 1;
-		u32 qed_to_cq_pipe_credit_status_empty : 1;
-		u32 qed_to_cq_pipe_credit_status_afull : 1;
-		u32 egress_lsp_token_credit_status_empty : 1;
-		u32 egress_lsp_token_credit_status_afull : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_DP_DIR_CSR_CTRL 0x54000010
-#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-union dlb2_dp_dir_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 rsvz0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
-	(0x96000000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_0_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
-	(0x96010000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_1_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
-#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
-union dlb2_ro_pipe_grp_sn_mode {
-	struct {
-		u32 sn_mode_0 : 3;
-		u32 rszv0 : 5;
-		u32 sn_mode_1 : 3;
-		u32 rszv1 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_ro_pipe_cfg_ctrl_general_0 {
-	struct {
-		u32 unit_single_step_mode : 1;
-		u32 rr_en : 1;
-		u32 rszv0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2PRIOV(x) \
-	(0xa0000000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2PRIOV_RST 0x0
-union dlb2_lsp_cq2priov {
-	struct {
-		u32 prio : 24;
-		u32 v : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID0(x) \
-	(0xa0080000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID0_RST 0x0
-union dlb2_lsp_cq2qid0 {
-	struct {
-		u32 qid_p0 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p1 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p2 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p3 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID1(x) \
-	(0xa0100000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID1_RST 0x0
-union dlb2_lsp_cq2qid1 {
-	struct {
-		u32 qid_p4 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p5 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p6 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p7 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_DSBL(x) \
-	(0xa0180000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-union dlb2_lsp_cq_dir_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
-	(0xa0200000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_dir_tkn_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0xa0280000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
-	struct {
-		u32 token_depth_select : 4;
-		u32 disable_wb_opt : 1;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0xa0300000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0xa0380000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_DSBL(x) \
-	(0xa0400000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-union dlb2_lsp_cq_ldb_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
-	(0xa0480000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
-	(0xa0500000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_cq_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
-	(0xa0580000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_cnt {
-	struct {
-		u32 token_count : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0xa0600000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0xa0680000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0xa0700000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
-	(0xa0780000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_dir_max_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0xa0800000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0xa0880000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0xa0900000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_dir_enqueue_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0xa0980000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_dir_depth_thrsh {
-	struct {
-		u32 thresh : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0xa0a00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-union dlb2_lsp_qid_aqed_active_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0xa0a80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-union dlb2_lsp_qid_aqed_active_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0xa0b00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0xa0b80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
-	(0xa0c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_atq_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0xa0c80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
-	(0xa0d00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
-	(0xa0d80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_qid_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX_00(x) \
-	(0xa0e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX_NUM 16
-union dlb2_lsp_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX2_00(x) \
-	(0xa1600000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX2_NUM 16
-union dlb2_lsp_qid2cqidix2_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
-	(0xa1e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_replay_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0xa1f00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_naldb_max_depth {
-	struct {
-		u32 depth : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0xa1f80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0xa2000000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0xa2080000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_atm_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0xa2100000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_naldb_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_ACTIVE(x) \
-	(0xa2180000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-union dlb2_lsp_qid_atm_active {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
-#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-union dlb2_lsp_ldb_sched_ctrl {
-	struct {
-		u32 cq : 8;
-		u32 qidix : 3;
-		u32 value : 1;
-		u32 nalb_haswork_v : 1;
-		u32 rlist_haswork_v : 1;
-		u32 slist_haswork_v : 1;
-		u32 inflight_ok_v : 1;
-		u32 aqed_nfull_v : 1;
-		u32 rsvz0 : 15;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
-#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-union dlb2_lsp_dir_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
-#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-union dlb2_lsp_dir_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
-#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
-#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
-#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-union dlb2_lsp_cfg_shdw_ctrl {
-	struct {
-		u32 transfer : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
-	(0xa4000074 + (x) * 4)
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-union dlb2_lsp_cfg_shdw_range_cos {
-	struct {
-		u32 bw_range : 9;
-		u32 rsvz0 : 22;
-		u32 no_extra_credit : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_lsp_cfg_ctrl_general_0 {
-	struct {
-		u32 disab_atq_empty_arb : 1;
-		u32 inc_tok_unit_idle : 1;
-		u32 disab_rlist_pri : 1;
-		u32 inc_cmp_unit_idle : 1;
-		u32 rsvz0 : 2;
-		u32 dir_single_op : 1;
-		u32 dir_half_bw : 1;
-		u32 dir_single_out : 1;
-		u32 dir_disab_multi : 1;
-		u32 atq_single_op : 1;
-		u32 atq_half_bw : 1;
-		u32 atq_single_out : 1;
-		u32 atq_disab_multi : 1;
-		u32 dirrpl_single_op : 1;
-		u32 dirrpl_half_bw : 1;
-		u32 dirrpl_single_out : 1;
-		u32 lbrpl_single_op : 1;
-		u32 lbrpl_half_bw : 1;
-		u32 lbrpl_single_out : 1;
-		u32 ldb_single_op : 1;
-		u32 ldb_half_bw : 1;
-		u32 ldb_disab_multi : 1;
-		u32 atm_single_sch : 1;
-		u32 atm_single_cmp : 1;
-		u32 ldb_ce_tog_arb : 1;
-		u32 rsvz1 : 1;
-		u32 smon0_valid_sel : 2;
-		u32 smon0_value_sel : 1;
-		u32 smon0_compare_sel : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
-#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
-union dlb2_cfg_mstr_diag_reset_sts {
-	struct {
-		u32 chp_pf_reset_done : 1;
-		u32 rop_pf_reset_done : 1;
-		u32 lsp_pf_reset_done : 1;
-		u32 nalb_pf_reset_done : 1;
-		u32 ap_pf_reset_done : 1;
-		u32 dp_pf_reset_done : 1;
-		u32 qed_pf_reset_done : 1;
-		u32 dqed_pf_reset_done : 1;
-		u32 aqed_pf_reset_done : 1;
-		u32 sys_pf_reset_done : 1;
-		u32 pf_reset_active : 1;
-		u32 flrsm_state : 7;
-		u32 rsvd0 : 13;
-		u32 dlb_proc_reset_done : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
-	struct {
-		u32 chp_pipeidle : 1;
-		u32 rop_pipeidle : 1;
-		u32 lsp_pipeidle : 1;
-		u32 nalb_pipeidle : 1;
-		u32 ap_pipeidle : 1;
-		u32 dp_pipeidle : 1;
-		u32 qed_pipeidle : 1;
-		u32 dqed_pipeidle : 1;
-		u32 aqed_pipeidle : 1;
-		u32 sys_pipeidle : 1;
-		u32 chp_unit_idle : 1;
-		u32 rop_unit_idle : 1;
-		u32 lsp_unit_idle : 1;
-		u32 nalb_unit_idle : 1;
-		u32 ap_unit_idle : 1;
-		u32 dp_unit_idle : 1;
-		u32 qed_unit_idle : 1;
-		u32 dqed_unit_idle : 1;
-		u32 aqed_unit_idle : 1;
-		u32 sys_unit_idle : 1;
-		u32 rsvd1 : 4;
-		u32 mstr_cfg_ring_idle : 1;
-		u32 mstr_cfg_mstr_idle : 1;
-		u32 mstr_flr_clkreq_b : 1;
-		u32 mstr_proc_idle : 1;
-		u32 mstr_proc_idle_masked : 1;
-		u32 rsvd0 : 2;
-		u32 dlb_func_idle : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
-#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
-union dlb2_cfg_mstr_cfg_pm_status {
-	struct {
-		u32 prochot : 1;
-		u32 pgcb_dlb_idle : 1;
-		u32 pgcb_dlb_pg_rdy_ack_b : 1;
-		u32 pmsm_pgcb_req_b : 1;
-		u32 pgbc_pmc_pg_req_b : 1;
-		u32 pmc_pgcb_pg_ack_b : 1;
-		u32 pmc_pgcb_fet_en_b : 1;
-		u32 pgcb_fet_en_b : 1;
-		u32 rsvz0 : 1;
-		u32 rsvz1 : 1;
-		u32 fuse_force_on : 1;
-		u32 fuse_proc_disable : 1;
-		u32 rsvz2 : 1;
-		u32 rsvz3 : 1;
-		u32 pm_fsm_d0tod3_ok : 1;
-		u32 pm_fsm_d3tod0_ok : 1;
-		u32 dlb_in_d3 : 1;
-		u32 rsvz4 : 7;
-		u32 pmsm : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
-union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
-	struct {
-		u32 disable : 1;
-		u32 rsvz0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
-	(0x1000 + (x) * 0x4)
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_vf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
-union dlb2_func_vf_vf2pf_mailbox_isr {
-	struct {
-		u32 isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
-	(0x2000 + (x) * 0x4)
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox_isr {
-	struct {
-		u32 pf_isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
-union dlb2_func_vf_vf_msi_isr_pend {
-	struct {
-		u32 isr_pend : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
-union dlb2_func_vf_vf_reset_in_progress {
-	struct {
-		u32 reset_in_progress : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
-#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
-union dlb2_func_vf_vf_msi_isr {
-	struct {
-		u32 vf_msi_isr : 32;
-	} field;
-	u32 val;
-};
-
-#endif /* __DLB2_REGS_H */
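
Each register removed above pairs an address macro with a union that
overlays named bitfields on the raw 32-bit value. A minimal sketch of
how such a definition is consumed by the driver (the DLB2_CSR_RD()
accessor and the port_id variable are assumptions for illustration,
not part of this patch):

	union dlb2_lsp_cq_ldb_infl_cnt infl;

	/* Read the CQ inflight-count CSR into the union's raw value. */
	infl.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port_id));

	/* Access fields by name rather than masking and shifting by hand. */
	if (infl.field.count != 0)
		return -EBUSY; /* CQ still has inflight events */
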
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 24/27] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (22 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 23/27] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 25/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
                       ` (3 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

All references to the old register map have been removed,
so it is safe to rename the new combined file that supports
both DLB v2.0 and DLB v2.5. Also update all places where this
file is included.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h                  | 2 +-
 drivers/event/dlb2/pf/base/{dlb2_regs_new.h => dlb2_regs.h} | 6 +++---
 drivers/event/dlb2/pf/base/dlb2_resource.c                  | 2 +-
 drivers/event/dlb2/pf/dlb2_main.c                           | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)
 rename drivers/event/dlb2/pf/base/{dlb2_regs_new.h => dlb2_regs.h} (99%)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 0f418ef5d..db9dfd240 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -10,7 +10,7 @@
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 
 #define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
 				 | (((val) << (mask##_LOC)) & (mask)))
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
similarity index 99%
rename from drivers/event/dlb2/pf/base/dlb2_regs_new.h
rename to drivers/event/dlb2/pf/base/dlb2_regs.h
index 593243d63..cdff5cb1f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_regs_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
@@ -2,8 +2,8 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#ifndef __DLB2_REGS_NEW_H
-#define __DLB2_REGS_NEW_H
+#ifndef __DLB2_REGS_H
+#define __DLB2_REGS_H
 
 #include "dlb2_osdep_types.h"
 
@@ -4409,4 +4409,4 @@
 #define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
 #define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
 
-#endif /* __DLB2_REGS_NEW_H */
+#endif /* __DLB2_REGS_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 54b0207db..3661b940c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -8,7 +8,7 @@
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 1f6ccf8e4..b6ec85b47 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_regs_new.h"
+#include "base/dlb2_regs.h"
 #include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 25/27] event/dlb2: update xstats for v2.5
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (23 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 24/27] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 26/27] doc/dlb2: update documentation " Timothy McDaniel
                       ` (2 subsequent siblings)
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Add DLB v2.5-specific information to xstats, such as metrics for the
new combined credit scheme.
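
As a sketch (not part of this patch), the new combined-credit counters
can be read through the standard eventdev xstats API once the PMD
exposes them; the exact stat name string below is an assumption, since
each PMD formats its own xstat names:

	/* Requires <rte_eventdev.h> and <inttypes.h>; dev_id is the
	 * eventdev device id and the stat name is illustrative only.
	 */
	unsigned int id;
	uint64_t val;

	val = rte_event_dev_xstats_by_name_get(dev_id, "tx_nospc_hw_credits",
					       &id);
	printf("tx_nospc_hw_credits = %" PRIu64 "\n", val);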

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_xstats.c | 41 ++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index b62e62060..d4c8d9903 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -9,6 +9,7 @@
 
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
+#include "pf/base/dlb2_regs.h"
 
 enum dlb2_xstats_type {
 	/* common to device and port */
@@ -21,6 +22,7 @@ enum dlb2_xstats_type {
 	zero_polls,			/**< Call dequeue burst and return 0 */
 	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
 	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
+	tx_nospc_hw_credits,		/**< Insufficient h/w credits */
 	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
 	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
 	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
@@ -29,6 +31,7 @@ enum dlb2_xstats_type {
 	inflight_events,
 	ldb_pool_size,
 	dir_pool_size,
+	pool_size,
 	/* port specific */
 	tx_new,				/**< Send an OP_NEW event */
 	tx_fwd,				/**< Send an OP_FORWARD event */
@@ -129,6 +132,9 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 		case tx_nospc_dir_hw_credits:
 			val += port->stats.traffic.tx_nospc_dir_hw_credits;
 			break;
+		case tx_nospc_hw_credits:
+			val += port->stats.traffic.tx_nospc_hw_credits;
+			break;
 		case tx_nospc_inflight_max:
 			val += port->stats.traffic.tx_nospc_inflight_max;
 			break;
@@ -159,6 +165,7 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 	case zero_polls:
 	case tx_nospc_ldb_hw_credits:
 	case tx_nospc_dir_hw_credits:
+	case tx_nospc_hw_credits:
 	case tx_nospc_inflight_max:
 	case tx_nospc_new_event_limit:
 	case tx_nospc_inflight_credits:
@@ -171,6 +178,8 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 		return dlb2->num_ldb_credits;
 	case dir_pool_size:
 		return dlb2->num_dir_credits;
+	case pool_size:
+		return dlb2->num_credits;
 	default: return -1;
 	}
 }
@@ -203,6 +212,9 @@ get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
 	case tx_nospc_dir_hw_credits:
 		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
 
+	case tx_nospc_hw_credits:
+		return ev_port->stats.traffic.tx_nospc_hw_credits;
+
 	case tx_nospc_inflight_max:
 		return ev_port->stats.traffic.tx_nospc_inflight_max;
 
@@ -357,6 +369,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -364,6 +377,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"inflight_events",
 		"ldb_pool_size",
 		"dir_pool_size",
+		"pool_size",
 	};
 	static const enum dlb2_xstats_type dev_types[] = {
 		rx_ok,
@@ -375,6 +389,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -382,6 +397,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		inflight_events,
 		ldb_pool_size,
 		dir_pool_size,
+		pool_size,
 	};
 	/* Note: generated device stats are not allowed to be reset. */
 	static const uint8_t dev_reset_allowed[] = {
@@ -394,6 +410,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* zero_polls */
 		0, /* tx_nospc_ldb_hw_credits */
 		0, /* tx_nospc_dir_hw_credits */
+		0, /* tx_nospc_hw_credits */
 		0, /* tx_nospc_inflight_max */
 		0, /* tx_nospc_new_event_limit */
 		0, /* tx_nospc_inflight_credits */
@@ -401,6 +418,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* inflight_events */
 		0, /* ldb_pool_size */
 		0, /* dir_pool_size */
+		0, /* pool_size */
 	};
 	static const char * const port_stats[] = {
 		"is_configured",
@@ -415,6 +433,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -448,6 +467,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -481,6 +501,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		1, /* zero_polls */
 		1, /* tx_nospc_ldb_hw_credits */
 		1, /* tx_nospc_dir_hw_credits */
+		1, /* tx_nospc_hw_credits */
 		1, /* tx_nospc_inflight_max */
 		1, /* tx_nospc_new_event_limit */
 		1, /* tx_nospc_inflight_credits */
@@ -935,8 +956,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
@@ -949,8 +970,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_QUEUES(dlb2->version); i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
@@ -1048,6 +1069,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 	fprintf(f, "\tnum_dir_credits = %u\n",
 		dlb2->hw_rsrc_query_results.num_dir_credits);
 
+	fprintf(f, "\tnum_credits = %u\n",
+		dlb2->hw_rsrc_query_results.num_credits);
+
 	/* Port level information */
 
 	for (i = 0; i < dlb2->num_ports; i++) {
@@ -1102,6 +1126,12 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\tdir_credits = %u\n",
 			p->qm_port.dir_credits);
 
+		fprintf(f, "\tcached_credits = %u\n",
+			p->qm_port.cached_credits);
+
+		fprintf(f, "\tcredits = %u\n",
+			p->qm_port.credits);
+
 		fprintf(f, "\tgenbit=%d, cq_idx=%d, cq_depth=%d\n",
 			p->qm_port.gen_bit,
 			p->qm_port.cq_idx,
@@ -1139,6 +1169,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_dir_hw_credits);
 
+		fprintf(f, "\t\ttx_nospc_hw_credits %" PRIu64 "\n",
+			p->stats.traffic.tx_nospc_hw_credits);
+
 		fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_inflight_max);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 26/27] doc/dlb2: update documentation for v2.5
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (24 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 25/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event Timothy McDaniel
  2021-04-03  9:51     ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Jerin Jacob
  27 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the DLB2 documentation for v2.5. Notable differences include
the new combined credit scheme. Also clean up a couple of sections
and remove a duplicate section.
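
The updated text describes how the v2.5 combined credit pool is sized
from the nb_events_limit field of struct rte_event_dev_config. A
minimal configuration sketch (the values are illustrative assumptions,
not taken from this patch):

	struct rte_event_dev_config cfg = {0};

	/* On DLB v2.5, nb_events_limit sizes the single combined credit
	 * pool; on DLB v2.0 it sizes the load-balanced pool, with the
	 * directed pool defaulting to nb_events_limit / 4.
	 */
	cfg.nb_events_limit = 4096;
	cfg.nb_event_queues = 4;
	cfg.nb_event_ports = 4;
	cfg.nb_event_queue_flows = 1024;
	cfg.nb_event_port_dequeue_depth = 32;
	cfg.nb_event_port_enqueue_depth = 32;
	cfg.dequeue_timeout_ns = 0;

	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		rte_panic("eventdev configure failed\n");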

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/eventdevs/dlb2.rst | 75 +++++++++++++----------------------
 1 file changed, 27 insertions(+), 48 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 94d2c77ff..94e46ea7d 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -4,7 +4,8 @@
 Driver for the Intel® Dynamic Load Balancer (DLB2)
 ==================================================
 
-The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.
+The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
+hardware versions 2.0 and 2.5.
 
 Prerequisites
 -------------
@@ -35,7 +36,7 @@ eventdev API and DLB2 misalign.
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There are 32 scheduling domainis the DLB2.
+DLB2 supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
@@ -67,42 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
-group is configured to contain either 1 queue with 1024 reorder entries, 2
-queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
-
-When a load-balanced queue is created, the PMD will configure a new sequence
-number group on-demand if num_sequence_numbers does not match a pre-existing
-group with available reorder buffer entries. If all sequence number groups are
-in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
-sequence number configuration.)
-
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
-load-balanced queues can use the full 16-bit flow ID range.
-
-Load-Balanced Queues
-~~~~~~~~~~~~~~~~~~~~
-
-A load-balanced queue can support atomic and ordered scheduling, or atomic and
-unordered scheduling, but not atomic and unordered and ordered scheduling. A
-queue's scheduling types are controlled by the event queue configuration.
-
-If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
-``nb_atomic_order_sequences`` determines the supported scheduling types.
-With non-zero ``nb_atomic_order_sequences``, the queue is configured for atomic
-and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL`` scheduling is
-supported by scheduling those events as ordered events.  Note that when the
-event is dequeued, its sched_type will be ``RTE_SCHED_TYPE_ORDERED``. Else if
-``nb_atomic_order_sequences`` is zero, the queue is configured for atomic and
-unordered scheduling. In this case, ``RTE_SCHED_TYPE_ORDERED`` is unsupported.
-
-If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
-dictates the queue's scheduling type.
-
-The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
+queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -157,6 +123,11 @@ type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
 will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
 port.
 
+Finally, even though all three event types are supported on the same QID by
+converting unordered events to ordered, this usage should be avoided where
+possible, since mixing types on the same queue consumes valuable reorder
+resources and imposes ordering on events that do not require it.
+
 Flow ID
 ~~~~~~~
 
@@ -169,13 +140,15 @@ Hardware Credits
 DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
-events are scheduled to ports. Refills come from credit pools, and each port is
-a member of a load-balanced credit pool and a directed credit pool. The
-load-balanced credits are used to enqueue to load-balanced queues, and directed
-credits are used for directed queues.
+events are scheduled to ports. Refills come from credit pools.
 
-A DLB2 eventdev contains one load-balanced and one directed credit pool. These
-pools' sizes are controlled by the nb_events_limit field in struct
+For DLB v2.5, there is a single credit pool used for both load balanced and
+directed traffic.
+
+For DLB v2.0, each port is a member of both a load-balanced credit pool and a
+directed credit pool. The load-balanced credits are used to enqueue to
+load-balanced queues, and directed credits are used for directed queues.
+These pools' sizes are controlled by the nb_events_limit field in struct
 rte_event_dev_config. The load-balanced pool is sized to contain
 nb_events_limit credits, and the directed pool is sized to contain
 nb_events_limit/4 credits. The directed pool size can be overridden with the
@@ -276,10 +249,16 @@ The DLB2 supports event priority and per-port queue service priority, as
 described in the eventdev header file. The DLB2 does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 8 event and queue service priority levels. For both priority
-types, the PMD uses the upper three bits of the priority field to determine the
-DLB2 priority, discarding the 5 least significant bits. The 5 least significant
-event priority bits are not preserved when an event is enqueued.
+DLB2 supports 4 event and queue service priority levels. For both priority
+types, the PMD uses the upper three bits of the priority field to determine the
+DLB2 priority, discarding the 5 least significant bits. The least significant of
+those three bits is then effectively ignored, binning events into 4 priority
+levels. The discarded 5 least significant event priority bits are not preserved
+when an event is enqueued.
+
+Note that event priority only applies within the same event type. When atomic
+and ordered or unordered events are enqueued to the same QID, priority across
+the types is always equal, and the types are served in a round-robin manner.
 
 Reconfiguration
 ~~~~~~~~~~~~~~~
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (25 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 26/27] doc/dlb2: update documentation " Timothy McDaniel
@ 2021-03-30 19:35     ` Timothy McDaniel
  2021-04-03 10:39       ` Jerin Jacob
  2021-04-03  9:51     ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Jerin Jacob
  27 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-03-30 19:35 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Updated eventdev device name to be dlb_event instead of
dlb2_event.  The new name will be used for all versions
of the DLB hardware. This change required corresponding changes
to the the directory name that contains the PMD, as well
as the documentation files, build infrastructure, and PMD
specific APIs.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 MAINTAINERS                                   |   6 +-
 app/test/test_eventdev.c                      |   6 +-
 config/rte_config.h                           |  11 +-
 doc/api/doxy-api-index.md                     |   2 +-
 doc/api/doxy-api.conf.in                      |   2 +-
 doc/guides/eventdevs/dlb.rst                  | 390 ++++++++++++++++++
 doc/guides/eventdevs/index.rst                |   2 +-
 doc/guides/rel_notes/release_21_05.rst        |   5 +
 drivers/event/{dlb2 => dlb}/dlb2.c            |  25 +-
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |   0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |   0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |   0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |   0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |   7 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |   8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |   0
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |   0
 drivers/event/{dlb2 => dlb}/meson.build       |   4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |   0
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |   0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |   0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |   0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |   0
 .../event/{dlb2 => dlb}/pf/base/dlb2_regs.h   |   0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     |   0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |   0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |   0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |   0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |   0
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |   6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      |  12 +-
 drivers/event/{dlb2 => dlb}/version.map       |   2 +-
 drivers/event/meson.build                     |   2 +-
 33 files changed, 440 insertions(+), 50 deletions(-)
 create mode 100644 doc/guides/eventdevs/dlb.rst
 rename drivers/event/{dlb2 => dlb}/dlb2.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (100%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_regs.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (100%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)

diff --git a/MAINTAINERS b/MAINTAINERS
index fa143160d..40610e169 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1196,10 +1196,10 @@ Cavium OCTEON TX timvf
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
 F: drivers/event/octeontx/timvf_*
 
-Intel DLB2
+Intel DLB
 M: Timothy McDaniel <timothy.mcdaniel@intel.com>
-F: drivers/event/dlb2/
-F: doc/guides/eventdevs/dlb2.rst
+F: drivers/event/dlb/
+F: doc/guides/eventdevs/dlb.rst
 
 Marvell OCTEON TX2
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..ba27bed02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1031,9 +1031,9 @@ test_eventdev_selftest_dpaa2(void)
 }
 
 static int
-test_eventdev_selftest_dlb2(void)
+test_eventdev_selftest_dlb(void)
 {
-	return test_eventdev_selftest_impl("dlb2_event", "");
+	return test_eventdev_selftest_impl("dlb_event", "");
 }
 
 REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
@@ -1043,4 +1043,4 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
 REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
 		test_eventdev_selftest_octeontx2);
 REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
-REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_dlb, test_eventdev_selftest_dlb);
diff --git a/config/rte_config.h b/config/rte_config.h
index b13c0884b..1aa852cd7 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -139,11 +139,10 @@
 /* QEDE PMD defines */
 #define RTE_LIBRTE_QEDE_FW ""
 
-/* DLB2 defines */
-#define RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL 1000
-#define RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE  0
-#undef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA 32
-#define RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH 256
+/* DLB defines */
+#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
+#undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
+#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
+#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index ca2c2f6e0..1c2865525 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -55,7 +55,7 @@ The public API headers are grouped by topics:
   [dpaa2_cmdif]        (@ref rte_pmd_dpaa2_cmdif.h),
   [dpaa2_qdma]         (@ref rte_pmd_dpaa2_qdma.h),
   [crypto_scheduler]   (@ref rte_cryptodev_scheduler.h),
-  [dlb2]               (@ref rte_pmd_dlb2.h),
+  [dlb]                (@ref rte_pmd_dlb.h),
   [ifpga]              (@ref rte_pmd_ifpga.h)
 
 - **memory**:
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 3c7ee4608..9aebec419 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -7,7 +7,7 @@ USE_MDFILE_AS_MAINPAGE  = @TOPDIR@/doc/api/doxy-api-index.md
 INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/bus/vdev \
                           @TOPDIR@/drivers/crypto/scheduler \
-                          @TOPDIR@/drivers/event/dlb2 \
+                          @TOPDIR@/drivers/event/dlb \
                           @TOPDIR@/drivers/mempool/dpaa2 \
                           @TOPDIR@/drivers/net/ark \
                           @TOPDIR@/drivers/net/bnxt \
diff --git a/doc/guides/eventdevs/dlb.rst b/doc/guides/eventdevs/dlb.rst
new file mode 100644
index 000000000..94e46ea7d
--- /dev/null
+++ b/doc/guides/eventdevs/dlb.rst
@@ -0,0 +1,390 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Intel Corporation.
+
+Driver for the Intel® Dynamic Load Balancer (DLB2)
+==================================================
+
+The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
+hardware versions 2.0 and 2.5.
+
+Prerequisites
+-------------
+
+Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup
+the basic DPDK environment.
+
+Configuration
+-------------
+
+The DLB2 PF PMD is a user-space PMD that uses VFIO to gain direct
+device access. To use this operation mode, the PCIe PF device must be bound
+to a DPDK-compatible VFIO driver, such as vfio-pci.
+
+Eventdev API Notes
+------------------
+
+The DLB2 provides the functions of a DPDK event device; specifically, it
+supports atomic, ordered, and parallel scheduling events from queues to ports.
+However, the DLB2 hardware is not a perfect match to the eventdev API. Some DLB2
+features are abstracted by the PMD such as directed ports.
+
+In general the dlb PMD is designed for ease-of-use and does not require a
+detailed understanding of the hardware, but these details are important when
+writing high-performance code. This section describes the places where the
+eventdev API and DLB2 misalign.
+
+Scheduling Domain Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DLB2 supports 32 scheduling domains.
+When one is configured, it allocates load-balanced and
+directed queues, ports, credits, and other hardware resources. Some
+resource allocations are user-controlled -- the number of queues, for example
+-- and others, like credit pools (one directed and one load-balanced pool per
+scheduling domain), are not.
+
+The DLB2 is a closed system eventdev, and as such the ``nb_events_limit`` device
+setup argument and the per-port ``new_event_threshold`` argument apply as
+defined in the eventdev header file. The limit is applied to all enqueues,
+regardless of whether it will consume a directed or load-balanced credit.
+
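+As an illustrative sketch only (the helper name, resource counts, and depth
+values below are examples, not values taken from the PMD), a configuration
+that sets ``nb_events_limit`` and a per-port ``new_event_threshold`` might
+look like this:
+
+    .. code-block:: c
+
+       #include <rte_eventdev.h>
+
+       static int
+       configure_dlb_eventdev(uint8_t dev_id)
+       {
+               struct rte_event_dev_info info;
+               struct rte_event_dev_config cfg = {0};
+               struct rte_event_port_conf pcfg = {0};
+
+               if (rte_event_dev_info_get(dev_id, &info) < 0)
+                       return -1;
+
+               /* Closed system: nb_events_limit bounds all in-flight events. */
+               cfg.nb_events_limit = info.max_num_events;
+               cfg.nb_event_queues = 4;
+               cfg.nb_event_ports = 4;
+               cfg.nb_single_link_event_port_queues = 1;
+               cfg.nb_event_queue_flows = 1024;
+               cfg.nb_event_port_dequeue_depth = 32;
+               cfg.nb_event_port_enqueue_depth = 32;
+               cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
+
+               if (rte_event_dev_configure(dev_id, &cfg) < 0)
+                       return -1;
+
+               /* Per-port back-pressure threshold for OP_NEW events. */
+               pcfg.new_event_threshold = cfg.nb_events_limit / 2;
+               pcfg.dequeue_depth = 32;
+               pcfg.enqueue_depth = 32;
+               return rte_event_port_setup(dev_id, 0, &pcfg);
+       }
+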
+Load-Balanced Queues
+~~~~~~~~~~~~~~~~~~~~
+
+A load-balanced queue can support atomic and ordered scheduling, or atomic and
+unordered scheduling, but not atomic and unordered and ordered scheduling. A
+queue's scheduling types are controlled by the event queue configuration.
+
+If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
+``nb_atomic_order_sequences`` determines the supported scheduling types.
+With non-zero ``nb_atomic_order_sequences``, the queue is configured for atomic
+and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL`` scheduling is
+supported by scheduling those events as ordered events.  Note that when the
+event is dequeued, its sched_type will be ``RTE_SCHED_TYPE_ORDERED``. Else if
+``nb_atomic_order_sequences`` is zero, the queue is configured for atomic and
+unordered scheduling. In this case, ``RTE_SCHED_TYPE_ORDERED`` is unsupported.
+
+If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
+dictates the queue's scheduling type.
+
+The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
+queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
+group is configured to contain either 1 queue with 1024 reorder entries, 2
+queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
+
+When a load-balanced queue is created, the PMD will configure a new sequence
+number group on-demand if num_sequence_numbers does not match a pre-existing
+group with available reorder buffer entries. If all sequence number groups are
+in use, no new group will be created and queue configuration will fail. (Note
+that when the PMD is used with a virtual DLB2 device, it cannot change the
+sequence number configuration.)
+
+The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
+the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
+load-balanced queues can use the full 16-bit flow ID range.
+
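+The two queue configurations above can be requested as in the following
+sketch (queue IDs, sizes, and the helper name are example values only):
+
+    .. code-block:: c
+
+       #include <rte_eventdev.h>
+
+       static int
+       setup_ldb_queues(uint8_t dev_id)
+       {
+               struct rte_event_queue_conf qconf = {0};
+
+               /* Atomic + ordered (parallel handled as ordered): non-zero
+                * nb_atomic_order_sequences selects the reorder buffer size.
+                */
+               qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+               qconf.nb_atomic_order_sequences = 512;
+               qconf.nb_atomic_flows = 1024; /* ignored by this PMD */
+               qconf.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+               if (rte_event_queue_setup(dev_id, 0, &qconf) < 0)
+                       return -1;
+
+               /* Atomic + unordered: ORDERED is unsupported on this queue. */
+               qconf.nb_atomic_order_sequences = 0;
+               return rte_event_queue_setup(dev_id, 1, &qconf);
+       }
+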
+Load-balanced and Directed Ports
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DLB2 ports come in two flavors: load-balanced and directed. The eventdev API
+does not have the same concept, but it has a similar one: ports and queues that
+are singly-linked (i.e. linked to a single queue or port, respectively).
+
+The ``rte_event_dev_info_get()`` function reports the number of available
+event ports and queues (among other things). For the DLB2 PMD, max_event_ports
+and max_event_queues report the number of available load-balanced ports and
+queues, and max_single_link_event_port_queue_pairs reports the number of
+available directed ports and queues.
+
+When a scheduling domain is created in ``rte_event_dev_configure()``, the user
+specifies ``nb_event_ports`` and ``nb_single_link_event_port_queues``, which
+control the total number of ports (load-balanced and directed) and the number
+of directed ports. Hence, the number of requested load-balanced ports is
+``nb_event_ports - nb_single_link_event_ports``. The ``nb_event_queues`` field
+specifies the total number of queues (load-balanced and directed). The number
+of directed queues comes from ``nb_single_link_event_port_queues``, since
+directed ports and queues come in pairs.
+
+When a port is setup, the ``RTE_EVENT_PORT_CFG_SINGLE_LINK`` flag determines
+whether it should be configured as a directed (the flag is set) or a
+load-balanced (the flag is unset) port. Similarly, the
+``RTE_EVENT_QUEUE_CFG_SINGLE_LINK`` queue configuration flag controls
+whether it is a directed or load-balanced queue.
+
+Load-balanced ports can only be linked to load-balanced queues, and directed
+ports can only be linked to directed queues. Furthermore, directed ports can
+only be linked to a single directed queue (and vice versa), and that link
+cannot change after the eventdev is started.
+
+The eventdev API does not have a directed scheduling type. To support directed
+traffic, the dlb PMD detects when an event is being sent to a directed queue
+and overrides its scheduling type. Note that the originally selected scheduling
+type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
+will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
+port.
+
+Finally, even though all three event types are supported on the same QID by
+converting unordered events to ordered, this usage should be avoided where
+possible, since mixing types on the same queue consumes valuable reorder
+resources and imposes ordering on events that do not require it.
+
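+A directed queue/port pair might be set up and linked as in this sketch
+(the IDs, depths, and threshold are illustrative values):
+
+    .. code-block:: c
+
+       #include <rte_eventdev.h>
+
+       static int
+       setup_directed_pair(uint8_t dev_id, uint8_t dir_qid, uint8_t dir_pid)
+       {
+               struct rte_event_queue_conf qconf = {0};
+               struct rte_event_port_conf pconf = {0};
+
+               qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+               if (rte_event_queue_setup(dev_id, dir_qid, &qconf) < 0)
+                       return -1;
+
+               pconf.new_event_threshold = 2048;
+               pconf.dequeue_depth = 32;
+               pconf.enqueue_depth = 32;
+               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_SINGLE_LINK;
+               if (rte_event_port_setup(dev_id, dir_pid, &pconf) < 0)
+                       return -1;
+
+               /* A directed port links to exactly one directed queue. */
+               return rte_event_port_link(dev_id, dir_pid, &dir_qid, NULL, 1);
+       }
+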
+Flow ID
+~~~~~~~
+
+The flow ID field is preserved in the event when it is scheduled in the
+DLB2.
+
+Hardware Credits
+~~~~~~~~~~~~~~~~
+
+DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
+event storage, with each unit of storage represented by a credit. A port spends
+a credit to enqueue an event, and hardware refills the ports with credits as the
+events are scheduled to ports. Refills come from credit pools.
+
+For DLB v2.5, there is a single credit pool used for both load balanced and
+directed traffic.
+
+For DLB v2.0, each port is a member of both a load-balanced credit pool and a
+directed credit pool. The load-balanced credits are used to enqueue to
+load-balanced queues, and directed credits are used for directed queues.
+These pools' sizes are controlled by the nb_events_limit field in struct
+rte_event_dev_config. The load-balanced pool is sized to contain
+nb_events_limit credits, and the directed pool is sized to contain
+nb_events_limit/4 credits. The directed pool size can be overridden with the
+num_dir_credits vdev argument, like so:
+
+    .. code-block:: console
+
+       --vdev=dlb1_event,num_dir_credits=<value>
+
+This can be used if the default allocation is too low or too high for the
+specific application needs. The PMD also supports a vdev arg that limits the
+max_num_events reported by rte_event_dev_info_get():
+
+    .. code-block:: console
+
+       --vdev=dlb1_event,max_num_events=<value>
+
+By default, max_num_events is reported as the total available load-balanced
+credits. If multiple DLB2-based applications are being used, it may be desirable
+to control how many load-balanced credits each application uses, particularly
+when application(s) are written to configure nb_events_limit equal to the
+reported max_num_events.
+
+Each port is a member of both credit pools. A port's credit allocation is
+defined by its low watermark, high watermark, and refill quanta. These three
+parameters are calculated by the dlb PMD like so:
+
+- The load-balanced high watermark is set to the port's enqueue_depth.
+  The directed high watermark is set to the minimum of the enqueue_depth and
+  the directed pool size divided by the total number of ports.
+- The refill quanta is set to half the high watermark.
+- The low watermark is set to the minimum of 16 and the refill quanta.
+
+When the eventdev is started, each port is pre-allocated a high watermark's
+worth of credits. For example, if an eventdev contains four ports with enqueue
+depths of 32 and a load-balanced credit pool size of 4096, each port will start
+with 32 load-balanced credits, and there will be 3968 credits available to
+replenish the ports. Thus, a single port is not capable of enqueueing up to the
+nb_events_limit (without any events being dequeued), since the other ports are
+retaining their initial credit allocation; in short, all ports must enqueue in
+order to reach the limit.
+
+If a port attempts to enqueue and has no credits available, the enqueue
+operation will fail and the application must retry the enqueue. Credits are
+replenished asynchronously by the DLB2 hardware.
+
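+The watermark arithmetic above can be made concrete with a small worked
+example; the pool size, port count, and enqueue depth are illustrative, and
+the exact rounding inside the PMD may differ:
+
+    .. code-block:: c
+
+       #include <rte_common.h> /* RTE_MIN, RTE_SET_USED */
+
+       static void
+       credit_watermark_example(void)
+       {
+               /* DLB v2.0 split pools with nb_events_limit = 4096. */
+               unsigned int dir_pool = 4096 / 4;
+               unsigned int nb_ports = 8;
+               unsigned int enq_depth = 64;
+
+               unsigned int lb_hwm = enq_depth;                          /* 64 */
+               unsigned int dir_hwm = RTE_MIN(enq_depth,
+                                              dir_pool / nb_ports);      /* 64 */
+               unsigned int quanta = lb_hwm / 2;                         /* 32 */
+               unsigned int low_wm = RTE_MIN(16u, quanta);               /* 16 */
+
+               RTE_SET_USED(dir_hwm);
+               RTE_SET_USED(low_wm);
+       }
+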
+Software Credits
+~~~~~~~~~~~~~~~~
+
+The DLB2 is a "closed system" event dev, and the DLB2 PMD layers a software
+credit scheme on top of the hardware credit scheme in order to comply with
+the per-port backpressure described in the eventdev API.
+
+The DLB2's hardware scheme is local to a queue/pipeline stage: a port spends a
+credit when it enqueues to a queue, and credits are later replenished after the
+events are dequeued and released.
+
+In the software credit scheme, a credit is consumed when a new (.op =
+RTE_EVENT_OP_NEW) event is injected into the system, and the credit is
+replenished when the event is released from the system (either explicitly with
+RTE_EVENT_OP_RELEASE or implicitly in dequeue_burst()).
+
+In this model, an event is "in the system" from its first enqueue into eventdev
+until it is last dequeued. If the event goes through multiple event queues, it
+is still considered "in the system" while a worker thread is processing it.
+
+A port will fail to enqueue if the number of events in the system exceeds its
+``new_event_threshold`` (specified at port setup time). A port will also fail
+to enqueue if it lacks enough hardware credits to enqueue; load-balanced
+credits are used to enqueue to a load-balanced queue, and directed credits are
+used to enqueue to a directed queue.
+
+The out-of-credit situations are typically transient, and an eventdev
+application using the DLB2 ought to retry its enqueues if they fail.
+If enqueue fails, the DLB2 PMD sets rte_errno as follows:
+
+- -ENOSPC: Credit exhaustion (either hardware or software)
+- -EINVAL: Invalid argument, such as port ID, queue ID, or sched_type.
+
+Depending on the pipeline the application has constructed, it's possible to
+enter a credit deadlock scenario wherein the worker thread lacks the credit
+to enqueue an event, and it must dequeue an event before it can recover the
+credit. If the worker thread retries its enqueue indefinitely, it will not
+make forward progress. Such deadlock is possible if the application has event
+"loops", in which an event in dequeued from queue A and later enqueued back to
+queue A.
+
+Due to this, workers should stop retrying after a time, release the events they
+are attempting to enqueue, and dequeue more events. It is important that the
+worker release the events and not simply set them aside to retry the enqueue
+again later, because the port has a limited history list size (by default, twice
+the port's dequeue_depth).
+
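+A hedged sketch of such a worker-side policy follows; the retry bound is an
+arbitrary example and the helper name is not part of the PMD:
+
+    .. code-block:: c
+
+       #include <rte_eventdev.h>
+
+       static void
+       forward_or_release(uint8_t dev_id, uint8_t port_id,
+                          struct rte_event *ev, uint16_t nb)
+       {
+               int retries = 100;
+               uint16_t sent = 0, i;
+
+               while (sent < nb && retries-- > 0)
+                       sent += rte_event_enqueue_burst(dev_id, port_id,
+                                                       &ev[sent], nb - sent);
+               if (sent == nb)
+                       return;
+
+               /* Give up: convert the remainder to releases so credits and
+                * the port's history list can drain.
+                */
+               for (i = sent; i < nb; i++)
+                       ev[i].op = RTE_EVENT_OP_RELEASE;
+               rte_event_enqueue_burst(dev_id, port_id, &ev[sent], nb - sent);
+       }
+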
+Priority
+~~~~~~~~
+
+The DLB2 supports event priority and per-port queue service priority, as
+described in the eventdev header file. The DLB2 does not support 'global' event
+queue priority established at queue creation time.
+
+DLB2 supports 4 event and queue service priority levels. For both priority
+types, the PMD uses the upper three bits of the priority field to determine the
+DLB2 priority, discarding the 5 least significant bits. The least significant of
+those three bits is then effectively ignored, binning events into 4 priority
+levels. The discarded 5 least significant event priority bits are not preserved
+when an event is enqueued.
+
+Note that event priority only applies within the same event type. When atomic
+and ordered or unordered events are enqueued to the same QID, priority across
+the types is always equal, and the types are served in a round-robin manner.
+
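+A minimal sketch of the binning described above (the helper is illustrative,
+not a PMD API):
+
+    .. code-block:: c
+
+       /* Take the upper three bits of the 0-255 event priority, then drop
+        * the lowest of those three bits, giving 4 levels (0 = highest).
+        * For example, RTE_EVENT_DEV_PRIORITY_NORMAL (128) maps to level 2.
+        */
+       static inline unsigned int
+       dlb_effective_prio_level(uint8_t ev_priority)
+       {
+               return (ev_priority >> 5) >> 1;
+       }
+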
+Reconfiguration
+~~~~~~~~~~~~~~~
+
+The Eventdev API allows one to reconfigure a device, its ports, and its queues
+by first stopping the device, calling the configuration function(s), then
+restarting the device. The DLB2 does not support configuring an individual queue
+or port without first reconfiguring the entire device, however, so there are
+certain reconfiguration sequences that are valid in the eventdev API but not
+supported by the PMD.
+
+Specifically, the PMD supports the following configuration sequence:
+1. Configure and start the device
+2. Stop the device
+3. (Optional) Reconfigure the device
+4. (Optional) If step 3 is run:
+
+   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
+   b. The reconfigured port(s) lose their previous queue links.
+
+5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
+6. Restart the device. If the device is reconfigured in step 3 but one or more
+   of its ports or queues are not, the PMD will apply their previous
+   configuration (including port->queue links) at this time.
+
+The PMD does not support the following configuration sequences:
+1. Configure and start the device
+2. Stop the device
+3. Setup queue or setup port
+4. Start the device
+
+This sequence is not supported because the event device must be reconfigured
+before its ports or queues can be.
+
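+The supported sequence corresponds to the following sketch; the queue and
+port IDs are example values and error handling is abbreviated:
+
+    .. code-block:: c
+
+       #include <rte_eventdev.h>
+
+       static int
+       reconfigure_device(uint8_t dev_id,
+                          const struct rte_event_dev_config *new_cfg,
+                          const struct rte_event_queue_conf *qconf,
+                          const struct rte_event_port_conf *pconf)
+       {
+               uint8_t qid = 0, pid = 0;
+
+               rte_event_dev_stop(dev_id);                        /* step 2 */
+               if (rte_event_dev_configure(dev_id, new_cfg) < 0)  /* step 3 */
+                       return -1;
+               rte_event_queue_setup(dev_id, qid, qconf);         /* step 4a */
+               rte_event_port_setup(dev_id, pid, pconf);          /* step 4b */
+               rte_event_port_link(dev_id, pid, &qid, NULL, 1);   /* step 5 */
+               return rte_event_dev_start(dev_id);                /* step 6 */
+       }
+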
+Deferred Scheduling
+~~~~~~~~~~~~~~~~~~~
+
+The DLB2 PMD's default behavior for managing a CQ is to "pop" the CQ once per
+dequeued event before returning from rte_event_dequeue_burst(). This frees the
+corresponding entries in the CQ, which enables the DLB2 to schedule more events
+to it.
+
+To support applications seeking finer-grained scheduling control -- for example
+deferring scheduling to get the best possible priority scheduling and
+load-balancing -- the PMD supports a deferred scheduling mode. In this mode,
+the CQ entry is not popped until the *subsequent* rte_event_dequeue_burst()
+call. This mode only applies to load-balanced event ports with dequeue depth of
+1.
+
+To enable deferred scheduling, use the defer_sched vdev argument like so:
+
+    .. code-block:: console
+
+       --vdev=dlb1_event,defer_sched=on
+
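+Deferred scheduling can also be requested per port through the PMD-specific
+token pop API (header and function names as renamed in this patch); a
+minimal sketch:
+
+    .. code-block:: c
+
+       #include <stdint.h>
+       #include "rte_pmd_dlb.h"
+
+       /* The target must be a load-balanced port with a dequeue depth of 1. */
+       static int
+       enable_deferred_pop(uint8_t dev_id, uint8_t port_id)
+       {
+               return rte_pmd_dlb_set_token_pop_mode(dev_id, port_id,
+                                                     DEFERRED_POP);
+       }
+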
+Atomic Inflights Allocation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
+inflight event in a temporary buffer that is divided among load-balanced
+queues. If a queue's atomic buffer storage fills up, this can result in
+head-of-line-blocking. For example:
+
+- An LDB queue allocated N atomic buffer entries
+- All N entries are filled with events from flow X, which is pinned to CQ 0.
+
+Until CQ 0 releases 1+ events, no other atomic flows for that LDB queue can be
+scheduled. The likelihood of this case depends on the eventdev configuration,
+traffic behavior, event processing latency, potential for a worker to be
+interrupted or otherwise delayed, etc.
+
+By default, the PMD allocates 16 buffer entries for each load-balanced queue,
+which provides an even division across all 128 queues but potentially wastes
+buffer space (e.g. if not all queues are used, or aren't used for atomic
+scheduling).
+
+The PMD provides a dev arg to override the default per-queue allocation. To
+increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
+
+    .. code-block:: console
+
+       --vdev=dlb1_event,atm_inflights=64
+
+QID Depth Threshold
+~~~~~~~~~~~~~~~~~~~
+
+DLB2 supports setting and tracking queue depth thresholds. Hardware tracks
+how full a queue is relative to its configured threshold. Four buckets are
+used:
+
+- Less than or equal to 50% of queue depth threshold
+- Greater than 50%, but less than or equal to 75% of depth threshold
+- Greater than 75%, but less than or equal to 100% of depth threshold
+- Greater than 100% of depth threshold
+
+Per queue threshold metrics are tracked in the DLB2 xstats, and are also
+returned in the impl_opaque field of each received event.
+
+The per-qid threshold can be specified as part of the device args, and
+can be applied to all queues, a range of queues, or a single queue, as
+shown below.
+
+    .. code-block:: console
+
+       --vdev=dlb2_event,qid_depth_thresh=all:<threshold_value>
+       --vdev=dlb2_event,qid_depth_thresh=qidA-qidB:<threshold_value>
+       --vdev=dlb2_event,qid_depth_thresh=qid:<threshold_value>
+
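+The per-queue bucket counters can be read back through the standard eventdev
+xstats API; the sketch below simply dumps all per-queue xstats rather than
+naming specific counters, since the exact stat names are PMD-defined:
+
+    .. code-block:: c
+
+       #include <inttypes.h>
+       #include <stdio.h>
+       #include <rte_common.h>
+       #include <rte_eventdev.h>
+
+       static void
+       dump_queue_xstats(uint8_t dev_id, uint8_t qid)
+       {
+               struct rte_event_dev_xstats_name names[128];
+               unsigned int ids[128];
+               uint64_t vals[128];
+               int i, n;
+
+               n = rte_event_dev_xstats_names_get(dev_id,
+                               RTE_EVENT_DEV_XSTATS_QUEUE, qid,
+                               names, ids, RTE_DIM(names));
+               if (n <= 0)
+                       return;
+               if (n > (int)RTE_DIM(names))
+                       n = RTE_DIM(names);
+
+               rte_event_dev_xstats_get(dev_id, RTE_EVENT_DEV_XSTATS_QUEUE,
+                                        qid, ids, vals, n);
+               for (i = 0; i < n; i++)
+                       printf("%s: %" PRIu64 "\n", names[i].name, vals[i]);
+       }
+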
+Class of service
+~~~~~~~~~~~~~~~~
+
+DLB2 supports provisioning the DLB2 bandwidth into 4 classes of service.
+
+- Class 4 corresponds to 40% of the DLB2 hardware bandwidth
+- Class 3 corresponds to 30% of the DLB2 hardware bandwidth
+- Class 2 corresponds to 20% of the DLB2 hardware bandwidth
+- Class 1 corresponds to 10% of the DLB2 hardware bandwidth
+- Class 0 corresponds to don't care
+
+The classes are applied globally to the set of ports contained in this
+scheduling domain, which is more appropriate for the bifurcated
+PMD than for the PF PMD, since the PF PMD supports just 1 scheduling
+domain.
+
+Class of service can be specified in the devargs, as follows:
+
+    .. code-block:: console
+
+       --vdev=dlb2_event,cos=<0..4>
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..4b915bf3e 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,7 +11,7 @@ application through the eventdev API.
     :maxdepth: 2
     :numbered:
 
-    dlb2
+    dlb
     dpaa
     dpaa2
     dsw
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8a601e0a7..5b25f1479 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -94,6 +94,11 @@ New Features
 
   * Added support for preferred busy polling.
 
+* **Updated DLB driver.**
+
+  * Added support for v2.5 hardware.
+  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
+
 * **Updated testpmd.**
 
   * Added a command line option to configure forced speed for Ethernet port.
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb/dlb2.c
similarity index 99%
rename from drivers/event/dlb2/dlb2.c
rename to drivers/event/dlb/dlb2.c
index cc6495b76..e5def9357 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb/dlb2.c
@@ -667,15 +667,8 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	}
 
 	/* Does this platform support umonitor/umwait? */
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
-		if (RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 0 &&
-		    RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 1) {
-			DLB2_LOG_ERR("invalid value (%d) for RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE, must be 0 or 1.\n",
-				     RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE);
-			return -EINVAL;
-		}
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG))
 		dlb2->umwait_allowed = true;
-	}
 
 	rsrcs->num_dir_ports = config->nb_single_link_event_port_queues;
 	rsrcs->num_ldb_ports  = config->nb_event_ports - rsrcs->num_dir_ports;
@@ -930,8 +923,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1623,7 +1617,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1718,8 +1712,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -2747,7 +2742,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	DLB2_INC_STAT(ev_port->stats.tx_op_cnt[ev->op], 1);
 	DLB2_INC_STAT(ev_port->stats.traffic.tx_ok, 1);
 
-#ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
+#ifndef RTE_LIBRTE_PMD_DLB_QUELL_STATS
 	if (ev->op != RTE_EVENT_OP_RELEASE) {
 		DLB2_INC_STAT(ev_port->stats.queue[ev->queue_id].enq_ok, 1);
 		DLB2_INC_STAT(ev_port->stats.tx_sched_cnt[*sched_type], 1);
@@ -3070,7 +3065,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL;
+		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB_POLL_INTERVAL;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb/dlb2_iface.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.c
rename to drivers/event/dlb/dlb2_iface.c
diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb/dlb2_iface.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.h
rename to drivers/event/dlb/dlb2_iface.h
diff --git a/drivers/event/dlb2/dlb2_inline_fns.h b/drivers/event/dlb/dlb2_inline_fns.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_inline_fns.h
rename to drivers/event/dlb/dlb2_inline_fns.h
diff --git a/drivers/event/dlb2/dlb2_log.h b/drivers/event/dlb/dlb2_log.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_log.h
rename to drivers/event/dlb/dlb2_log.h
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb/dlb2_priv.h
similarity index 99%
rename from drivers/event/dlb2/dlb2_priv.h
rename to drivers/event/dlb/dlb2_priv.h
index f3a9fe0aa..f11e08fca 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb/dlb2_priv.h
@@ -12,7 +12,7 @@
 #include <rte_config.h>
 #include "dlb2_user.h"
 #include "dlb2_log.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
 #define DLB2_INC_STAT(_stat, _incr_val) ((_stat) += _incr_val)
@@ -20,7 +20,8 @@
 #define DLB2_INC_STAT(_stat, _incr_val)
 #endif
 
-#define EVDEV_DLB2_NAME_PMD dlb2_event
+/* common name for all dlb devs (dlb v2.0, dlb v2.5 ...) */
+#define EVDEV_DLB2_NAME_PMD dlb_event
 
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
@@ -320,7 +321,7 @@ struct dlb2_port {
 	bool gen_bit;
 	uint16_t dir_credits;
 	uint32_t dequeue_depth;
-	enum dlb2_token_pop_mode token_pop_mode;
+	enum dlb_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
 	union {
diff --git a/drivers/event/dlb2/dlb2_selftest.c b/drivers/event/dlb/dlb2_selftest.c
similarity index 99%
rename from drivers/event/dlb2/dlb2_selftest.c
rename to drivers/event/dlb/dlb2_selftest.c
index 5cf66c552..019cbecdc 100644
--- a/drivers/event/dlb2/dlb2_selftest.c
+++ b/drivers/event/dlb/dlb2_selftest.c
@@ -22,7 +22,7 @@
 #include <rte_pause.h>
 
 #include "dlb2_priv.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #define MAX_PORTS 32
 #define MAX_QIDS 32
@@ -1105,13 +1105,13 @@ test_deferred_sched(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 1, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 1, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
@@ -1257,7 +1257,7 @@ test_delayed_pop(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DELAYED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DELAYED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb/dlb2_user.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_user.h
rename to drivers/event/dlb/dlb2_user.h
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb/dlb2_xstats.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_xstats.c
rename to drivers/event/dlb/dlb2_xstats.c
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb/meson.build
similarity index 89%
rename from drivers/event/dlb2/meson.build
rename to drivers/event/dlb/meson.build
index f22638b8e..4a4aed931 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb/meson.build
@@ -14,10 +14,10 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
-		'rte_pmd_dlb2.c',
+		'rte_pmd_dlb.c',
 		'dlb2_selftest.c'
 )
 
-headers = files('rte_pmd_dlb2.h')
+headers = files('rte_pmd_dlb.h')
 
 deps += ['mbuf', 'mempool', 'ring', 'pci', 'bus_pci']
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb/pf/base/dlb2_hw_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_hw_types.h
rename to drivers/event/dlb/pf/base/dlb2_hw_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb/pf/base/dlb2_osdep.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep.h
rename to drivers/event/dlb/pf/base/dlb2_osdep.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h b/drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_list.h b/drivers/event/dlb/pf/base/dlb2_osdep_list.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_list.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_list.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_types.h b/drivers/event/dlb/pf/base/dlb2_osdep_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_types.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb/pf/base/dlb2_regs.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_regs.h
rename to drivers/event/dlb/pf/base/dlb2_regs.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb/pf/base/dlb2_resource.c
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.c
rename to drivers/event/dlb/pf/base/dlb2_resource.c
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb/pf/base/dlb2_resource.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.h
rename to drivers/event/dlb/pf/base/dlb2_resource.h
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb/pf/dlb2_main.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.c
rename to drivers/event/dlb/pf/dlb2_main.c
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb/pf/dlb2_main.h
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.h
rename to drivers/event/dlb/pf/dlb2_main.h
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb/pf/dlb2_pf.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_pf.c
rename to drivers/event/dlb/pf/dlb2_pf.c
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.c b/drivers/event/dlb/rte_pmd_dlb.c
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.c
rename to drivers/event/dlb/rte_pmd_dlb.c
index 43990e46a..82d203366 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.c
+++ b/drivers/event/dlb/rte_pmd_dlb.c
@@ -5,14 +5,14 @@
 #include <rte_eventdev.h>
 #include <eventdev_pmd.h>
 
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
 
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode)
+				enum dlb_token_pop_mode mode)
 {
 	struct dlb2_eventdev *dlb2;
 	struct rte_eventdev *dev;
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb/rte_pmd_dlb.h
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.h
rename to drivers/event/dlb/rte_pmd_dlb.h
index 74399db01..d42b1f52a 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.h
+++ b/drivers/event/dlb/rte_pmd_dlb.h
@@ -3,13 +3,13 @@
  */
 
 /*!
- *  @file      rte_pmd_dlb2.h
+ *  @file      rte_pmd_dlb.h
  *
  *  @brief     DLB PMD-specific functions
  */
 
-#ifndef _RTE_PMD_DLB2_H_
-#define _RTE_PMD_DLB2_H_
+#ifndef _RTE_PMD_DLB_H_
+#define _RTE_PMD_DLB_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -23,7 +23,7 @@ extern "C" {
  *
  * Selects the token pop mode for a DLB2 port.
  */
-enum dlb2_token_pop_mode {
+enum dlb_token_pop_mode {
 	/* Pop the CQ tokens immediately after dequeueing. */
 	AUTO_POP,
 	/* Pop CQ tokens after (dequeue_depth - 1) events are released.
@@ -61,9 +61,9 @@ enum dlb2_token_pop_mode {
 
 __rte_experimental
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode);
+				enum dlb_token_pop_mode mode);
 
 #ifdef __cplusplus
 }
diff --git a/drivers/event/dlb2/version.map b/drivers/event/dlb/version.map
similarity index 60%
rename from drivers/event/dlb2/version.map
rename to drivers/event/dlb/version.map
index b1e4dff0f..3338a22c1 100644
--- a/drivers/event/dlb2/version.map
+++ b/drivers/event/dlb/version.map
@@ -5,5 +5,5 @@ DPDK_21 {
 EXPERIMENTAL {
 	global:
 
-	rte_pmd_dlb2_set_token_pop_mode;
+	rte_pmd_dlb_set_token_pop_mode;
 };
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index b7f9bf7c6..e9b0433f2 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -5,7 +5,7 @@ if is_windows
 	subdir_done()
 endif
 
-drivers = ['dlb2', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
+drivers = ['dlb', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
 	   'dsw']
 if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
 	dpdk_conf.has('RTE_ARCH_ARM64'))
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
                       ` (26 preceding siblings ...)
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event Timothy McDaniel
@ 2021-04-03  9:51     ` Jerin Jacob
  27 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03  9:51 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:06 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This patch series adds support for DLB v2.5 to
> the current DLB V2.0 PMD. The resulting PMD supports
> both hardware versions.
>
> The main differences between the DLB v2.5 and v2.0 hardware
> are:
> - Number of queues/ports
> - DLB v2.5 uses a combined credit pool, whereas DLB v2.0
>   splits credits into 2 pools, a directed credit pool and a
>   load balanced credit pool.
> - Different register maps, with different bit names and offsets

Please fix the following issues

[for-main]dell[dpdk-next-eventdev] $ ./devtools/check-git-log.sh -n 27
Wrong headline format:
        event/dlb2: add v2.5 get_resources
        event/dlb2: delete old dlb2_resource.c file
        event/dlb2: move dlb_resource_new.c to dlb_resource.c
        event/dlb2: remove temporary file, dlb_hw_types.h
        event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
        event/dlb2: delete old register map file, dlb2_regs.h
        event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
        event/dlb2: Change device name to dlb_event
Wrong headline uppercase:
        event/dlb2: Change device name to dlb_event

./devtools/checkpatches.sh -n 27

### event/dlb2: add v2.5 sparse cq mode

WARNING:EMAIL_SUBJECT: A patch subject line should describe the change
not the tool that found it
#4:
Subject: [PATCH] event/dlb2: add v2.5 sparse cq mode

WARNING:REPEATED_WORD: Possible repeated word: 'mode'
#6:
Update sparse cq mode mode functions for DLB v2.5, accounting for new

total: 0 errors, 2 warnings, 70 lines checked

### event/dlb2: Change device name to dlb_event

WARNING:REPEATED_WORD: Possible repeated word: 'the'
#9:
to the the directory name that contains the PMD, as well

total: 0 errors, 1 warnings, 666 lines checked

22/27 valid patches



>
> In order to support both hardware versions with the same PMD,
> and avoid code duplication, the file dlb2_resource.c required a
> complete rewrite. This required some creative staging of the changes
> in order to keep the individual patches relatively small, while
> also meeting the requirement that all individual patches in the set
> compile cleanly.
>
> To accomplish this, a few temporary files are used:
>
> dlb2_hw_types_new.h
> dlb2_resources_new.h
> dlb2_resources_new.c
>
> As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
> low level logic, the corresponding old code is removed from
> dlb2_resource.c, thus allowing both the original and new code to
> continue to compile and link cleanly. Once all of the code has been
> migrated to the new model, the old versions of the files are removed,
> and the new versions are renamed, effectively replacing the old original
> files.
>
> As you review the code, you can ignore the code deletions from
> dlb2_resource.c, as that file continues to shrink as the new
> corresponding logic is added to dlb2_resource_new.c.
>
> Changes since V1
> 1) Simplified subject text for all patches
> 2) correct typos/spelling
> 3) remove FPGA references
> 4) remove stale sysconf() references
> 5) fixed patches that had compilation issues
> 6) updated release notes
> 7) renamed dlb device from dlb2_event to dlb_event
> 8) moved dlb2 directory to dlb,to match name change
> 9) fixed other cases where "dlb2" was being used externally
>
> Timothy McDaniel (27):
>   event/dlb2: add v2.5 probe
>   event/dlb2: add v2.5 HW init
>   event/dlb2: add v2.5 get_resources
>   event/dlb2: add v2.5 create sched domain
>   event/dlb2: add v2.5 domain reset
>   event/dlb2: add V2.5 create ldb queue
>   event/dlb2: add v2.5 create ldb port
>   event/dlb2: add v2.5 create dir port
>   event/dlb2: add v2.5 create dir queue
>   event/dlb2: add v2.5 map qid
>   event/dlb2: add v2.5 unmap queue
>   event/dlb2: add v2.5 start domain
>   event/dlb2: add v2.5 credit scheme
>   event/dlb2: add v2.5 queue depth functions
>   event/dlb2: add v2.5 finish map/unmap
>   event/dlb2: add v2.5 sparse cq mode
>   event/dlb2: add v2.5 sequence number management
>   event/dlb2: consolidate resource header files into one file
>   event/dlb2: delete old dlb2_resource.c file
>   event/dlb2: move dlb_resource_new.c to dlb_resource.c
>   event/dlb2: remove temporary file, dlb_hw_types.h
>   event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h
>   event/dlb2: delete old register map file, dlb2_regs.h
>   event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h
>   event/dlb2: update xstats for v2.5
>   doc/dlb2: update documentation for v2.5
>   event/dlb2: Change device name to dlb_event
>
>  MAINTAINERS                                   |    6 +-
>  app/test/test_eventdev.c                      |    6 +-
>  config/rte_config.h                           |   11 +-
>  doc/api/doxy-api-index.md                     |    2 +-
>  doc/api/doxy-api.conf.in                      |    2 +-
>  doc/guides/eventdevs/dlb.rst                  |  390 ++
>  doc/guides/eventdevs/dlb2.rst                 |   75 +-
>  doc/guides/eventdevs/index.rst                |    2 +-
>  doc/guides/rel_notes/release_21_05.rst        |    5 +
>  drivers/event/{dlb2 => dlb}/dlb2.c            |  455 +-
>  drivers/event/{dlb2 => dlb}/dlb2_iface.c      |    0
>  drivers/event/{dlb2 => dlb}/dlb2_iface.h      |    0
>  drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |    0
>  drivers/event/{dlb2 => dlb}/dlb2_log.h        |    0
>  drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  163 +-
>  drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |    8 +-
>  drivers/event/{dlb2 => dlb}/dlb2_user.h       |   27 +-
>  drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |   70 +-
>  drivers/event/{dlb2 => dlb}/meson.build       |    4 +-
>  .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  102 +-
>  .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |    3 +
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |    0
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |    0
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |    0
>  drivers/event/dlb/pf/base/dlb2_regs.h         | 4412 +++++++++++++++++
>  .../{dlb2 => dlb}/pf/base/dlb2_resource.c     | 3278 ++++++------
>  .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |   28 +-
>  drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |   37 +-
>  drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |    0
>  drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |   62 +-
>  .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |    6 +-
>  .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      |   12 +-
>  drivers/event/{dlb2 => dlb}/version.map       |    2 +-
>  drivers/event/dlb2/pf/base/dlb2_mbox.h        |  596 ---
>  drivers/event/dlb2/pf/base/dlb2_regs.h        | 2527 ----------
>  drivers/event/meson.build                     |    2 +-
>  36 files changed, 7270 insertions(+), 5023 deletions(-)
>  create mode 100644 doc/guides/eventdevs/dlb.rst
>  rename drivers/event/{dlb2 => dlb}/dlb2.c (90%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (79%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_user.h (97%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (94%)
>  rename drivers/event/{dlb2 => dlb}/meson.build (89%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (81%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (99%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
>  create mode 100644 drivers/event/dlb/pf/base/dlb2_regs.h
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (68%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (99%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (95%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (92%)
>  rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
>  rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
>  rename drivers/event/{dlb2 => dlb}/version.map (60%)
>  delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
>  delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h
>
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init Timothy McDaniel
@ 2021-04-03 10:18       ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03 10:18 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:07 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This commit adds support for DLB v2.5 probe-time hardware init,
> and sets up a framework for incorporating the remaining
> changes required to support DLB v2.5.
>
> DLB v2.0 and DLB v2.5 are similar in many respects, but their
> register offsets and definitions are different. As a result of these
> differences, the low level hardware functions must take the device
> version into consideration. This requires that the hardware version be
> passed to many of the low level functions, so that the PMD can
> take the appropriate action based on the device version.
>
> To ease the transition and keep the individual patches small, three
> temporary files are added in this commit. These files have "new"
> in their names.  The files with "new" contain changes specific to a
> consolidated PMD that supports both DLB v2.0 and DLB 2.5. Their sister
> files of the same name (minus "new") contain the old DLB v2.0 specific
> code. The intent is to remove code from the original files as that code
> is ported to the combined DLB 2.0/2.5 PMD model and added to the "new"
> files in a series of commits. At end of the patch series, the old files
> will be empty and the "new" files will have the logic needed
> to implement a single PMD that supports both DLB v2.0 and DLB v2.5.
> At that time, the original DLB v2.0 specific files will be deleted,
> and the "new" files will be renamed and replace them.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---
>  drivers/event/dlb2/dlb2_priv.h                |    5 +
>  drivers/event/dlb2/meson.build                |    1 +
>  .../event/dlb2/pf/base/dlb2_hw_types_new.h    |  362 ++
>  drivers/event/dlb2/pf/base/dlb2_osdep.h       |    4 +
>  drivers/event/dlb2/pf/base/dlb2_regs_new.h    | 4412 +++++++++++++++++
>  drivers/event/dlb2/pf/base/dlb2_resource.c    |  180 +-
>  drivers/event/dlb2/pf/base/dlb2_resource.h    |   36 -
>  .../event/dlb2/pf/base/dlb2_resource_new.c    |  259 +
>  .../event/dlb2/pf/base/dlb2_resource_new.h    |   73 +
>  drivers/event/dlb2/pf/dlb2_main.c             |   41 +-
>  drivers/event/dlb2/pf/dlb2_main.h             |    4 +
>  drivers/event/dlb2/pf/dlb2_pf.c               |    6 +-
>  12 files changed, 5153 insertions(+), 230 deletions(-)
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h
>
> diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
> index 1cd78ad94..f3a9fe0aa 100644
> --- a/drivers/event/dlb2/dlb2_priv.h
> +++ b/drivers/event/dlb2/dlb2_priv.h
> @@ -114,6 +114,11 @@
>  #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
>  #define DLB2_TO_EV_PRIO(x) ((x) << 5)
>
> +enum dlb2_hw_ver {
> +       DLB2_HW_VER_2,
> +       DLB2_HW_VER_2_5,
> +};
> +
>  enum dlb2_hw_port_types {
>         DLB2_LDB_PORT,
>         DLB2_DIR_PORT,
> diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
> index f22638b8e..bded07e06 100644
> --- a/drivers/event/dlb2/meson.build
> +++ b/drivers/event/dlb2/meson.build
> @@ -14,6 +14,7 @@ sources = files('dlb2.c',
>                 'pf/dlb2_main.c',
>                 'pf/dlb2_pf.c',
>                 'pf/base/dlb2_resource.c',
> +               'pf/base/dlb2_resource_new.c',
>                 'rte_pmd_dlb2.c',
>                 'dlb2_selftest.c'
>  )
> diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
> new file mode 100644
> index 000000000..d58aa94ad
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
> @@ -0,0 +1,362 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_HW_TYPES_NEW_H
> +#define __DLB2_HW_TYPES_NEW_H
> +
> +#include "../../dlb2_priv.h"
> +#include "dlb2_user.h"
> +
> +#include "dlb2_osdep_list.h"
> +#include "dlb2_osdep_types.h"
> +#include "dlb2_regs_new.h"
> +
> +#define DLB2_BITS_SET(x, val, mask)    (x = ((x) & ~(mask))     \
> +                                | (((val) << (mask##_LOC)) & (mask)))
> +#define DLB2_BITS_CLR(x, mask) (x &= ~(mask))
> +#define DLB2_BIT_SET(x, mask)  ((x) |= (mask))
> +#define DLB2_BITS_GET(x, mask) (((x) & (mask)) >> (mask##_LOC))
> +
> +#define DLB2_MAX_NUM_VDEVS                     16
> +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS    2
> +#define DLB2_NUM_ARB_WEIGHTS                   8
> +#define DLB2_MAX_NUM_AQED_ENTRIES              2048
> +#define DLB2_MAX_WEIGHT                                255
> +#define DLB2_NUM_COS_DOMAINS                   4
> +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS    2
> +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES     5
> +#define DLB2_MAX_CQ_COMP_CHECK_LOOPS           409600
> +#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS         (32 * 64 * 1024 * (800 / 30))
> +
> +#define DLB2_FUNC_BAR                          0
> +#define DLB2_CSR_BAR                           2
> +
> +#ifdef FPGA
> +#define DLB2_HZ                                        2000000
> +#else
> +#define DLB2_HZ                                        800000000
> +#endif

The removal of the compile-time FPGA constant is not addressed.


From here (see below)

> +       (ver == DLB2_HW_V2 ? \
> +        DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
> +        DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)

> +
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT     0x00007FFF
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V 0x00008000
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0     0xFFFF0000
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC 0
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC             15
> +#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC 16

To here (see above). Please move this autogenerated register definition
to a separate patch, such as "event/dlb2: add HW register definition" or
similar.
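For readers following along, the masks and *_LOC offsets quoted above are
consumed through the DLB2_BITS_SET/DLB2_BITS_GET macros from
dlb2_hw_types_new.h. Below is a small self-contained sketch; the macro and
register definitions are copied from the quoted hunks, while the values
written and the printf usage are illustrative only (the real driver writes
the packed word with DLB2_CSR_WR):

#include <stdio.h>
#include <stdint.h>

#define DLB2_BITS_SET(x, val, mask) (x = ((x) & ~(mask)) \
			| (((val) << (mask##_LOC)) & (mask)))
#define DLB2_BITS_GET(x, mask) (((x) & (mask)) >> (mask##_LOC))

#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT     0x00007FFF
#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC 0
#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V         0x00008000
#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC     15

int main(void)
{
	uint32_t reg = 0;

	/* Pack an example 0x100 limit and the valid bit into one word. */
	DLB2_BITS_SET(reg, 0x100, DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT);
	DLB2_BITS_SET(reg, 1, DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V);

	/* Prints: limit=0x100 valid=1 */
	printf("limit=0x%x valid=%u\n",
	       DLB2_BITS_GET(reg, DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT),
	       DLB2_BITS_GET(reg, DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V));
	return 0;
}
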

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain Timothy McDaniel
@ 2021-04-03 10:22   ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03 10:22 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Jerin Jacob, Van Haaren, Harry, Ray Kinsella,
	Neil Horman, Nikhil Rao, Erik Gabriel Carrillo, Gujjar,
	Abhinandan S, Pavan Nikhilesh, Hemant Agrawal,
	Mattias Rönnblom, Peter Mccarthy

On Wed, Mar 17, 2021 at 3:50 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Update domain creation logic to account for DLB v2.5
> credit scheme, new register map, and new register access
> macros.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

> ---
>  drivers/event/dlb2/dlb2_user.h                |  13 +-
>  drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
>  .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++

Please use git mv foo bar to avoid creating such a big diff.
Wherever possible, use git mv to reduce the diff in the patch.




>  3 files changed, 707 insertions(+), 647 deletions(-)
>
> diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
> index b7d125dec..9760e9bda 100644
> --- a/drivers/event/dlb2/dlb2_user.h
> +++ b/drivers/event/dlb2/dlb2_user.h
> @@ -18,6 +18,7 @@ enum dlb2_error {
>         DLB2_ST_LDB_QUEUES_UNAVAILABLE,
>         DLB2_ST_LDB_CREDITS_UNAVAILABLE,
>         DLB2_ST_DIR_CREDITS_UNAVAILABLE,
> +       DLB2_ST_CREDITS_UNAVAILABLE,
>         DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
>         DLB2_ST_INVALID_DOMAIN_ID,
>         DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
> @@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
>         "DLB2_ST_LDB_QUEUES_UNAVAILABLE",
>         "DLB2_ST_LDB_CREDITS_UNAVAILABLE",
>         "DLB2_ST_DIR_CREDITS_UNAVAILABLE",
> +       "DLB2_ST_CREDITS_UNAVAILABLE",
>         "DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
>         "DLB2_ST_INVALID_DOMAIN_ID",
>         "DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
> @@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
>         __u32 num_dir_ports;
>         __u32 num_atomic_inflights;
>         __u32 num_hist_list_entries;
> -       __u32 num_ldb_credits;
> -       __u32 num_dir_credits;
> +       union {
> +               struct {
> +                       __u32 num_ldb_credits;
> +                       __u32 num_dir_credits;
> +               };
> +               struct {
> +                       __u32 num_credits;
> +               };
> +       };
>         __u8 cos_strict;
>         __u8 padding1[3];
>  };
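
As an aside on the union just quoted: a DLB v2.0 caller fills the two split
credit fields, while a DLB v2.5 caller fills the single combined field. A
reduced illustration, not code from the patch (the struct is trimmed to the
credit fields, "is_v2_5" stands in for the driver's real version check, and
the credit counts are arbitrary example values):

#include <stdint.h>
#include <stdbool.h>

/* Illustrative reduction of dlb2_create_sched_domain_args. */
struct credit_args {
	union {
		struct {
			uint32_t num_ldb_credits; /* v2.0: load-balanced pool */
			uint32_t num_dir_credits; /* v2.0: directed pool */
		};
		struct {
			uint32_t num_credits;     /* v2.5: single combined pool */
		};
	};
};

void fill_credit_args(struct credit_args *args, bool is_v2_5)
{
	if (is_v2_5) {
		args->num_credits = 8192;      /* one pool shared by LDB and DIR */
	} else {
		args->num_ldb_credits = 4096;  /* separate pools, as on v2.0 */
		args->num_dir_credits = 1024;
	}
}
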
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
> index 5b8723aaf..5d296f725 100644
> --- a/drivers/event/dlb2/pf/base/dlb2_resource.c
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
> @@ -33,21 +33,6 @@
>  #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
>         DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
>
> -static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
> -{
> -       int i;
> -
> -       dlb2_list_init_head(&domain->used_ldb_queues);
> -       dlb2_list_init_head(&domain->used_dir_pq_pairs);
> -       dlb2_list_init_head(&domain->avail_ldb_queues);
> -       dlb2_list_init_head(&domain->avail_dir_pq_pairs);
> -
> -       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> -               dlb2_list_init_head(&domain->used_ldb_ports[i]);
> -       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> -               dlb2_list_init_head(&domain->avail_ldb_ports[i]);
> -}
> -
>  void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
>  {
>         union dlb2_chp_cfg_chp_csr_ctrl r0;
> @@ -70,636 +55,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
>         DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
>  }
>
> -static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
> -                                         struct dlb2_hw_domain *domain)
> -{
> -       union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
> -       union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
> -
> -       r0.field.count = domain->num_ldb_credits;
> -
> -       DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
> -
> -       r1.field.count = domain->num_dir_credits;
> -
> -       DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
> -}
> -
> -static struct dlb2_ldb_port *
> -dlb2_get_next_ldb_port(struct dlb2_hw *hw,
> -                      struct dlb2_function_resources *rsrcs,
> -                      u32 domain_id,
> -                      u32 cos_id)
> -{
> -       struct dlb2_list_entry *iter;
> -       struct dlb2_ldb_port *port;
> -       RTE_SET_USED(iter);
> -       /*
> -        * To reduce the odds of consecutive load-balanced ports mapping to the
> -        * same queue(s), the driver attempts to allocate ports whose neighbors
> -        * are owned by a different domain.
> -        */
> -       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> -               u32 next, prev;
> -               u32 phys_id;
> -
> -               phys_id = port->id.phys_id;
> -               next = phys_id + 1;
> -               prev = phys_id - 1;
> -
> -               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> -                       next = 0;
> -               if (phys_id == 0)
> -                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> -
> -               if (!hw->rsrcs.ldb_ports[next].owned ||
> -                   hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
> -                       continue;
> -
> -               if (!hw->rsrcs.ldb_ports[prev].owned ||
> -                   hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
> -                       continue;
> -
> -               return port;
> -       }
> -
> -       /*
> -        * Failing that, the driver looks for a port with one neighbor owned by
> -        * a different domain and the other unallocated.
> -        */
> -       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> -               u32 next, prev;
> -               u32 phys_id;
> -
> -               phys_id = port->id.phys_id;
> -               next = phys_id + 1;
> -               prev = phys_id - 1;
> -
> -               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> -                       next = 0;
> -               if (phys_id == 0)
> -                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> -
> -               if (!hw->rsrcs.ldb_ports[prev].owned &&
> -                   hw->rsrcs.ldb_ports[next].owned &&
> -                   hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
> -                       return port;
> -
> -               if (!hw->rsrcs.ldb_ports[next].owned &&
> -                   hw->rsrcs.ldb_ports[prev].owned &&
> -                   hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
> -                       return port;
> -       }
> -
> -       /*
> -        * Failing that, the driver looks for a port with both neighbors
> -        * unallocated.
> -        */
> -       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> -               u32 next, prev;
> -               u32 phys_id;
> -
> -               phys_id = port->id.phys_id;
> -               next = phys_id + 1;
> -               prev = phys_id - 1;
> -
> -               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> -                       next = 0;
> -               if (phys_id == 0)
> -                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> -
> -               if (!hw->rsrcs.ldb_ports[prev].owned &&
> -                   !hw->rsrcs.ldb_ports[next].owned)
> -                       return port;
> -       }
> -
> -       /* If all else fails, the driver returns the next available port. */
> -       return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
> -                                  typeof(*port));
> -}
> -
> -static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
> -                                  struct dlb2_function_resources *rsrcs,
> -                                  struct dlb2_hw_domain *domain,
> -                                  u32 num_ports,
> -                                  u32 cos_id,
> -                                  struct dlb2_cmd_response *resp)
> -{
> -       unsigned int i;
> -
> -       if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
> -               resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       for (i = 0; i < num_ports; i++) {
> -               struct dlb2_ldb_port *port;
> -
> -               port = dlb2_get_next_ldb_port(hw, rsrcs,
> -                                             domain->id.phys_id, cos_id);
> -               if (port == NULL) {
> -                       DLB2_HW_ERR(hw,
> -                                   "[%s()] Internal error: domain validation failed\n",
> -                                   __func__);
> -                       return -EFAULT;
> -               }
> -
> -               dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
> -                             &port->func_list);
> -
> -               port->domain_id = domain->id;
> -               port->owned = true;
> -
> -               dlb2_list_add(&domain->avail_ldb_ports[cos_id],
> -                             &port->domain_list);
> -       }
> -
> -       rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
> -
> -       return 0;
> -}
> -
> -static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
> -                                struct dlb2_function_resources *rsrcs,
> -                                struct dlb2_hw_domain *domain,
> -                                struct dlb2_create_sched_domain_args *args,
> -                                struct dlb2_cmd_response *resp)
> -{
> -       unsigned int i, j;
> -       int ret;
> -
> -       if (args->cos_strict) {
> -               for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> -                       u32 num = args->num_cos_ldb_ports[i];
> -
> -                       /* Allocate ports from specific classes-of-service */
> -                       ret = __dlb2_attach_ldb_ports(hw,
> -                                                     rsrcs,
> -                                                     domain,
> -                                                     num,
> -                                                     i,
> -                                                     resp);
> -                       if (ret)
> -                               return ret;
> -               }
> -       } else {
> -               unsigned int k;
> -               u32 cos_id;
> -
> -               /*
> -                * Attempt to allocate from specific class-of-service, but
> -                * fallback to the other classes if that fails.
> -                */
> -               for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> -                       for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
> -                               for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
> -                                       cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
> -
> -                                       ret = __dlb2_attach_ldb_ports(hw,
> -                                                                     rsrcs,
> -                                                                     domain,
> -                                                                     1,
> -                                                                     cos_id,
> -                                                                     resp);
> -                                       if (ret == 0)
> -                                               break;
> -                               }
> -
> -                               if (ret < 0)
> -                                       return ret;
> -                       }
> -               }
> -       }
> -
> -       /* Allocate num_ldb_ports from any class-of-service */
> -       for (i = 0; i < args->num_ldb_ports; i++) {
> -               for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
> -                       ret = __dlb2_attach_ldb_ports(hw,
> -                                                     rsrcs,
> -                                                     domain,
> -                                                     1,
> -                                                     j,
> -                                                     resp);
> -                       if (ret == 0)
> -                               break;
> -               }
> -
> -               if (ret < 0)
> -                       return ret;
> -       }
> -
> -       return 0;
> -}
> -
> -static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
> -                                struct dlb2_function_resources *rsrcs,
> -                                struct dlb2_hw_domain *domain,
> -                                u32 num_ports,
> -                                struct dlb2_cmd_response *resp)
> -{
> -       unsigned int i;
> -
> -       if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
> -               resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       for (i = 0; i < num_ports; i++) {
> -               struct dlb2_dir_pq_pair *port;
> -
> -               port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
> -                                          typeof(*port));
> -               if (port == NULL) {
> -                       DLB2_HW_ERR(hw,
> -                                   "[%s()] Internal error: domain validation failed\n",
> -                                   __func__);
> -                       return -EFAULT;
> -               }
> -
> -               dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> -
> -               port->domain_id = domain->id;
> -               port->owned = true;
> -
> -               dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
> -       }
> -
> -       rsrcs->num_avail_dir_pq_pairs -= num_ports;
> -
> -       return 0;
> -}
> -
> -static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
> -                                  struct dlb2_hw_domain *domain,
> -                                  u32 num_credits,
> -                                  struct dlb2_cmd_response *resp)
> -{
> -       if (rsrcs->num_avail_qed_entries < num_credits) {
> -               resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       rsrcs->num_avail_qed_entries -= num_credits;
> -       domain->num_ldb_credits += num_credits;
> -       return 0;
> -}
> -
> -static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
> -                                  struct dlb2_hw_domain *domain,
> -                                  u32 num_credits,
> -                                  struct dlb2_cmd_response *resp)
> -{
> -       if (rsrcs->num_avail_dqed_entries < num_credits) {
> -               resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       rsrcs->num_avail_dqed_entries -= num_credits;
> -       domain->num_dir_credits += num_credits;
> -       return 0;
> -}
> -
> -static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
> -                                       struct dlb2_hw_domain *domain,
> -                                       u32 num_atomic_inflights,
> -                                       struct dlb2_cmd_response *resp)
> -{
> -       if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
> -               resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
> -       domain->num_avail_aqed_entries += num_atomic_inflights;
> -       return 0;
> -}
> -
> -static int
> -dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
> -                                    struct dlb2_hw_domain *domain,
> -                                    u32 num_hist_list_entries,
> -                                    struct dlb2_cmd_response *resp)
> -{
> -       struct dlb2_bitmap *bitmap;
> -       int base;
> -
> -       if (num_hist_list_entries) {
> -               bitmap = rsrcs->avail_hist_list_entries;
> -
> -               base = dlb2_bitmap_find_set_bit_range(bitmap,
> -                                                     num_hist_list_entries);
> -               if (base < 0)
> -                       goto error;
> -
> -               domain->total_hist_list_entries = num_hist_list_entries;
> -               domain->avail_hist_list_entries = num_hist_list_entries;
> -               domain->hist_list_entry_base = base;
> -               domain->hist_list_entry_offset = 0;
> -
> -               dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
> -       }
> -       return 0;
> -
> -error:
> -       resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
> -       return -EINVAL;
> -}
> -
> -static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
> -                                 struct dlb2_function_resources *rsrcs,
> -                                 struct dlb2_hw_domain *domain,
> -                                 u32 num_queues,
> -                                 struct dlb2_cmd_response *resp)
> -{
> -       unsigned int i;
> -
> -       if (rsrcs->num_avail_ldb_queues < num_queues) {
> -               resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       for (i = 0; i < num_queues; i++) {
> -               struct dlb2_ldb_queue *queue;
> -
> -               queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
> -                                           typeof(*queue));
> -               if (queue == NULL) {
> -                       DLB2_HW_ERR(hw,
> -                                   "[%s()] Internal error: domain validation failed\n",
> -                                   __func__);
> -                       return -EFAULT;
> -               }
> -
> -               dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
> -
> -               queue->domain_id = domain->id;
> -               queue->owned = true;
> -
> -               dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
> -       }
> -
> -       rsrcs->num_avail_ldb_queues -= num_queues;
> -
> -       return 0;
> -}
> -
> -static int
> -dlb2_domain_attach_resources(struct dlb2_hw *hw,
> -                            struct dlb2_function_resources *rsrcs,
> -                            struct dlb2_hw_domain *domain,
> -                            struct dlb2_create_sched_domain_args *args,
> -                            struct dlb2_cmd_response *resp)
> -{
> -       int ret;
> -
> -       ret = dlb2_attach_ldb_queues(hw,
> -                                    rsrcs,
> -                                    domain,
> -                                    args->num_ldb_queues,
> -                                    resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_ldb_ports(hw,
> -                                   rsrcs,
> -                                   domain,
> -                                   args,
> -                                   resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_dir_ports(hw,
> -                                   rsrcs,
> -                                   domain,
> -                                   args->num_dir_ports,
> -                                   resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_ldb_credits(rsrcs,
> -                                     domain,
> -                                     args->num_ldb_credits,
> -                                     resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_dir_credits(rsrcs,
> -                                     domain,
> -                                     args->num_dir_credits,
> -                                     resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_domain_hist_list_entries(rsrcs,
> -                                                  domain,
> -                                                  args->num_hist_list_entries,
> -                                                  resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       ret = dlb2_attach_atomic_inflights(rsrcs,
> -                                          domain,
> -                                          args->num_atomic_inflights,
> -                                          resp);
> -       if (ret < 0)
> -               return ret;
> -
> -       dlb2_configure_domain_credits(hw, domain);
> -
> -       domain->configured = true;
> -
> -       domain->started = false;
> -
> -       rsrcs->num_avail_domains--;
> -
> -       return 0;
> -}
> -
> -static int
> -dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
> -                                 struct dlb2_create_sched_domain_args *args,
> -                                 struct dlb2_cmd_response *resp)
> -{
> -       u32 num_avail_ldb_ports, req_ldb_ports;
> -       struct dlb2_bitmap *avail_hl_entries;
> -       unsigned int max_contig_hl_range;
> -       int i;
> -
> -       avail_hl_entries = rsrcs->avail_hist_list_entries;
> -
> -       max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
> -
> -       num_avail_ldb_ports = 0;
> -       req_ldb_ports = 0;
> -       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> -               num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
> -
> -               req_ldb_ports += args->num_cos_ldb_ports[i];
> -       }
> -
> -       req_ldb_ports += args->num_ldb_ports;
> -
> -       if (rsrcs->num_avail_domains < 1) {
> -               resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
> -               resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (req_ldb_ports > num_avail_ldb_ports) {
> -               resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
> -               if (args->num_cos_ldb_ports[i] >
> -                   rsrcs->num_avail_ldb_ports[i]) {
> -                       resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> -                       return -EINVAL;
> -               }
> -       }
> -
> -       if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
> -               resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
> -               return -EINVAL;
> -       }
> -
> -       if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
> -               resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
> -               resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
> -               resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
> -               resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       if (max_contig_hl_range < args->num_hist_list_entries) {
> -               resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       return 0;
> -}
> -
> -static void
> -dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
> -                                 struct dlb2_create_sched_domain_args *args,
> -                                 bool vdev_req,
> -                                 unsigned int vdev_id)
> -{
> -       DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
> -       if (vdev_req)
> -               DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
> -                   args->num_ldb_queues);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
> -                   args->num_ldb_ports);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
> -                   args->num_cos_ldb_ports[0]);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
> -                   args->num_cos_ldb_ports[1]);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
> -                   args->num_cos_ldb_ports[1]);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
> -                   args->num_cos_ldb_ports[1]);
> -       DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
> -                   args->cos_strict);
> -       DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
> -                   args->num_dir_ports);
> -       DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
> -                   args->num_atomic_inflights);
> -       DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
> -                   args->num_hist_list_entries);
> -       DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
> -                   args->num_ldb_credits);
> -       DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
> -                   args->num_dir_credits);
> -}
> -
> -/**
> - * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
> - *     domain and its resources.
> - * @hw:        Contains the current state of the DLB2 hardware.
> - * @args: User-provided arguments.
> - * @resp: Response to user.
> - * @vdev_req: Request came from a virtual device.
> - * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
> - *
> - * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
> - * satisfy a request, resp->status will be set accordingly.
> - */
> -int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
> -                               struct dlb2_create_sched_domain_args *args,
> -                               struct dlb2_cmd_response *resp,
> -                               bool vdev_req,
> -                               unsigned int vdev_id)
> -{
> -       struct dlb2_function_resources *rsrcs;
> -       struct dlb2_hw_domain *domain;
> -       int ret;
> -
> -       rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
> -
> -       dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
> -
> -       /*
> -        * Verify that hardware resources are available before attempting to
> -        * satisfy the request. This simplifies the error unwinding code.
> -        */
> -       ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
> -       if (ret)
> -               return ret;
> -
> -       domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
> -       if (domain == NULL) {
> -               DLB2_HW_ERR(hw,
> -                           "[%s():%d] Internal error: no available domains\n",
> -                           __func__, __LINE__);
> -               return -EFAULT;
> -       }
> -
> -       if (domain->configured) {
> -               DLB2_HW_ERR(hw,
> -                           "[%s()] Internal error: avail_domains contains configured domains.\n",
> -                           __func__);
> -               return -EFAULT;
> -       }
> -
> -       dlb2_init_domain_rsrc_lists(domain);
> -
> -       ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
> -       if (ret < 0) {
> -               DLB2_HW_ERR(hw,
> -                           "[%s()] Internal error: failed to verify args.\n",
> -                           __func__);
> -
> -               return ret;
> -       }
> -
> -       dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
> -
> -       dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
> -
> -       resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
> -       resp->status = 0;
> -
> -       return 0;
> -}
> -
>  /*
>   * The PF driver cannot assume that a register write will affect subsequent HCW
>   * writes. To ensure a write completes, the driver must read back a CSR. This
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> index b0fd37a55..4d679a0a9 100644
> --- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> @@ -335,3 +335,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
>         }
>         return 0;
>  }
> +
> +static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
> +                                              struct dlb2_hw_domain *domain)
> +{
> +       u32 reg = 0;
> +
> +       DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
> +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
> +}
> +
> +static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
> +                                            struct dlb2_hw_domain *domain)
> +{
> +       u32 reg = 0;
> +
> +       DLB2_BITS_SET(reg, domain->num_ldb_credits,
> +                     DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
> +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
> +
> +       reg = 0;
> +       DLB2_BITS_SET(reg, domain->num_dir_credits,
> +                     DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
> +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
> +}
> +
> +static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
> +                                         struct dlb2_hw_domain *domain)
> +{
> +       if (hw->ver == DLB2_HW_V2)
> +               dlb2_configure_domain_credits_v2(hw, domain);
> +       else
> +               dlb2_configure_domain_credits_v2_5(hw, domain);
> +}
> +
> +static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
> +                              struct dlb2_hw_domain *domain,
> +                              u32 num_credits,
> +                              struct dlb2_cmd_response *resp)
> +{
> +       if (rsrcs->num_avail_entries < num_credits) {
> +               resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       rsrcs->num_avail_entries -= num_credits;
> +       domain->num_credits += num_credits;
> +       return 0;
> +}
> +
> +static struct dlb2_ldb_port *
> +dlb2_get_next_ldb_port(struct dlb2_hw *hw,
> +                      struct dlb2_function_resources *rsrcs,
> +                      u32 domain_id,
> +                      u32 cos_id)
> +{
> +       struct dlb2_list_entry *iter;
> +       struct dlb2_ldb_port *port;
> +       RTE_SET_USED(iter);
> +
> +       /*
> +        * To reduce the odds of consecutive load-balanced ports mapping to the
> +        * same queue(s), the driver attempts to allocate ports whose neighbors
> +        * are owned by a different domain.
> +        */
> +       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> +               u32 next, prev;
> +               u32 phys_id;
> +
> +               phys_id = port->id.phys_id;
> +               next = phys_id + 1;
> +               prev = phys_id - 1;
> +
> +               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> +                       next = 0;
> +               if (phys_id == 0)
> +                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> +
> +               if (!hw->rsrcs.ldb_ports[next].owned ||
> +                   hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
> +                       continue;
> +
> +               if (!hw->rsrcs.ldb_ports[prev].owned ||
> +                   hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
> +                       continue;
> +
> +               return port;
> +       }
> +
> +       /*
> +        * Failing that, the driver looks for a port with one neighbor owned by
> +        * a different domain and the other unallocated.
> +        */
> +       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> +               u32 next, prev;
> +               u32 phys_id;
> +
> +               phys_id = port->id.phys_id;
> +               next = phys_id + 1;
> +               prev = phys_id - 1;
> +
> +               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> +                       next = 0;
> +               if (phys_id == 0)
> +                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> +
> +               if (!hw->rsrcs.ldb_ports[prev].owned &&
> +                   hw->rsrcs.ldb_ports[next].owned &&
> +                   hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
> +                       return port;
> +
> +               if (!hw->rsrcs.ldb_ports[next].owned &&
> +                   hw->rsrcs.ldb_ports[prev].owned &&
> +                   hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
> +                       return port;
> +       }
> +
> +       /*
> +        * Failing that, the driver looks for a port with both neighbors
> +        * unallocated.
> +        */
> +       DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
> +               u32 next, prev;
> +               u32 phys_id;
> +
> +               phys_id = port->id.phys_id;
> +               next = phys_id + 1;
> +               prev = phys_id - 1;
> +
> +               if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
> +                       next = 0;
> +               if (phys_id == 0)
> +                       prev = DLB2_MAX_NUM_LDB_PORTS - 1;
> +
> +               if (!hw->rsrcs.ldb_ports[prev].owned &&
> +                   !hw->rsrcs.ldb_ports[next].owned)
> +                       return port;
> +       }
> +
> +       /* If all else fails, the driver returns the next available port. */
> +       return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
> +                                  typeof(*port));
> +}
> +
> +static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
> +                                  struct dlb2_function_resources *rsrcs,
> +                                  struct dlb2_hw_domain *domain,
> +                                  u32 num_ports,
> +                                  u32 cos_id,
> +                                  struct dlb2_cmd_response *resp)
> +{
> +       unsigned int i;
> +
> +       if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
> +               resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       for (i = 0; i < num_ports; i++) {
> +               struct dlb2_ldb_port *port;
> +
> +               port = dlb2_get_next_ldb_port(hw, rsrcs,
> +                                             domain->id.phys_id, cos_id);
> +               if (port == NULL) {
> +                       DLB2_HW_ERR(hw,
> +                                   "[%s()] Internal error: domain validation failed\n",
> +                                   __func__);
> +                       return -EFAULT;
> +               }
> +
> +               dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
> +                             &port->func_list);
> +
> +               port->domain_id = domain->id;
> +               port->owned = true;
> +
> +               dlb2_list_add(&domain->avail_ldb_ports[cos_id],
> +                             &port->domain_list);
> +       }
> +
> +       rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
> +
> +       return 0;
> +}
> +
> +
> +static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
> +                                struct dlb2_function_resources *rsrcs,
> +                                struct dlb2_hw_domain *domain,
> +                                struct dlb2_create_sched_domain_args *args,
> +                                struct dlb2_cmd_response *resp)
> +{
> +       unsigned int i, j;
> +       int ret;
> +
> +       if (args->cos_strict) {
> +               for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> +                       u32 num = args->num_cos_ldb_ports[i];
> +
> +                       /* Allocate ports from specific classes-of-service */
> +                       ret = __dlb2_attach_ldb_ports(hw,
> +                                                     rsrcs,
> +                                                     domain,
> +                                                     num,
> +                                                     i,
> +                                                     resp);
> +                       if (ret)
> +                               return ret;
> +               }
> +       } else {
> +               unsigned int k;
> +               u32 cos_id;
> +
> +               /*
> +                * Attempt to allocate from specific class-of-service, but
> +                * fallback to the other classes if that fails.
> +                */
> +               for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> +                       for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
> +                               for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
> +                                       cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
> +
> +                                       ret = __dlb2_attach_ldb_ports(hw,
> +                                                                     rsrcs,
> +                                                                     domain,
> +                                                                     1,
> +                                                                     cos_id,
> +                                                                     resp);
> +                                       if (ret == 0)
> +                                               break;
> +                               }
> +
> +                               if (ret)
> +                                       return ret;
> +                       }
> +               }
> +       }
> +
> +       /* Allocate num_ldb_ports from any class-of-service */
> +       for (i = 0; i < args->num_ldb_ports; i++) {
> +               for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
> +                       ret = __dlb2_attach_ldb_ports(hw,
> +                                                     rsrcs,
> +                                                     domain,
> +                                                     1,
> +                                                     j,
> +                                                     resp);
> +                       if (ret == 0)
> +                               break;
> +               }
> +
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
> +                                struct dlb2_function_resources *rsrcs,
> +                                struct dlb2_hw_domain *domain,
> +                                u32 num_ports,
> +                                struct dlb2_cmd_response *resp)
> +{
> +       unsigned int i;
> +
> +       if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
> +               resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       for (i = 0; i < num_ports; i++) {
> +               struct dlb2_dir_pq_pair *port;
> +
> +               port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
> +                                          typeof(*port));
> +               if (port == NULL) {
> +                       DLB2_HW_ERR(hw,
> +                                   "[%s()] Internal error: domain validation failed\n",
> +                                   __func__);
> +                       return -EFAULT;
> +               }
> +
> +               dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> +
> +               port->domain_id = domain->id;
> +               port->owned = true;
> +
> +               dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
> +       }
> +
> +       rsrcs->num_avail_dir_pq_pairs -= num_ports;
> +
> +       return 0;
> +}
> +
> +static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
> +                                  struct dlb2_hw_domain *domain,
> +                                  u32 num_credits,
> +                                  struct dlb2_cmd_response *resp)
> +{
> +       if (rsrcs->num_avail_qed_entries < num_credits) {
> +               resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       rsrcs->num_avail_qed_entries -= num_credits;
> +       domain->num_ldb_credits += num_credits;
> +       return 0;
> +}
> +
> +static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
> +                                  struct dlb2_hw_domain *domain,
> +                                  u32 num_credits,
> +                                  struct dlb2_cmd_response *resp)
> +{
> +       if (rsrcs->num_avail_dqed_entries < num_credits) {
> +               resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       rsrcs->num_avail_dqed_entries -= num_credits;
> +       domain->num_dir_credits += num_credits;
> +       return 0;
> +}
> +
> +
> +static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
> +                                       struct dlb2_hw_domain *domain,
> +                                       u32 num_atomic_inflights,
> +                                       struct dlb2_cmd_response *resp)
> +{
> +       if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
> +               resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
> +       domain->num_avail_aqed_entries += num_atomic_inflights;
> +       return 0;
> +}
> +
> +static int
> +dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
> +                                    struct dlb2_hw_domain *domain,
> +                                    u32 num_hist_list_entries,
> +                                    struct dlb2_cmd_response *resp)
> +{
> +       struct dlb2_bitmap *bitmap;
> +       int base;
> +
> +       if (num_hist_list_entries) {
> +               bitmap = rsrcs->avail_hist_list_entries;
> +
> +               base = dlb2_bitmap_find_set_bit_range(bitmap,
> +                                                     num_hist_list_entries);
> +               if (base < 0)
> +                       goto error;
> +
> +               domain->total_hist_list_entries = num_hist_list_entries;
> +               domain->avail_hist_list_entries = num_hist_list_entries;
> +               domain->hist_list_entry_base = base;
> +               domain->hist_list_entry_offset = 0;
> +
> +               dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
> +       }
> +       return 0;
> +
> +error:
> +       resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
> +       return -EINVAL;
> +}
> +
> +static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
> +                                 struct dlb2_function_resources *rsrcs,
> +                                 struct dlb2_hw_domain *domain,
> +                                 u32 num_queues,
> +                                 struct dlb2_cmd_response *resp)
> +{
> +       unsigned int i;
> +
> +       if (rsrcs->num_avail_ldb_queues < num_queues) {
> +               resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       for (i = 0; i < num_queues; i++) {
> +               struct dlb2_ldb_queue *queue;
> +
> +               queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
> +                                           typeof(*queue));
> +               if (queue == NULL) {
> +                       DLB2_HW_ERR(hw,
> +                                   "[%s()] Internal error: domain validation failed\n",
> +                                   __func__);
> +                       return -EFAULT;
> +               }
> +
> +               dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
> +
> +               queue->domain_id = domain->id;
> +               queue->owned = true;
> +
> +               dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
> +       }
> +
> +       rsrcs->num_avail_ldb_queues -= num_queues;
> +
> +       return 0;
> +}
> +
> +static int
> +dlb2_domain_attach_resources(struct dlb2_hw *hw,
> +                            struct dlb2_function_resources *rsrcs,
> +                            struct dlb2_hw_domain *domain,
> +                            struct dlb2_create_sched_domain_args *args,
> +                            struct dlb2_cmd_response *resp)
> +{
> +       int ret;
> +
> +       ret = dlb2_attach_ldb_queues(hw,
> +                                    rsrcs,
> +                                    domain,
> +                                    args->num_ldb_queues,
> +                                    resp);
> +       if (ret)
> +               return ret;
> +
> +       ret = dlb2_attach_ldb_ports(hw,
> +                                   rsrcs,
> +                                   domain,
> +                                   args,
> +                                   resp);
> +       if (ret)
> +               return ret;
> +
> +       ret = dlb2_attach_dir_ports(hw,
> +                                   rsrcs,
> +                                   domain,
> +                                   args->num_dir_ports,
> +                                   resp);
> +       if (ret)
> +               return ret;
> +
> +       if (hw->ver == DLB2_HW_V2) {
> +               ret = dlb2_attach_ldb_credits(rsrcs,
> +                                             domain,
> +                                             args->num_ldb_credits,
> +                                             resp);
> +               if (ret)
> +                       return ret;
> +
> +               ret = dlb2_attach_dir_credits(rsrcs,
> +                                             domain,
> +                                             args->num_dir_credits,
> +                                             resp);
> +               if (ret)
> +                       return ret;
> +       } else {  /* DLB 2.5 */
> +               ret = dlb2_attach_credits(rsrcs,
> +                                         domain,
> +                                         args->num_credits,
> +                                         resp);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       ret = dlb2_attach_domain_hist_list_entries(rsrcs,
> +                                                  domain,
> +                                                  args->num_hist_list_entries,
> +                                                  resp);
> +       if (ret)
> +               return ret;
> +
> +       ret = dlb2_attach_atomic_inflights(rsrcs,
> +                                          domain,
> +                                          args->num_atomic_inflights,
> +                                          resp);
> +       if (ret)
> +               return ret;
> +
> +       dlb2_configure_domain_credits(hw, domain);
> +
> +       domain->configured = true;
> +
> +       domain->started = false;
> +
> +       rsrcs->num_avail_domains--;
> +
> +       return 0;
> +}
> +
> +static int
> +dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
> +                                 struct dlb2_create_sched_domain_args *args,
> +                                 struct dlb2_cmd_response *resp,
> +                                 struct dlb2_hw *hw,
> +                                 struct dlb2_hw_domain **out_domain)
> +{
> +       u32 num_avail_ldb_ports, req_ldb_ports;
> +       struct dlb2_bitmap *avail_hl_entries;
> +       unsigned int max_contig_hl_range;
> +       struct dlb2_hw_domain *domain;
> +       int i;
> +
> +       avail_hl_entries = rsrcs->avail_hist_list_entries;
> +
> +       max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
> +
> +       num_avail_ldb_ports = 0;
> +       req_ldb_ports = 0;
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
> +               num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
> +
> +               req_ldb_ports += args->num_cos_ldb_ports[i];
> +       }
> +
> +       req_ldb_ports += args->num_ldb_ports;
> +
> +       if (rsrcs->num_avail_domains < 1) {
> +               resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
> +       if (domain == NULL) {
> +               resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
> +               return -EFAULT;
> +       }
> +
> +       if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
> +               resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       if (req_ldb_ports > num_avail_ldb_ports) {
> +               resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
> +               if (args->num_cos_ldb_ports[i] >
> +                   rsrcs->num_avail_ldb_ports[i]) {
> +                       resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
> +               resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
> +               return -EINVAL;
> +       }
> +
> +       if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
> +               resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +       if (hw->ver == DLB2_HW_V2_5) {
> +               if (rsrcs->num_avail_entries < args->num_credits) {
> +                       resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
> +                       return -EINVAL;
> +               }
> +       } else {
> +               if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
> +                       resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
> +                       return -EINVAL;
> +               }
> +               if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
> +                       resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
> +               resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       if (max_contig_hl_range < args->num_hist_list_entries) {
> +               resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
> +               return -EINVAL;
> +       }
> +
> +       *out_domain = domain;
> +
> +       return 0;
> +}
> +
> +static void
> +dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
> +                                 struct dlb2_create_sched_domain_args *args,
> +                                 bool vdev_req,
> +                                 unsigned int vdev_id)
> +{
> +       DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
> +       if (vdev_req)
> +               DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
> +                   args->num_ldb_queues);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
> +                   args->num_ldb_ports);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
> +                   args->num_cos_ldb_ports[0]);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
> +                   args->num_cos_ldb_ports[1]);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
> +                   args->num_cos_ldb_ports[2]);
> +       DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
> +                   args->num_cos_ldb_ports[3]);
> +       DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
> +                   args->cos_strict);
> +       DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
> +                   args->num_dir_ports);
> +       DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
> +                   args->num_atomic_inflights);
> +       DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
> +                   args->num_hist_list_entries);
> +       if (hw->ver == DLB2_HW_V2) {
> +               DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
> +                           args->num_ldb_credits);
> +               DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
> +                           args->num_dir_credits);
> +       } else {
> +               DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
> +                           args->num_credits);
> +       }
> +}
> +
> +/**
> + * dlb2_hw_create_sched_domain() - create a scheduling domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @args: scheduling domain creation arguments.
> + * @resp: response structure.
> + * @vdev_req: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_req is true, this contains the vdev's ID.
> + *
> + * This function creates a scheduling domain containing the resources specified
> + * in args. The individual resources (queues, ports, credits) can be configured
> + * after creating a scheduling domain.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the domain ID.
> + *
> + * resp->id contains a virtual ID if vdev_req is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, or the requested domain name
> + *         is already in use.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
> +                               struct dlb2_create_sched_domain_args *args,
> +                               struct dlb2_cmd_response *resp,
> +                               bool vdev_req,
> +                               unsigned int vdev_id)
> +{
> +       struct dlb2_function_resources *rsrcs;
> +       struct dlb2_hw_domain *domain;
> +       int ret;
> +
> +       rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
> +
> +       dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
> +
> +       /*
> +        * Verify that hardware resources are available before attempting to
> +        * satisfy the request. This simplifies the error unwinding code.
> +        */
> +       ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
> +       if (ret)
> +               return ret;
> +
> +       dlb2_init_domain_rsrc_lists(domain);
> +
> +       ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
> +       if (ret) {
> +               DLB2_HW_ERR(hw,
> +                           "[%s()] Internal error: failed to verify args.\n",
> +                           __func__);
> +
> +               return ret;
> +       }
> +
> +       dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
> +
> +       dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
> +
> +       resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
> +       resp->status = 0;
> +
> +       return 0;
> +}
> --
> 2.23.0
>
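
A minimal caller sketch for the new entry point above, for reference only.
The resource counts, the "hw" handle, and the wrapper function are
illustrative assumptions, not part of the patch:

static int dlb2_example_create_domain(struct dlb2_hw *hw)
{
	struct dlb2_create_sched_domain_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	/* Request a small domain; remaining fields stay zero-initialized */
	args.num_ldb_queues = 2;
	args.num_ldb_ports = 4;
	args.num_dir_ports = 2;
	args.num_atomic_inflights = 64;
	args.num_hist_list_entries = 128;
	/* v2.5 combined pool; v2.0 uses num_ldb_credits/num_dir_credits */
	args.num_credits = 1024;

	ret = dlb2_hw_create_sched_domain(hw, &args, &resp,
					  false /* vdev_req */, 0);
	if (ret)
		DLB2_HW_ERR(hw, "domain create failed: ret=%d status=%u\n",
			    ret, resp.status);
	else
		DLB2_HW_DBG(hw, "created domain %u\n", resp.id);

	return ret;
}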

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
@ 2021-04-03 10:26       ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03 10:26 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:08 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Updated low level hardware functions to account for new
> register map and hardware access macros.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---
>  drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
>  .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++

For all changes to this file, please take the git rename path to reduce the diff.


>  2 files changed, 201 insertions(+), 213 deletions(-)
>
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
> index 70c52e908..362deadfe 100644
> --- a/drivers/event/dlb2/pf/base/dlb2_resource.c
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
> @@ -1225,219 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
>         return NULL;
>  }
>
> -static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
> -                                    struct dlb2_hw_domain *domain,
> -                                    struct dlb2_dir_pq_pair *queue,
> -                                    struct dlb2_create_dir_queue_args *args,
> -                                    bool vdev_req,
> -                                    unsigned int vdev_id)
> -{
> -       union dlb2_sys_dir_vasqid_v r0 = { {0} };
> -       union dlb2_sys_dir_qid_its r1 = { {0} };
> -       union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
> -       union dlb2_sys_dir_qid_v r5 = { {0} };
> -
> -       unsigned int offs;
> -
> -       /* QID write permissions are turned on when the domain is started */
> -       r0.field.vasqid_v = 0;
> -
> -       offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
> -               queue->id.phys_id;
> -
> -       DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
> -
> -       /* Don't timestamp QEs that pass through this queue */
> -       r1.field.qid_its = 0;
> -
> -       DLB2_CSR_WR(hw,
> -                   DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
> -                   r1.val);
> -
> -       r2.field.thresh = args->depth_threshold;
> -
> -       DLB2_CSR_WR(hw,
> -                   DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
> -                   r2.val);
> -
> -       if (vdev_req) {
> -               union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
> -               union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
> -
> -               offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
> -                       + queue->id.virt_id;
> -
> -               r3.field.vqid_v = 1;
> -
> -               DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
> -
> -               r4.field.qid = queue->id.phys_id;
> -
> -               DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
> -       }
> -
> -       r5.field.qid_v = 1;
> -
> -       DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
> -
> -       queue->queue_configured = true;
> -}
> -
> -static void
> -dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
> -                              u32 domain_id,
> -                              struct dlb2_create_dir_queue_args *args,
> -                              bool vdev_req,
> -                              unsigned int vdev_id)
> -{
> -       DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
> -       if (vdev_req)
> -               DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
> -       DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
> -       DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
> -}
> -
> -static int
> -dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
> -                                 u32 domain_id,
> -                                 struct dlb2_create_dir_queue_args *args,
> -                                 struct dlb2_cmd_response *resp,
> -                                 bool vdev_req,
> -                                 unsigned int vdev_id)
> -{
> -       struct dlb2_hw_domain *domain;
> -
> -       domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
> -
> -       if (domain == NULL) {
> -               resp->status = DLB2_ST_INVALID_DOMAIN_ID;
> -               return -EINVAL;
> -       }
> -
> -       if (!domain->configured) {
> -               resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
> -               return -EINVAL;
> -       }
> -
> -       if (domain->started) {
> -               resp->status = DLB2_ST_DOMAIN_STARTED;
> -               return -EINVAL;
> -       }
> -
> -       /*
> -        * If the user claims the port is already configured, validate the port
> -        * ID, its domain, and whether the port is configured.
> -        */
> -       if (args->port_id != -1) {
> -               struct dlb2_dir_pq_pair *port;
> -
> -               port = dlb2_get_domain_used_dir_pq(hw,
> -                                                  args->port_id,
> -                                                  vdev_req,
> -                                                  domain);
> -
> -               if (port == NULL || port->domain_id.phys_id !=
> -                               domain->id.phys_id || !port->port_configured) {
> -                       resp->status = DLB2_ST_INVALID_PORT_ID;
> -                       return -EINVAL;
> -               }
> -       }
> -
> -       /*
> -        * If the queue's port is not configured, validate that a free
> -        * port-queue pair is available.
> -        */
> -       if (args->port_id == -1 &&
> -           dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
> -               resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
> -               return -EINVAL;
> -       }
> -
> -       return 0;
> -}
> -
> -/**
> - * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
> - * @hw:        Contains the current state of the DLB2 hardware.
> - * @domain_id: Domain ID
> - * @args: User-provided arguments.
> - * @resp: Response to user.
> - * @vdev_req: Request came from a virtual device.
> - * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
> - *
> - * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
> - * satisfy a request, resp->status will be set accordingly.
> - */
> -int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
> -                            u32 domain_id,
> -                            struct dlb2_create_dir_queue_args *args,
> -                            struct dlb2_cmd_response *resp,
> -                            bool vdev_req,
> -                            unsigned int vdev_id)
> -{
> -       struct dlb2_dir_pq_pair *queue;
> -       struct dlb2_hw_domain *domain;
> -       int ret;
> -
> -       dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
> -
> -       /*
> -        * Verify that hardware resources are available before attempting to
> -        * satisfy the request. This simplifies the error unwinding code.
> -        */
> -       ret = dlb2_verify_create_dir_queue_args(hw,
> -                                               domain_id,
> -                                               args,
> -                                               resp,
> -                                               vdev_req,
> -                                               vdev_id);
> -       if (ret)
> -               return ret;
> -
> -       domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
> -       if (domain == NULL) {
> -               DLB2_HW_ERR(hw,
> -                           "[%s():%d] Internal error: domain not found\n",
> -                           __func__, __LINE__);
> -               return -EFAULT;
> -       }
> -
> -       if (args->port_id != -1)
> -               queue = dlb2_get_domain_used_dir_pq(hw,
> -                                                   args->port_id,
> -                                                   vdev_req,
> -                                                   domain);
> -       else
> -               queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
> -                                          typeof(*queue));
> -       if (queue == NULL) {
> -               DLB2_HW_ERR(hw,
> -                           "[%s():%d] Internal error: no available dir queues\n",
> -                           __func__, __LINE__);
> -               return -EFAULT;
> -       }
> -
> -       dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
> -
> -       /*
> -        * Configuration succeeded, so move the resource from the 'avail' to
> -        * the 'used' list (if it's not already there).
> -        */
> -       if (args->port_id == -1) {
> -               dlb2_list_del(&domain->avail_dir_pq_pairs,
> -                             &queue->domain_list);
> -
> -               dlb2_list_add(&domain->used_dir_pq_pairs,
> -                             &queue->domain_list);
> -       }
> -
> -       resp->status = 0;
> -
> -       resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
> -
> -       return 0;
> -}
> -
>  static bool
>  dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
>                                            struct dlb2_ldb_queue *queue,
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> index 4e4b390dd..d4b401250 100644
> --- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
> @@ -4857,3 +4857,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
>
>         return 0;
>  }
> +
> +static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
> +                                    struct dlb2_hw_domain *domain,
> +                                    struct dlb2_dir_pq_pair *queue,
> +                                    struct dlb2_create_dir_queue_args *args,
> +                                    bool vdev_req,
> +                                    unsigned int vdev_id)
> +{
> +       unsigned int offs;
> +       u32 reg = 0;
> +
> +       /* QID write permissions are turned on when the domain is started */
> +       offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
> +               queue->id.phys_id;
> +
> +       DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
> +
> +       /* Don't timestamp QEs that pass through this queue */
> +       DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
> +
> +       reg = 0;
> +       DLB2_BITS_SET(reg, args->depth_threshold,
> +                     DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
> +       DLB2_CSR_WR(hw,
> +                   DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
> +                   reg);
> +
> +       if (vdev_req) {
> +               offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
> +                       queue->id.virt_id;
> +
> +               reg = 0;
> +               DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
> +               DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
> +
> +               reg = 0;
> +               DLB2_BITS_SET(reg, queue->id.phys_id,
> +                             DLB2_SYS_VF_DIR_VQID2QID_QID);
> +               DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
> +       }
> +
> +       reg = 0;
> +       DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
> +       DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
> +
> +       queue->queue_configured = true;
> +}
> +
> +static void
> +dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
> +                              u32 domain_id,
> +                              struct dlb2_create_dir_queue_args *args,
> +                              bool vdev_req,
> +                              unsigned int vdev_id)
> +{
> +       DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
> +       if (vdev_req)
> +               DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
> +       DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
> +       DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
> +}
> +
> +static int
> +dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
> +                                 u32 domain_id,
> +                                 struct dlb2_create_dir_queue_args *args,
> +                                 struct dlb2_cmd_response *resp,
> +                                 bool vdev_req,
> +                                 unsigned int vdev_id,
> +                                 struct dlb2_hw_domain **out_domain,
> +                                 struct dlb2_dir_pq_pair **out_queue)
> +{
> +       struct dlb2_hw_domain *domain;
> +       struct dlb2_dir_pq_pair *pq;
> +
> +       domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
> +
> +       if (!domain) {
> +               resp->status = DLB2_ST_INVALID_DOMAIN_ID;
> +               return -EINVAL;
> +       }
> +
> +       if (!domain->configured) {
> +               resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
> +               return -EINVAL;
> +       }
> +
> +       if (domain->started) {
> +               resp->status = DLB2_ST_DOMAIN_STARTED;
> +               return -EINVAL;
> +       }
> +
> +       /*
> +        * If the user claims the port is already configured, validate the port
> +        * ID, its domain, and whether the port is configured.
> +        */
> +       if (args->port_id != -1) {
> +               pq = dlb2_get_domain_used_dir_pq(hw,
> +                                                args->port_id,
> +                                                vdev_req,
> +                                                domain);
> +
> +               if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
> +                   !pq->port_configured) {
> +                       resp->status = DLB2_ST_INVALID_PORT_ID;
> +                       return -EINVAL;
> +               }
> +       } else {
> +               /*
> +                * If the queue's port is not configured, validate that a free
> +                * port-queue pair is available.
> +                */
> +               pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
> +                                       typeof(*pq));
> +               if (!pq) {
> +                       resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       *out_domain = domain;
> +       *out_queue = pq;
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_hw_create_dir_queue() - create a directed queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: queue creation arguments.
> + * @resp: response structure.
> + * @vdev_req: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_req is true, this contains the vdev's ID.
> + *
> + * This function creates a directed queue.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the queue ID.
> + *
> + * resp->id contains a virtual ID if vdev_req is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, the domain is not configured,
> + *         or the domain has already been started.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_create_dir_queue_args *args,
> +                            struct dlb2_cmd_response *resp,
> +                            bool vdev_req,
> +                            unsigned int vdev_id)
> +{
> +       struct dlb2_dir_pq_pair *queue;
> +       struct dlb2_hw_domain *domain;
> +       int ret;
> +
> +       dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
> +
> +       /*
> +        * Verify that hardware resources are available before attempting to
> +        * satisfy the request. This simplifies the error unwinding code.
> +        */
> +       ret = dlb2_verify_create_dir_queue_args(hw,
> +                                               domain_id,
> +                                               args,
> +                                               resp,
> +                                               vdev_req,
> +                                               vdev_id,
> +                                               &domain,
> +                                               &queue);
> +       if (ret)
> +               return ret;
> +
> +       dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
> +
> +       /*
> +        * Configuration succeeded, so move the resource from the 'avail' to
> +        * the 'used' list (if it's not already there).
> +        */
> +       if (args->port_id == -1) {
> +               dlb2_list_del(&domain->avail_dir_pq_pairs,
> +                             &queue->domain_list);
> +
> +               dlb2_list_add(&domain->used_dir_pq_pairs,
> +                             &queue->domain_list);
> +       }
> +
> +       resp->status = 0;
> +
> +       resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
> +
> +       return 0;
> +}
> +
> --
> 2.23.0
>
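
For reference, a minimal caller sketch of the reworked entry point; the
domain_id value, the "hw" handle, and the wrapper function are illustrative
assumptions, not part of the patch:

static int dlb2_example_create_dir_queue(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_create_dir_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.port_id = -1;		/* let the driver pick a free dir port/queue pair */
	args.depth_threshold = 256;	/* illustrative depth threshold */

	ret = dlb2_hw_create_dir_queue(hw, domain_id, &args, &resp,
				       false /* vdev_req */, 0);
	if (ret)
		DLB2_HW_ERR(hw, "dir queue create failed: ret=%d status=%u\n",
			    ret, resp.status);

	return ret;
}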

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
@ 2021-04-03 10:29       ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03 10:29 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:09 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The file dlb_resource_new.c now contains all of the low level
> functions required to support both DLB v2.0 and DLB v2.5, and
> the original file (dlb_resource.c) was removed in the previous
> commit, so rename dlb_resource_new.c to dlb_resource.c, and
> update the meson build file so that the new file is built.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

Please squash patches 19 and 20 and use a commit message like "event/dlb2:
switch over to new implementation" or similar.


> ---
>  drivers/event/dlb2/meson.build                                  | 2 +-
>  .../event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} | 0
>  2 files changed, 1 insertion(+), 1 deletion(-)
>  rename drivers/event/dlb2/pf/base/{dlb2_resource_new.c => dlb2_resource.c} (100%)
>
> diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
> index d8cfd377f..f22638b8e 100644
> --- a/drivers/event/dlb2/meson.build
> +++ b/drivers/event/dlb2/meson.build
> @@ -13,7 +13,7 @@ sources = files('dlb2.c',
>                 'dlb2_xstats.c',
>                 'pf/dlb2_main.c',
>                 'pf/dlb2_pf.c',
> -               'pf/base/dlb2_resource_new.c',
> +               'pf/base/dlb2_resource.c',
>                 'rte_pmd_dlb2.c',
>                 'dlb2_selftest.c'
>  )
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
> similarity index 100%
> rename from drivers/event/dlb2/pf/base/dlb2_resource_new.c
> rename to drivers/event/dlb2/pf/base/dlb2_resource.c
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event Timothy McDaniel
@ 2021-04-03 10:39       ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-03 10:39 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:09 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Updated eventdev device name to be dlb_event instead of
> dlb2_event.  The new name will be used for all versions
> of the DLB hardware. This change required corresponding changes
to the directory name that contains the PMD, as well
> as the documentation files, build infrastructure, and PMD
> specific APIs.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>


# Change the patch subject to event/dlb:
# Also, I can still see[1] both doc/guides/eventdevs/dlb.rst and
doc/guides/eventdevs/dlb2.rst.
Let's have only one .rst file per driver.
# Please check the documentation carefully; the vdev argument examples are
still mixed up between dlb2 and dlb1. Please correct as needed.


[1]
[for-main]dell[dpdk-next-eventdev] $ git diff HEAD~27 --stat
 MAINTAINERS                                              |    6 +-
 app/test/test_eventdev.c                                 |    6 +-
 config/rte_config.h                                      |   11 +-
 doc/api/doxy-api-index.md                                |    2 +-
 doc/api/doxy-api.conf.in                                 |    2 +-
 doc/guides/eventdevs/dlb.rst                             |  390 ++++
 doc/guides/eventdevs/dlb.rst                          |   75 ++-
 doc/guides/eventdevs/index.rst                           |    2 +-
 doc/guides/rel_notes/release_21_05.rst                   |    5 +
 drivers/event/{dlb2 => dlb}/dlb2.c                       |  451 ++++-----

> +/* DLB defines */
> +#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
> +#undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
> +#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
> +#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256


PLEASE MOVE THIS ALL TO RUNTIME, if it is not used in the fastpath.
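
A minimal sketch of how one of these constants could become a runtime devarg
instead, following the rte_kvargs handler pattern already used in dlb2.c.
The "poll_interval" key name and the dlb2_args->poll_interval field are
assumptions for illustration only:

static int
set_poll_interval(const char *key __rte_unused,
		  const char *value,
		  void *opaque)
{
	int *poll_interval = opaque;

	if (value == NULL || opaque == NULL) {
		DLB2_LOG_ERR("NULL pointer\n");
		return -EINVAL;
	}

	if (sscanf(value, "%d", poll_interval) != 1 || *poll_interval < 0) {
		DLB2_LOG_ERR("poll_interval must be a non-negative integer\n");
		return -EINVAL;
	}

	return 0;
}

/*
 * Registered from dlb2_parse_params(), next to the existing keys:
 *
 *	ret = rte_kvargs_process(kvlist, "poll_interval",
 *				 set_poll_interval,
 *				 &dlb2_args->poll_interval);
 */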


> +Deferred Scheduling
> +~~~~~~~~~~~~~~~~~~~
> +
> +The DLB2 PMD's default behavior for managing a CQ is to "pop" the CQ once per
> +dequeued event before returning from rte_event_dequeue_burst(). This frees the
> +corresponding entries in the CQ, which enables the DLB2 to schedule more events
> +to it.
> +
> +To support applications seeking finer-grained scheduling control -- for example
> +deferring scheduling to get the best possible priority scheduling and
> +load-balancing -- the PMD supports a deferred scheduling mode. In this mode,
> +the CQ entry is not popped until the *subsequent* rte_event_dequeue_burst()
> +call. This mode only applies to load-balanced event ports with dequeue depth of
> +1.
> +
> +To enable deferred scheduling, use the defer_sched vdev argument like so:
> +
> +    .. code-block:: console
> +
> +       --vdev=dlb1_event,defer_sched=on

It should be dlb_event. Right?
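
Since deferred scheduling only takes effect on load-balanced ports configured
with a dequeue depth of 1, here is a minimal sketch of that port setup using
the generic eventdev API (dev_id and port_id are placeholders; error handling
is simplified):

static void dlb_example_depth1_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf pconf;

	/* start from the device defaults, then force a CQ dequeue depth of 1 */
	if (rte_event_port_default_conf_get(dev_id, port_id, &pconf) < 0)
		rte_panic("default conf get failed for port %d\n", port_id);

	pconf.dequeue_depth = 1;

	if (rte_event_port_setup(dev_id, port_id, &pconf) < 0)
		rte_panic("setup failed for port %d\n", port_id);
}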

> +
> +Atomic Inflights Allocation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
> +inflight event in a temporary buffer that is divided among load-balanced
> +queues. If a queue's atomic buffer storage fills up, this can result in
> +head-of-line-blocking. For example:
> +
> +- An LDB queue allocated N atomic buffer entries
> +- All N entries are filled with events from flow X, which is pinned to CQ 0.
> +
> +Until CQ 0 releases 1+ events, no other atomic flows for that LDB queue can be
> +scheduled. The likelihood of this case depends on the eventdev configuration,
> +traffic behavior, event processing latency, potential for a worker to be
> +interrupted or otherwise delayed, etc.
> +
> +By default, the PMD allocates 16 buffer entries for each load-balanced queue,
> +which provides an even division across all 128 queues but potentially wastes
> +buffer space (e.g. if not all queues are used, or aren't used for atomic
> +scheduling).
> +
> +The PMD provides a dev arg to override the default per-queue allocation. To
> +increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
> +
> +    .. code-block:: console
> +
> +       --vdev=dlb1_event,atm_inflights=64

It should be dlb_event. Right?

> +
> +QID Depth Threshold
> +~~~~~~~~~~~~~~~~~~~
> +
> +DLB2 supports setting and tracking queue depth thresholds. Hardware uses
> +the thresholds to track how full a queue is compared to its threshold.
> +Four buckets are used:
> +
> +- Less than or equal to 50% of queue depth threshold
> +- Greater than 50%, but less than or equal to 75% of depth threshold
> +- Greater than 75%, but less than or equal to 100% of depth threshold
> +- Greater than 100% of depth threshold
> +
> +Per queue threshold metrics are tracked in the DLB2 xstats, and are also
> +returned in the impl_opaque field of each received event.
> +
> +The per qid threshold can be specified as part of the device args, and
> +can be applied to all queues, a range of queues, or a single queue, as
> +shown below.
> +
> +    .. code-block:: console
> +
> +       --vdev=dlb2_event,qid_depth_thresh=all:<threshold_value>
> +       --vdev=dlb2_event,qid_depth_thresh=qidA-qidB:<threshold_value>
> +       --vdev=dlb2_event,qid_depth_thresh=qid:<threshold_value>

It should be dlb_event. Right?

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 00/26] Add DLB V2.5
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
  2021-03-21  9:48   ` Jerin Jacob
  2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
@ 2021-04-13 20:14   ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe Timothy McDaniel
                       ` (25 more replies)
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
  4 siblings, 26 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This patch series adds support for DLB v2.5 to
the current DLB V2.0 PMD. The resulting PMD supports
both hardware versions.

The main differences between the DLB v2.5 and v2.0 hardware
are:
- Number of queues/ports
- DLB v2.5 uses a combined credit pool, whereas DLB v2.0
  splits credits into 2 pools, a directed credit pool and a
  load balanced credit pool.
- Different register maps, with different bit names and offsets

In order to support both hardware versions with the same PMD,
and avoid code duplication, the file dlb2_resource.c required a
complete rewrite. This required some creative staging of the changes
in order to keep the individual patches relatively small, while
also meeting the requirement that all individual patches in the set
compile cleanly.

To accomplish this, a few temporary files are used:

dlb2_hw_types_new.h
dlb2_resources_new.h
dlb2_resources_new.c

As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
low level logic, the corresponding old code is removed from
dlb2_resource.c, thus allowing both the original and new code to
continue to compile and link cleanly. Once all of the code has been
migrated to the new model, the old versions of the files are removed,
and the new versions are renamed, effectively replacing the old original
files.

As you review the code, you can ignore the code deletions from
dlb2_resource.c, as that file continues to shrink as the new
corresponding logic is added to dlb2_resource_new.c.

Changes since V2:
1) fix commit headers
2) fix commit message repeated words
3) remove FPGA reference
4) split out new v2.5 register definitions into separate patch
5) fixed documentation to use DLB and dlb_event exclusively,
   instead of the old names such as dlb1_event, dlb2_event,
   DLB2, ... Final doc updates are done in the patch that performs
   the device rename from DLB2 to simply DLB
6) use the component name event/dlb starting with the commit that changes
   the device name, and for all subsequent commits
7) Moved all DLB constants out of config/rte_config.h except QUELL_STATS,
   which is used in the fastpath. Exposed these as devarg command line
   parameters
8) Removed "TEMPORARY" comment leftover in dlb2_osdep.h
9) squashed patches 20-21 and 22-23, since they were logically the same case
   as 19-20, which was requested to be squashed
10) deleted the old dlb2.rst - dlb.rst has been updated for v2.0 and v2.5

Changes since V1:
1) Simplified subject text for all patches
2) correct typos/spelling
3) remove FPGA references
4) remove stale sysconf() references
5) fixed patches that had compilation issues
6) updated release notes
7) renamed dlb device from dlb2_event to dlb_event
8) moved the dlb2 directory to dlb, to match the name change
9) fixed other cases where "dlb2" was being used externally

Timothy McDaniel (26):
  event/dlb2: add v2.5 probe
  event/dlb2: add v2.5 HW register definitions
  event/dlb2: add v2.5 HW init
  event/dlb2: add v2.5 get resources
  event/dlb2: add v2.5 create sched domain
  event/dlb2: add v2.5 domain reset
  event/dlb2: add V2.5 create ldb queue
  event/dlb2: add v2.5 create ldb port
  event/dlb2: add v2.5 create dir port
  event/dlb2: add v2.5 create dir queue
  event/dlb2: add v2.5 map qid
  event/dlb2: add v2.5 unmap queue
  event/dlb2: add v2.5 start domain
  event/dlb2: add v2.5 credit scheme
  event/dlb2: add v2.5 queue depth functions
  event/dlb2: add v2.5 finish map/unmap
  event/dlb2: add v2.5 sparse cq mode
  event/dlb2: add v2.5 sequence number management
  event/dlb2: use new implementation of resource header
  event/dlb2: use new implementation of resource file
  event/dlb2: use new implementation of HW types header
  event/dlb2: use new combined register map
  event/dlb2: update xstats for v2.5
  doc/dlb2: update documentation for v2.5
  event/dlb: remove version from device name
  event/dlb: move rte config defines to runtime devargs

 MAINTAINERS                                   |    6 +-
 app/test/test_eventdev.c                      |    6 +-
 config/rte_config.h                           |    8 +-
 doc/api/doxy-api-index.md                     |    2 +-
 doc/api/doxy-api.conf.in                      |    2 +-
 doc/guides/eventdevs/{dlb2.rst => dlb.rst}    |  155 +-
 doc/guides/eventdevs/index.rst                |    2 +-
 doc/guides/rel_notes/release_21_05.rst        |    5 +
 drivers/event/{dlb2 => dlb}/dlb2.c            |  550 ++-
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |    0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |    0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |    0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |    0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  177 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |    8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |   27 +-
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |   70 +-
 drivers/event/{dlb2 => dlb}/meson.build       |    4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  106 +-
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |    2 +
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |    0
 drivers/event/dlb/pf/base/dlb2_regs.h         | 4304 +++++++++++++++++
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     | 3278 +++++++------
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |   28 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |   37 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |    0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |   67 +-
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |    6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      |   12 +-
 drivers/event/{dlb2 => dlb}/version.map       |    2 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h        |  596 ---
 drivers/event/dlb2/pf/base/dlb2_regs.h        | 2527 ----------
 drivers/event/meson.build                     |    2 +-
 35 files changed, 6921 insertions(+), 5068 deletions(-)
 rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (72%)
 rename drivers/event/{dlb2 => dlb}/dlb2.c (89%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (77%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (97%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (94%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (80%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 create mode 100644 drivers/event/dlb/pf/base/dlb2_regs.h
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (68%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (95%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (91%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h

-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-14 19:16       ` Jerin Jacob
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 02/26] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
                       ` (24 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This commit adds dlb v2.5 probe support, and updates
parameter parsing.

The dlb v2.5 device differs from dlb v2 in that the
number of resources (ports, queues, ...) is different,
so macros have been added to take the device version
into account.

This commit also cleans up a few issues in the original
dlb2 source:
- eliminate duplicate constant definitions
- removed unused constant definitions
- remove #ifdef FPGA
- remove unused include file, dlb2_mbox.h

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                  |  99 +++-
 drivers/event/dlb2/dlb2_priv.h             | 151 ++++--
 drivers/event/dlb2/dlb2_xstats.c           |  37 +-
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  68 +--
 drivers/event/dlb2/pf/base/dlb2_mbox.h     | 596 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |  48 +-
 drivers/event/dlb2/pf/dlb2_pf.c            |  62 ++-
 7 files changed, 318 insertions(+), 743 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index fb5ff012a..7f5b9141b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -59,7 +59,8 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
-	.max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
+	.max_single_link_event_port_queue_pairs =
+		DLB2_MAX_NUM_DIR_PORTS(DLB2_HW_V2),
 	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
 			  RTE_EVENT_DEV_CAP_EVENT_QOS |
 			  RTE_EVENT_DEV_CAP_BURST_MODE |
@@ -69,7 +70,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 };
 
 struct process_local_port_data
-dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
+dlb2_port[DLB2_MAX_NUM_PORTS_ALL][DLB2_NUM_PORT_TYPES];
 
 static void
 dlb2_free_qe_mem(struct dlb2_port *qm_port)
@@ -97,7 +98,7 @@ dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
 {
 	int q;
 
-	for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
+	for (q = 0; q < DLB2_MAX_NUM_QUEUES(dlb2->version); q++) {
 		if (qid_depth_thresholds[q] != 0)
 			dlb2->ev_queues[q].depth_threshold =
 				qid_depth_thresholds[q];
@@ -247,9 +248,9 @@ set_num_dir_credits(const char *key __rte_unused,
 		return ret;
 
 	if (*num_dir_credits < 0 ||
-	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
+	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
 		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
-			     DLB2_MAX_NUM_DIR_CREDITS);
+			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
 
@@ -306,7 +307,6 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
-
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -327,7 +327,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	 */
 	if (sscanf(value, "all:%d", &thresh) == 1) {
 		first = 0;
-		last = DLB2_MAX_NUM_QUEUES - 1;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2) - 1;
 	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
 		/* we have everything we need */
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
@@ -337,7 +337,56 @@ set_qid_depth_thresh(const char *key __rte_unused,
 		return -EINVAL;
 	}
 
-	if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		return -EINVAL;
+	}
+
+	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
+		return -EINVAL;
+	}
+
+	for (i = first; i <= last; i++)
+		qid_thresh->val[i] = thresh; /* indexed by qid */
+
+	return 0;
+}
+
+static int
+set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
+	int first, last, thresh, i;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	/* command line override may take one of the following 3 forms:
+	 * qid_depth_thresh=all:<threshold_value> ... all queues
+	 * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
+	 * qid_depth_thresh=qid:<threshold_value> ... just one queue
+	 */
+	if (sscanf(value, "all:%d", &thresh) == 1) {
+		first = 0;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) - 1;
+	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
+		/* we have everything we need */
+	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
+		last = first;
+	} else {
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		return -EINVAL;
+	}
+
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
 		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
 		return -EINVAL;
 	}
@@ -521,7 +570,7 @@ dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
 	for (i = 0; i < dlb2->num_queues; i++)
 		dlb2->ev_queues[i].qm_queue.config_state = config_state;
 
-	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
+	for (i = 0; i < DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5); i++)
 		dlb2->ev_queues[i].setup_done = false;
 
 	dlb2->num_ports = 0;
@@ -1453,7 +1502,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 
 	dlb2 = dlb2_pmd_priv(dev);
 
-	if (ev_port_id >= DLB2_MAX_NUM_PORTS)
+	if (ev_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 		return -EINVAL;
 
 	if (port_conf->dequeue_depth >
@@ -3895,7 +3944,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	}
 
 	/* Initialize each port's token pop mode */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++)
 		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
 
 	rte_spinlock_init(&dlb2->qm_instance.resource_lock);
@@ -3945,7 +3994,8 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
 int
 dlb2_parse_params(const char *params,
 		  const char *name,
-		  struct dlb2_devargs *dlb2_args)
+		  struct dlb2_devargs *dlb2_args,
+		  uint8_t version)
 {
 	int ret = 0;
 	static const char * const args[] = { NUMA_NODE_ARG,
@@ -3984,17 +4034,18 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(kvlist,
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(kvlist,
 					DLB2_NUM_DIR_CREDITS,
 					set_num_dir_credits,
 					&dlb2_args->num_dir_credits_override);
-			if (ret != 0) {
-				DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
-					     name);
-				rte_kvargs_free(kvlist);
-				return ret;
+				if (ret != 0) {
+					DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
+						     name);
+					rte_kvargs_free(kvlist);
+					return ret;
+				}
 			}
-
 			ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
 						 set_dev_id,
 						 &dlb2_args->dev_id);
@@ -4005,11 +4056,19 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(
 					kvlist,
 					DLB2_QID_DEPTH_THRESH_ARG,
 					set_qid_depth_thresh,
 					&dlb2_args->qid_depth_thresholds);
+			} else {
+				ret = rte_kvargs_process(
+					kvlist,
+					DLB2_QID_DEPTH_THRESH_ARG,
+					set_qid_depth_thresh_v2_5,
+					&dlb2_args->qid_depth_thresholds);
+			}
 			if (ret != 0) {
 				DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
 					     name);
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index eb1a93239..1cd78ad94 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -33,19 +33,31 @@
 
 /* Begin HW related defines and structs */
 
+#define DLB2_HW_V2 0
+#define DLB2_HW_V2_5 1
 #define DLB2_MAX_NUM_DOMAINS 32
 #define DLB2_MAX_NUM_VFS 16
 #define DLB2_MAX_NUM_LDB_QUEUES 32
 #define DLB2_MAX_NUM_LDB_PORTS 64
-#define DLB2_MAX_NUM_DIR_PORTS 64
-#define DLB2_MAX_NUM_DIR_QUEUES 64
+#define DLB2_MAX_NUM_DIR_PORTS_V2		DLB2_MAX_NUM_DIR_QUEUES_V2
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5		DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_DIR_PORTS(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_PORTS_V2 : \
+						 DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_MAX_NUM_DIR_QUEUES_V2		64 /* DIR == directed */
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5		96
+/* When needed for array sizing, the DLB 2.5 macro is used */
+#define DLB2_MAX_NUM_DIR_QUEUES(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2 : \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2_5)
 #define DLB2_MAX_NUM_FLOWS (64 * 1024)
 #define DLB2_MAX_NUM_LDB_CREDITS (8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS (2 * 1024)
+#define DLB2_MAX_NUM_DIR_CREDITS(ver)		(ver == DLB2_HW_V2 ? 4096 : 0)
+#define DLB2_MAX_NUM_CREDITS(ver)		(ver == DLB2_HW_V2 ? \
+						 0 : DLB2_MAX_NUM_LDB_CREDITS)
 #define DLB2_MAX_NUM_LDB_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_DIR_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_HIST_LIST_ENTRIES 2048
-#define DLB2_MAX_NUM_AQOS_ENTRIES 2048
 #define DLB2_MAX_NUM_QIDS_PER_LDB_CQ 8
 #define DLB2_QID_PRIORITIES 8
 #define DLB2_MAX_DEVICE_PATH 32
@@ -68,6 +80,11 @@
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_MAX_CQ_DEPTH
 
+#define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
+	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) ||        \
+	  (_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_VF))   ?   \
+		DLB2_HW_V2_5 : DLB2_HW_V2)
+
 /*
  * Static per queue/port provisioning values
  */
@@ -109,6 +126,8 @@ enum dlb2_hw_queue_types {
 	DLB2_NUM_QUEUE_TYPES /* Must be last */
 };
 
+#define DLB2_COMBINED_POOL DLB2_LDB_QUEUE
+
 #define PORT_TYPE(p) ((p)->is_directed ? DLB2_DIR_PORT : DLB2_LDB_PORT)
 
 /* Do not change - must match hardware! */
@@ -127,8 +146,15 @@ struct dlb2_hw_rsrcs {
 	uint32_t num_ldb_queues;	/* Number of available ldb queues */
 	uint32_t num_ldb_ports;         /* Number of load balanced ports */
 	uint32_t num_dir_ports;         /* Number of directed ports */
-	uint32_t num_ldb_credits;       /* Number of load balanced credits */
-	uint32_t num_dir_credits;       /* Number of directed credits */
+	union {
+		struct {
+			uint32_t num_ldb_credits; /* Number of ldb credits */
+			uint32_t num_dir_credits; /* Number of dir credits */
+		};
+		struct {
+			uint32_t num_credits; /* Number of combined credits */
+		};
+	};
 	uint32_t reorder_window_size;   /* Size of reorder window */
 };
 
@@ -292,9 +318,17 @@ struct dlb2_port {
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
-	uint16_t cached_ldb_credits;
-	uint16_t ldb_credits;
-	uint16_t cached_dir_credits;
+	union {
+		struct {
+			uint16_t cached_ldb_credits;
+			uint16_t ldb_credits;
+			uint16_t cached_dir_credits;
+		};
+		struct {
+			uint16_t cached_credits;
+			uint16_t credits;
+		};
+	};
 	bool int_armed;
 	uint16_t owed_tokens;
 	int16_t issued_releases;
@@ -325,11 +359,22 @@ struct process_local_port_data {
 
 struct dlb2_eventdev;
 
+struct dlb2_port_low_level_io_functions {
+	void (*pp_enqueue_four)(void *qe4, void *pp_addr);
+};
+
 struct dlb2_config {
 	int configured;
 	int reserved;
-	uint32_t num_ldb_credits;
-	uint32_t num_dir_credits;
+	union {
+		struct {
+			uint32_t num_ldb_credits;
+			uint32_t num_dir_credits;
+		};
+		struct {
+			uint32_t num_credits;
+		};
+	};
 	struct dlb2_create_sched_domain_args resources;
 };
 
@@ -354,10 +399,18 @@ struct dlb2_hw_dev {
 
 /* Begin DLB2 PMD Eventdev related defines and structs */
 
-#define DLB2_MAX_NUM_QUEUES \
-	(DLB2_MAX_NUM_DIR_QUEUES + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_QUEUES(ver)                                \
+	(DLB2_MAX_NUM_DIR_QUEUES(ver) + DLB2_MAX_NUM_LDB_QUEUES)
 
-#define DLB2_MAX_NUM_PORTS (DLB2_MAX_NUM_DIR_PORTS + DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_MAX_NUM_PORTS(ver) \
+	(DLB2_MAX_NUM_DIR_PORTS(ver) + DLB2_MAX_NUM_LDB_PORTS)
+
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5 DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_QUEUES_ALL \
+	(DLB2_MAX_NUM_DIR_QUEUES_V2_5 + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_PORTS_ALL \
+	(DLB2_MAX_NUM_DIR_PORTS_V2_5 + DLB2_MAX_NUM_LDB_PORTS)
 #define DLB2_MAX_INPUT_QUEUE_DEPTH 256
 
 /** Structure to hold the queue to port link establishment attributes */
@@ -377,8 +430,15 @@ struct dlb2_traffic_stats {
 	uint64_t tx_ok;
 	uint64_t total_polls;
 	uint64_t zero_polls;
-	uint64_t tx_nospc_ldb_hw_credits;
-	uint64_t tx_nospc_dir_hw_credits;
+	union {
+		struct {
+			uint64_t tx_nospc_ldb_hw_credits;
+			uint64_t tx_nospc_dir_hw_credits;
+		};
+		struct {
+			uint64_t tx_nospc_hw_credits;
+		};
+	};
 	uint64_t tx_nospc_inflight_max;
 	uint64_t tx_nospc_new_event_limit;
 	uint64_t tx_nospc_inflight_credits;
@@ -411,7 +471,7 @@ struct dlb2_port_stats {
 	uint64_t tx_invalid;
 	uint64_t rx_sched_cnt[DLB2_NUM_HW_SCHED_TYPES];
 	uint64_t rx_sched_invalid;
-	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_eventdev_port {
@@ -462,16 +522,16 @@ enum dlb2_run_state {
 };
 
 struct dlb2_eventdev {
-	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS];
-	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS_ALL];
+	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each queue */
-	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES];
-	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES];
+	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES_ALL];
+	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each port */
-	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS];
-	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS];
+	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS_ALL];
+	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS_ALL];
 	struct dlb2_get_num_resources_args hw_rsrc_query_results;
 	uint32_t xstats_count_mode_queue;
 	struct dlb2_hw_dev qm_instance; /* strictly hw related */
@@ -487,8 +547,15 @@ struct dlb2_eventdev {
 	int num_dir_credits_override;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
-	uint16_t num_dir_credits;
-	uint16_t num_ldb_credits;
+	union {
+		struct {
+			uint16_t num_dir_credits;
+			uint16_t num_ldb_credits;
+		};
+		struct {
+			uint16_t num_credits;
+		};
+	};
 	uint16_t num_queues; /* total queues */
 	uint16_t num_ldb_queues; /* total num of evdev ldb queues requested */
 	uint16_t num_ports; /* total num of evdev ports requested */
@@ -499,21 +566,28 @@ struct dlb2_eventdev {
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
 	uint8_t revision;
+	uint8_t version;
 	bool configured;
-	uint16_t max_ldb_credits;
-	uint16_t max_dir_credits;
-
-	/* force hw credit pool counters into exclusive cache lines */
-
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t ldb_credit_pool __rte_cache_aligned;
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t dir_credit_pool __rte_cache_aligned;
+	union {
+		struct {
+			uint16_t max_ldb_credits;
+			uint16_t max_dir_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t ldb_credit_pool __rte_cache_aligned;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t dir_credit_pool __rte_cache_aligned;
+		};
+		struct {
+			uint16_t max_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t credit_pool __rte_cache_aligned;
+		};
+	};
 };
 
 /* used for collecting and passing around the dev args */
 struct dlb2_qid_depth_thresholds {
-	int val[DLB2_MAX_NUM_QUEUES];
+	int val[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_devargs {
@@ -568,7 +642,8 @@ uint32_t dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
 
 int dlb2_parse_params(const char *params,
 		      const char *name,
-		      struct dlb2_devargs *dlb2_args);
+		      struct dlb2_devargs *dlb2_args,
+		      uint8_t version);
 
 /* Extern globals */
 extern struct process_local_port_data dlb2_port[][DLB2_NUM_PORT_TYPES];
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda9..b62e62060 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -95,7 +95,7 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 	int i;
 	uint64_t val = 0;
 
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
 
 		if (!port->setup_done)
@@ -269,7 +269,7 @@ dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
 	int port = 0;
 	uint64_t tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
 
 	return tally;
@@ -281,7 +281,7 @@ dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
 	int port = 0;
 	uint64_t enq_ok_tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
 
 	return enq_ok_tally;
@@ -561,8 +561,8 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	/* other vars */
 	const unsigned int count = RTE_DIM(dev_stats) +
-			DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
-			DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
+		DLB2_MAX_NUM_PORTS(dlb2->version) * RTE_DIM(port_stats) +
+		DLB2_MAX_NUM_QUEUES(dlb2->version) * RTE_DIM(qid_stats);
 	unsigned int i, port, qid, stat_id = 0;
 
 	dlb2->xstats = rte_zmalloc_socket(NULL,
@@ -583,7 +583,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	}
 	dlb2->xstats_count_mode_dev = stat_id;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++) {
 		dlb2->xstats_offset_for_port[port] = stat_id;
 
 		uint32_t count_offset = stat_id;
@@ -605,7 +605,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
 
-	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
+	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES(dlb2->version); qid++) {
 		uint32_t count_offset = stat_id;
 
 		dlb2->xstats_offset_for_qid[qid] = stat_id;
@@ -658,16 +658,15 @@ dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			break;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version) &&
+		    (DLB2_MAX_NUM_QUEUES(dlb2->version) <= 255))
 			break;
-#endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
 		break;
@@ -709,13 +708,13 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			goto invalid_value;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+#if (DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) <= 255) /* max 8 bit value */
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version))
 			goto invalid_value;
 #endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
@@ -936,12 +935,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_PORTS) {
+		} else if (queue_port_id < DLB2_MAX_NUM_PORTS(dlb2->version)) {
 			if (dlb2_xstats_reset_port(dlb2, queue_port_id,
 						   ids, nb_ids))
 				return -EINVAL;
@@ -949,12 +949,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES) {
+		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES(dlb2->version)) {
 			if (dlb2_xstats_reset_queue(dlb2, queue_port_id,
 						    ids, nb_ids))
 				return -EINVAL;
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 1d99f1e01..b007e1674 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -5,54 +5,31 @@
 #ifndef __DLB2_HW_TYPES_H
 #define __DLB2_HW_TYPES_H
 
+#include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_DOMAINS			32
-#define DLB2_MAX_NUM_LDB_QUEUES			32 /* LDB == load-balanced */
-#define DLB2_MAX_NUM_DIR_QUEUES			64 /* DIR == directed */
-#define DLB2_MAX_NUM_LDB_PORTS			64
-#define DLB2_MAX_NUM_DIR_PORTS			64
-#define DLB2_MAX_NUM_LDB_CREDITS		(8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS		(2 * 1024)
-#define DLB2_MAX_NUM_HIST_LIST_ENTRIES		2048
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ		8
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_QID_PRIORITIES			8
 #define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-#ifdef FPGA
-#define DLB2_HZ					2000000
-#else
-#define DLB2_HZ					800000000
-#endif
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
 
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
-/* Interrupt related macros */
-#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
-#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
-#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
-#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
-	DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
-#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
-	DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
-
-/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
-#define DLB2_INT_NON_CQ 0
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
 
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
@@ -65,18 +42,6 @@
 #define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
 #define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
 
-#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
-#define DLB2_VF_BASE_CQ_VECTOR_ID	     0
-#define DLB2_VF_LAST_CQ_VECTOR_ID	     30
-#define DLB2_VF_MBOX_VECTOR_ID		     31
-#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
-
-#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
-					     DLB2_MAX_NUM_DIR_PORTS + 1)
-
 /*
  * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
  * the PF driver.
@@ -97,7 +62,8 @@
 #define DLB2_DIR_PP_BASE       0x2000000
 #define DLB2_DIR_PP_STRIDE     0x1000
 #define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
 #define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
 
 struct dlb2_resource_id {
@@ -225,7 +191,7 @@ struct dlb2_sn_group {
 
 static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 {
-	u32 mask[] = {
+	const u32 mask[] = {
 		0x0000ffff,  /* 64 SNs per queue */
 		0x000000ff,  /* 128 SNs per queue */
 		0x0000000f,  /* 256 SNs per queue */
@@ -237,7 +203,7 @@ static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 
 static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
 {
-	u32 bound[6] = {16, 8, 4, 2, 1};
+	const u32 bound[] = {16, 8, 4, 2, 1};
 	u32 i;
 
 	for (i = 0; i < bound[group->mode]; i++) {
@@ -327,7 +293,7 @@ struct dlb2_function_resources {
 struct dlb2_hw_resources {
 	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
 	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
 	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
 };
 
@@ -344,11 +310,13 @@ struct dlb2_sw_mbox {
 };
 
 struct dlb2_hw {
+	uint8_t ver;
+
 	/* BAR 0 address */
-	void  *csr_kva;
+	void *csr_kva;
 	unsigned long csr_phys_addr;
 	/* BAR 2 address */
-	void  *func_kva;
+	void *func_kva;
 	unsigned long func_phys_addr;
 
 	/* Resource tracking */
diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
deleted file mode 100644
index ce462c089..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_mbox.h
+++ /dev/null
@@ -1,596 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_BASE_DLB2_MBOX_H
-#define __DLB2_BASE_DLB2_MBOX_H
-
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-
-#define DLB2_MBOX_INTERFACE_VERSION 1
-
-/*
- * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
- * to send requests of its own (e.g. notifying a VF of an impending FLR).
- * To avoid communication race conditions, e.g. the PF sends a response and then
- * sends a request before the VF reads the response, the PF->VF mailbox is
- * divided into two sections:
- * - Bytes 0-47: PF responses
- * - Bytes 48-63: PF requests
- *
- * Partitioning the PF->VF mailbox allows responses and requests to occupy the
- * mailbox simultaneously.
- */
-#define DLB2_PF2VF_RESP_BYTES	  48
-#define DLB2_PF2VF_RESP_BASE	  0
-#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
-
-#define DLB2_PF2VF_REQ_BYTES	  16
-#define DLB2_PF2VF_REQ_BASE	  (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
-#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
-
-/*
- * Similarly, the VF->PF mailbox is divided into two sections:
- * - Bytes 0-239: VF requests
- * -- (Bytes 0-3 are unused due to a hardware errata)
- * - Bytes 240-255: VF responses
- */
-#define DLB2_VF2PF_REQ_BYTES	 236
-#define DLB2_VF2PF_REQ_BASE	 4
-#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
-
-#define DLB2_VF2PF_RESP_BYTES	  16
-#define DLB2_VF2PF_RESP_BASE	  (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
-#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
-
-/* VF-initiated commands */
-enum dlb2_mbox_cmd_type {
-	DLB2_MBOX_CMD_REGISTER,
-	DLB2_MBOX_CMD_UNREGISTER,
-	DLB2_MBOX_CMD_GET_NUM_RESOURCES,
-	DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
-	DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
-	DLB2_MBOX_CMD_CREATE_LDB_PORT,
-	DLB2_MBOX_CMD_CREATE_DIR_PORT,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT,
-	DLB2_MBOX_CMD_DISABLE_LDB_PORT,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT,
-	DLB2_MBOX_CMD_DISABLE_DIR_PORT,
-	DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_MAP_QID,
-	DLB2_MBOX_CMD_UNMAP_QID,
-	DLB2_MBOX_CMD_START_DOMAIN,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
-	DLB2_MBOX_CMD_ARM_CQ_INTR,
-	DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
-	DLB2_MBOX_CMD_GET_SN_ALLOCATION,
-	DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
-	DLB2_MBOX_CMD_GET_COS_BW,
-	DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
-	DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
-
-	/* NUM_QE_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_CMD_TYPES,
-};
-
-static const char dlb2_mbox_cmd_type_strings[][128] = {
-	"DLB2_MBOX_CMD_REGISTER",
-	"DLB2_MBOX_CMD_UNREGISTER",
-	"DLB2_MBOX_CMD_GET_NUM_RESOURCES",
-	"DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_LDB_PORT",
-	"DLB2_MBOX_CMD_CREATE_DIR_PORT",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_DISABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_DISABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_MAP_QID",
-	"DLB2_MBOX_CMD_UNMAP_QID",
-	"DLB2_MBOX_CMD_START_DOMAIN",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
-	"DLB2_MBOX_CMD_ARM_CQ_INTR",
-	"DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
-	"DLB2_MBOX_CMD_GET_SN_ALLOCATION",
-	"DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
-	"DLB2_MBOX_CMD_GET_COS_BW",
-	"DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
-	"DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
-};
-
-/* PF-initiated commands */
-enum dlb2_mbox_vf_cmd_type {
-	DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
-	DLB2_MBOX_VF_CMD_NOTIFICATION,
-	DLB2_MBOX_VF_CMD_IN_USE,
-
-	/* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_VF_CMD_TYPES,
-};
-
-static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
-	"DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
-	"DLB2_MBOX_VF_CMD_NOTIFICATION",
-	"DLB2_MBOX_VF_CMD_IN_USE",
-};
-
-#define DLB2_MBOX_CMD_TYPE(hdr) \
-	(((struct dlb2_mbox_req_hdr *)hdr)->type)
-#define DLB2_MBOX_CMD_STRING(hdr) \
-	dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
-
-enum dlb2_mbox_status_type {
-	DLB2_MBOX_ST_SUCCESS,
-	DLB2_MBOX_ST_INVALID_CMD_TYPE,
-	DLB2_MBOX_ST_VERSION_MISMATCH,
-	DLB2_MBOX_ST_INVALID_OWNER_VF,
-};
-
-static const char dlb2_mbox_status_type_strings[][128] = {
-	"DLB2_MBOX_ST_SUCCESS",
-	"DLB2_MBOX_ST_INVALID_CMD_TYPE",
-	"DLB2_MBOX_ST_VERSION_MISMATCH",
-	"DLB2_MBOX_ST_INVALID_OWNER_VF",
-};
-
-#define DLB2_MBOX_ST_TYPE(hdr) \
-	(((struct dlb2_mbox_resp_hdr *)hdr)->status)
-#define DLB2_MBOX_ST_STRING(hdr) \
-	dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
-
-/* This structure is always the first field in a request structure */
-struct dlb2_mbox_req_hdr {
-	u32 type;
-};
-
-/* This structure is always the first field in a response structure */
-struct dlb2_mbox_resp_hdr {
-	u32 status;
-};
-
-struct dlb2_mbox_register_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 min_interface_version;
-	u16 max_interface_version;
-};
-
-struct dlb2_mbox_register_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 interface_version;
-	u8 pf_id;
-	u8 vf_id;
-	u8 is_auxiliary_vf;
-	u8 primary_vf_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u16 num_sched_domains;
-	u16 num_ldb_queues;
-	u16 num_ldb_ports;
-	u16 num_cos_ldb_ports[4];
-	u16 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 max_contiguous_hist_list_entries;
-	u16 num_ldb_credits;
-	u16 num_dir_credits;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 num_ldb_queues;
-	u32 num_ldb_ports;
-	u32 num_cos_ldb_ports[4];
-	u32 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u8 cos_strict;
-	u8 padding0[3];
-	u32 padding1;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 num_sequence_numbers;
-	u32 num_qid_inflights;
-	u32 num_atomic_inflights;
-	u32 lock_id_comp_level;
-	u32 depth_threshold;
-	u32 padding;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 depth_threshold;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u16 cq_depth;
-	u16 cq_history_list_size;
-	u8 cos_id;
-	u8 cos_strict;
-	u16 padding1;
-	u64 cq_base_address;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u64 cq_base_address;
-	u16 cq_depth;
-	u16 padding0;
-	s32 queue_id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_map_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-	u32 priority;
-	u32 padding0;
-};
-
-struct dlb2_mbox_map_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_start_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-};
-
-struct dlb2_mbox_start_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 is_ldb;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding0;
-};
-
-/*
- * The alert_id and aux_alert_data follows the format of the alerts defined in
- * dlb2_types.h. The alert id contains an enum dlb2_domain_alert_id value, and
- * the aux_alert_data value varies depending on the alert.
- */
-struct dlb2_mbox_vf_alert_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 alert_id;
-	u32 aux_alert_data;
-};
-
-enum dlb2_mbox_vf_notification_type {
-	DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
-	DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
-
-	/* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
-	NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
-};
-
-struct dlb2_mbox_vf_notification_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 notification;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 in_use;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 num;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 cos_id;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 mode;
-};
-
-#endif /* __DLB2_BASE_DLB2_MBOX_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ae5ef2fc3..1cb0b9f50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -5,7 +5,6 @@
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
@@ -212,7 +211,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 			      &port->func_list);
 	}
 
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
 		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
 
@@ -220,7 +219,9 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 	}
 
 	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
+	hw->pf.num_avail_dqed_entries =
+		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+
 	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
 
 	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
@@ -259,7 +260,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
 	}
 
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
 		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
 		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
 	}
@@ -2373,7 +2374,7 @@ static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
 	}
@@ -2506,7 +2507,8 @@ static void
 dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS;
+	int domain_offset = domain->id.phys_id *
+		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	struct dlb2_list_entry *iter;
 	struct dlb2_dir_pq_pair *queue;
 	RTE_SET_USED(iter);
@@ -2522,7 +2524,8 @@ dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
 
 		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS +
+			idx = queue->id.vdev_id *
+				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 				queue->id.virt_id;
 
 			DLB2_CSR_WR(hw,
@@ -2961,7 +2964,8 @@ __dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
+			+ virt_id;
 
 		DLB2_CSR_WR(hw,
 			    DLB2_SYS_VF_DIR_VPP2PP(offs),
@@ -4484,7 +4488,8 @@ dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 }
 
 static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(u32 id,
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
 			    bool vdev_req,
 			    struct dlb2_hw_domain *domain)
 {
@@ -4492,7 +4497,7 @@ dlb2_get_domain_used_dir_pq(u32 id,
 	struct dlb2_dir_pq_pair *port;
 	RTE_SET_USED(iter);
 
-	if (id >= DLB2_MAX_NUM_DIR_PORTS)
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
 		return NULL;
 
 	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
@@ -4538,7 +4543,8 @@ dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
 	if (args->queue_id != -1) {
 		struct dlb2_dir_pq_pair *queue;
 
-		queue = dlb2_get_domain_used_dir_pq(args->queue_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->queue_id,
 						    vdev_req,
 						    domain);
 
@@ -4618,7 +4624,7 @@ static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
 
 		r1.field.pp = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
 
@@ -4857,7 +4863,8 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
 
 	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(args->queue_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->queue_id,
 						   vdev_req,
 						   domain);
 	else
@@ -4913,7 +4920,7 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 	/* QID write permissions are turned on when the domain is started */
 	r0.field.vasqid_v = 0;
 
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES +
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
 		queue->id.phys_id;
 
 	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -4935,7 +4942,8 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
 		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES + queue->id.virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
+			+ queue->id.virt_id;
 
 		r3.field.vqid_v = 1;
 
@@ -5001,7 +5009,8 @@ dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
 	if (args->port_id != -1) {
 		struct dlb2_dir_pq_pair *port;
 
-		port = dlb2_get_domain_used_dir_pq(args->port_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->port_id,
 						   vdev_req,
 						   domain);
 
@@ -5072,7 +5081,8 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	}
 
 	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(args->port_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->port_id,
 						    vdev_req,
 						    domain);
 	else
@@ -5920,7 +5930,7 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 		r0.field.vasqid_v = 1;
 
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS +
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 			dir_queue->id.phys_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -5972,7 +5982,7 @@ int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
 
 	id = args->queue_id;
 
-	queue = dlb2_get_domain_used_dir_pq(id, vdev_req, domain);
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
 	if (queue == NULL) {
 		resp->status = DLB2_ST_INVALID_QID;
 		return -EINVAL;
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index cfb22efe8..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -47,7 +47,7 @@ dlb2_pf_low_level_io_init(void)
 {
 	int i;
 	/* Addresses will be initialized at port create */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(DLB2_HW_V2_5); i++) {
 		/* First directed ports */
 		dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
 		dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
@@ -628,6 +628,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 
 		/* Probe the DLB2 PF layer */
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
@@ -643,7 +644,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		if (pci_dev->device.devargs) {
 			ret = dlb2_parse_params(pci_dev->device.devargs->args,
 						pci_dev->device.devargs->name,
-						&dlb2_args);
+						&dlb2_args,
+						dlb2->version);
 			if (ret) {
 				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
 					     ret, rte_errno);
@@ -655,6 +657,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						  event_dlb2_pf_name,
 						  &dlb2_args);
 	} else {
+		dlb2 = dlb2_pmd_priv(eventdev);
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 		ret = dlb2_secondary_eventdev_probe(eventdev,
 						    event_dlb2_pf_name);
 	}
@@ -684,6 +688,16 @@ static const struct rte_pci_id pci_id_dlb2_map[] = {
 	},
 };
 
+static const struct rte_pci_id pci_id_dlb2_5_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
+			       PCI_DEVICE_ID_INTEL_DLB2_5_PF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
 static int
 event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 		     struct rte_pci_device *pci_dev)
@@ -718,6 +732,40 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
 
 }
 
+static int
+event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
+					    sizeof(struct dlb2_eventdev),
+					    dlb2_eventdev_pci_init,
+					    event_dlb2_pf_name);
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+}
+
+static int
+event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_remove(pci_dev, NULL);
+
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+
+}
+
 static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.id_table = pci_id_dlb2_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
@@ -725,5 +773,15 @@ static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.remove = event_dlb2_pci_remove,
 };
 
+static struct rte_pci_driver pci_eventdev_dlb2_5_pmd = {
+	.id_table = pci_id_dlb2_5_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = event_dlb2_5_pci_probe,
+	.remove = event_dlb2_5_pci_remove,
+};
+
 RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
+
+RTE_PMD_REGISTER_PCI(event_dlb2_5_pf, pci_eventdev_dlb2_5_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_5_pf, pci_id_dlb2_5_map);
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 02/26] event/dlb2: add v2.5 HW register definitions
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 03/26] event/dlb2: add v2.5 HW init Timothy McDaniel
                       ` (23 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Add auto-generated register definitions, updated to
support both DLB v2.0 and v2.5 devices.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
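Note: where a register moved between DLB v2.0 and v2.5, the generated header
keeps one macro per hardware version plus a ternary selector keyed on the
device version. The snippet below is a minimal, self-contained sketch of that
pattern, not driver code: the local enum and main() are illustrative
assumptions, and only the two DLB2_SYS_TOTAL_VAS offsets are taken from
dlb2_regs_new.h.

/* Illustrative sketch of the version-parameterized register macros. */
#include <stdio.h>

enum dlb2_hw_ver {
	DLB2_HW_V2,	/* DLB v2.0 */
	DLB2_HW_V2_5,	/* DLB v2.5 */
};

/* Same register, different offset per hardware version (offsets as in
 * dlb2_regs_new.h for DLB2_SYS_TOTAL_VAS).
 */
#define DLB2_V2SYS_TOTAL_VAS	0x1000011c
#define DLB2_V2_5SYS_TOTAL_VAS	0x10000114
#define DLB2_SYS_TOTAL_VAS(ver) \
	((ver) == DLB2_HW_V2 ? \
	 DLB2_V2SYS_TOTAL_VAS : \
	 DLB2_V2_5SYS_TOTAL_VAS)

int main(void)
{
	/* A caller passes the probed device version and always gets the
	 * offset that matches the silicon, with no compile-time branching.
	 */
	printf("v2.0 TOTAL_VAS offset: 0x%x\n",
	       (unsigned int)DLB2_SYS_TOTAL_VAS(DLB2_HW_V2));
	printf("v2.5 TOTAL_VAS offset: 0x%x\n",
	       (unsigned int)DLB2_SYS_TOTAL_VAS(DLB2_HW_V2_5));
	return 0;
}

In the PMD the expectation is that low-level code supplies the version probed
at init time (e.g. hw->ver), mirroring how the resource code already selects
version-dependent limits such as DLB2_MAX_NUM_DIR_PORTS(hw->ver).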
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 ++++++++++++++++++++
 1 file changed, 4304 insertions(+)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
new file mode 100644
index 000000000..26c3e7f4a
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
@@ -0,0 +1,4304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_REGS_NEW_H
+#define __DLB2_REGS_NEW_H
+
+#include "dlb2_osdep_types.h"
+
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
+	(0x20 + (x) * 0x4)
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
+#define DLB2_SYS_TOTAL_VAS_RST 0x20
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
+
+#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
+#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
+
+#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
+#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
+
+#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
+#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
+
+#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
+#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
+#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
+
+#define DLB2_SYS_VF_LDB_VPP_V(x) \
+	(0x10000f00 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VPP2PP(x) \
+	(0x10000f04 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_DIR_VPP_V(x) \
+	(0x10000f08 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VPP2PP(x) \
+	(0x10000f0c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_LDB_VQID_V(x) \
+	(0x10000f10 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VQID2QID(x) \
+	(0x10000f14 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_QID2VQID(x) \
+	(0x10000f18 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID2VQID_RST 0x0
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
+
+#define DLB2_SYS_VF_DIR_VQID_V(x) \
+	(0x10000f1c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VQID2QID(x) \
+	(0x10000f20 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_VASQID_V(x) \
+	(0x10000f24 + (x) * 0x1000)
+#define DLB2_SYS_LDB_VASQID_V_RST 0x0
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_VASQID_V(x) \
+	(0x10000f28 + (x) * 0x1000)
+#define DLB2_SYS_DIR_VASQID_V_RST 0x0
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_ALARM_VF_SYND2(x) \
+	(0x10000f48 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
+
+#define DLB2_SYS_ALARM_VF_SYND1(x) \
+	(0x10000f44 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_VF_SYND0(x) \
+	(0x10000f40 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
+
+#define DLB2_SYS_LDB_QID_CFG_V(x) \
+	(0x10000f58 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_QID_ITS(x) \
+	(0x10000f54 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_ITS_RST 0x0
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_QID_V(x) \
+	(0x10000f50 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_ITS(x) \
+	(0x10000f64 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_ITS_RST 0x0
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_V(x) \
+	(0x10000f60 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_V_RST 0x0
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
+	(0x10000fa8 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
+#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_LDB_CQ_AT(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AT_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_CQ_ISR(x) \
+	(0x10000f98 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB2_CQ_ISR_MODE_DIS  0
+#define DLB2_CQ_ISR_MODE_MSI  1
+#define DLB2_CQ_ISR_MODE_MSIX 2
+#define DLB2_CQ_ISR_MODE_ADI  3
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
+	(0x10000f94 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_PP_V(x) \
+	(0x10000f90 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP_V_RST 0x0
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_PP2VDEV(x) \
+	(0x10000f8c + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_LDB_PP2VAS(x) \
+	(0x10000f88 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VAS_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
+	(0x10000f84 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
+	(0x10000f80 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_DIR_CQ_FMT(x) \
+	(0x10000fec + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
+	(0x10000fe8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
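+/*
+ * Registers whose offsets moved between DLB v2.0 and v2.5 are given
+ * separate DLB2_V2* and DLB2_V2_5* defines plus a wrapper macro that
+ * selects the offset from its hardware-version argument: for example,
+ * DLB2_SYS_DIR_CQ_PASID(ver, x) below yields the v2.0 offset when
+ * ver == DLB2_HW_V2 and the v2.5 offset otherwise.
+ */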
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
+#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_DIR_CQ_AT(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_DIR_CQ_ISR(x) \
+	(0x10000fd8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
+	(0x10000fd4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_DIR_PP_V(x) \
+	(0x10000fd0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP_V_RST 0x0
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_PP2VDEV(x) \
+	(0x10000fcc + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_DIR_PP2VAS(x) \
+	(0x10000fc8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VAS_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
+	(0x10000fc4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
+	(0x10000fc0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
+
+#define DLB2_SYS_MSIX_ACK 0x10000400
+#define DLB2_SYS_MSIX_ACK_RST 0x0
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
+#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_MODE 0x10000408
+#define DLB2_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB2_MSIX_MODE_PACKED     0
+#define DLB2_MSIX_MODE_COMPRESSED 1
+
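+/*
+ * The MSI-X mode register keeps the same offset on both hardware
+ * versions but its field layout differs, so its fields carry _V2 and
+ * _V2_5 suffixes instead of a version-selecting offset macro.
+ */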
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
+
+#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
+#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
+	(0x20080000 + (x) * 0x1000)
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_ATM_QID2CQIDIX_00(x) \
+	(0x30080000 + (x) * 0x1000)
+#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
+#define DLB2_ATM_QID2CQIDIX(x, y) \
+	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_ATM_QID2CQIDIX_NUM 16
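+/*
+ * DLB2_ATM_QID2CQIDIX(x, y) addresses instance y (0 to
+ * DLB2_ATM_QID2CQIDIX_NUM - 1) of the QID2CQIDIX register for index x;
+ * each instance sits a further 0x80000 above the _00 base.
+ */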
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
+#define DLB2_CHP_ORD_QID_SN_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
+#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
+	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
+#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
+	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
+#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
+#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
+#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
+#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
+#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
+#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
+#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
+#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
+#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
+	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
+#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
+	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
+#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
+#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
+#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
+#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
+#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
+#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
+#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
+#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
+
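+/*
+ * Unlike most registers in this file, CHP_CTRL_DIAG_02 also changes its
+ * field layout between v2.0 and v2.5, so its masks and *_LOC offsets are
+ * duplicated with _V2/_V2_5 suffixes instead of being shared. A sketch of
+ * reading the v2.5-only free list size (register accessor assumed from
+ * the rest of this PMD):
+ *
+ *	u32 r = DLB2_CSR_RD(hw, DLB2_CHP_CTRL_DIAG_02);
+ *	u32 fl = (r & DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5) >>
+ *		 DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC;
+ */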
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_DP_DIR_CSR_CTRL 0x54000010
+#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
+	(0x96000000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
+	(0x96010000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
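+
+/*
+ * Sequence number (SN) group modes are programmed through RO_GRP_SN_MODE.
+ * A minimal read-modify-write sketch for group 0, where "mode" is the
+ * desired SN mode value (DLB2_CSR_RD()/DLB2_CSR_WR() and the dlb2_hw
+ * "ver" member are assumed from the rest of this PMD):
+ *
+ *	u32 r = DLB2_CSR_RD(hw, DLB2_RO_GRP_SN_MODE(hw->ver));
+ *	r &= ~DLB2_RO_GRP_SN_MODE_SN_MODE_0;
+ *	r |= mode << DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC;
+ *	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), r);
+ */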
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
+#define DLB2_LSP_CQ2PRIOV_RST 0x0
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
+#define DLB2_V2LSP_CQ2QID0(x) \
+	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
+#define DLB2_LSP_CQ2QID0_RST 0x0
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
+	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
+#define DLB2_LSP_CQ2QID1_RST 0x0
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
+	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
+#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
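+/*
+ * Note the reset value: the CQ disable registers (this one and the
+ * load-balanced DLB2_LSP_CQ_LDB_DSBL below) reset to 1, i.e. CQs come
+ * out of reset in the disabled state.
+ */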
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
+	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
+	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
+#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
+	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
+	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
+	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+
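+/*
+ * Field layout note: on v2.0 this register carries an IGNORE_DEPTH bit
+ * that v2.5 drops, so the field masks and *_LOC offsets below are
+ * suffixed _V2/_V2_5 rather than shared.
+ */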
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
+	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
+	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
+#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
+	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
+#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
+	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
+#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
+	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
+#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
+	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
+#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
+#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
+#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
+#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
+#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
+#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
+#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
+	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
+
+#endif /* __DLB2_REGS_NEW_H */
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 03/26] event/dlb2: add v2.5 HW init
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 02/26] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 04/26] event/dlb2: add v2.5 get resources Timothy McDaniel
                       ` (22 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

This commit adds support for DLB v2.5 probe-time hardware init,
and sets up a framework for incorporating the remaining
changes required to support DLB v2.5.

DLB v2.0 and DLB v2.5 are similar in many respects, but their
register offsets and definitions differ. As a result of these
differences, the low-level hardware functions must take the device
version into consideration. This requires that the hardware version be
passed to many of the low-level functions, so that the PMD can
take the appropriate action based on the device version.
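
For illustration only (not part of the patch), the following is a
minimal sketch of the version-parameterized register access pattern
described above. The helper name is hypothetical; it reuses the
DLB2_LSP_CQ_LDB_INFL_CNT macros and the DLB2_BITS_GET helper added by
this series, and assumes a DLB2_CSR_RD helper mirroring the existing
DLB2_CSR_WR used elsewhere in the PMD.

static inline u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw, int port_id)
{
	u32 cnt;

	/* The macro expands to the v2.0 or v2.5 offset based on hw->ver. */
	cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port_id));

	/* Extract the COUNT field using the generated mask/LOC pair. */
	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
}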

To ease the transition and keep the individual patches small, three
temporary files are added in this commit. These files have "new"
in their names and contain the changes specific to a consolidated PMD
that supports both DLB v2.0 and DLB v2.5. Their sister files of the
same name (minus "new") contain the old DLB v2.0-specific code. The
intent is to remove code from the original files as that code is
ported to the combined DLB v2.0/v2.5 PMD model and added to the "new"
files in a series of commits. At the end of the patch series, the old
files will be empty and the "new" files will have the logic needed to
implement a single PMD that supports both DLB v2.0 and DLB v2.5. At
that time, the original DLB v2.0-specific files will be deleted, and
the "new" files will be renamed to replace them.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_priv.h                |   5 +
 drivers/event/dlb2/meson.build                |   1 +
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 356 ++++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |   4 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 180 +--------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |  36 --
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 259 +++++++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.h    |  73 ++++
 drivers/event/dlb2/pf/dlb2_main.c             |  41 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 +
 drivers/event/dlb2/pf/dlb2_pf.c               |   6 +-
 11 files changed, 735 insertions(+), 230 deletions(-)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 1cd78ad94..f3a9fe0aa 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -114,6 +114,11 @@
 #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
 #define DLB2_TO_EV_PRIO(x) ((x) << 5)
 
+enum dlb2_hw_ver {
+	DLB2_HW_VER_2,
+	DLB2_HW_VER_2_5,
+};
+
 enum dlb2_hw_port_types {
 	DLB2_LDB_PORT,
 	DLB2_DIR_PORT,
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index f22638b8e..bded07e06 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,6 +14,7 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
+		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
new file mode 100644
index 000000000..4a4185acd
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
+
+#include "../../dlb2_priv.h"
+#include "dlb2_user.h"
+
+#include "dlb2_osdep_list.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
+
+#define DLB2_MAX_NUM_VDEVS			16
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
+#define DLB2_MAX_WEIGHT				255
+#define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
+#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
+#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
+#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
+#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
+
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
+#define DLB2_ALARM_HW_SOURCE_SYS 0
+#define DLB2_ALARM_HW_SOURCE_DLB 1
+
+#define DLB2_ALARM_HW_UNIT_CHP 4
+
+#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
+#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
+#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
+#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
+#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
+
+/*
+ * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
+ * the PF driver.
+ */
+#define DLB2_DRV_LDB_PP_BASE   0x2300000
+#define DLB2_DRV_LDB_PP_STRIDE 0x1000
+#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
+				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_DRV_DIR_PP_BASE   0x2200000
+#define DLB2_DRV_DIR_PP_STRIDE 0x1000
+#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
+				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+#define DLB2_LDB_PP_BASE       0x2100000
+#define DLB2_LDB_PP_STRIDE     0x1000
+#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
+				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
+#define DLB2_DIR_PP_BASE       0x2000000
+#define DLB2_DIR_PP_STRIDE     0x1000
+#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
+
+struct dlb2_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vdev_owned;
+	u8 vdev_id;
+};
+
+struct dlb2_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
+{
+	return list->bound - list->base - list->offset;
+}
+
+struct dlb2_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 ts_flag:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb2_ldb_queue {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 num_qid_inflights;
+	u32 aqed_limit;
+	u32 sn_group; /* sn == sequence number */
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 sn_cfg_valid;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/*
+ * Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb2_dir_pq_pair {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+};
+
+enum dlb2_qid_map_state {
+	/* The slot does not contain a valid queue mapping */
+	DLB2_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB2_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB2_QUEUE_MAP_IN_PROG,
+	/* The driver is unmapping a queue from this slot */
+	DLB2_QUEUE_UNMAP_IN_PROG,
+	/*
+	 * The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
+};
+
+struct dlb2_ldb_port_qid_map {
+	enum dlb2_qid_map_state state;
+	u16 qid;
+	u16 pending_qid;
+	u8 priority;
+	u8 pending_priority;
+};
+
+struct dlb2_ldb_port {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb2_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
+{
+	const u32 mask[] = {
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
+{
+	const u32 bound[] = {16, 8, 4, 2, 1};
+	u32 i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void
+dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb2_hw_domain {
+	struct dlb2_function_resources *parent_func;
+	struct dlb2_list_entry func_list;
+	struct dlb2_list_head used_ldb_queues;
+	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head used_dir_pq_pairs;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u32 num_used_aqed_entries;
+	struct dlb2_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb2_bitmap;
+
+struct dlb2_function_resources {
+	struct dlb2_list_head avail_domains;
+	struct dlb2_list_head used_domains;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	struct dlb2_bitmap *avail_hist_list_entries;
+	u32 num_avail_domains;
+	u32 num_avail_ldb_queues;
+	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	u32 num_avail_dir_pq_pairs;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u8 locked; /* (VDEV only) */
+};
+
+/*
+ * After initialization, each resource in dlb2_hw_resources is located in one
+ * of the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a dlb2 scheduling domain.
+ * -- A VDEV's available resources list. These are VDEV-owned unconfigured
+ *	resources not allocated to a dlb2 scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VDEV or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb2_hw_resources {
+	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
+	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
+	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
+
+struct dlb2_mbox {
+	u32 *mbox;
+	u32 *isr_in_progress;
+};
+
+struct dlb2_sw_mbox {
+	struct dlb2_mbox vdev_to_pf;
+	struct dlb2_mbox pf_to_vdev;
+	void (*pf_to_vdev_inject)(void *arg);
+	void *pf_to_vdev_inject_arg;
+};
+
+struct dlb2_hw {
+	uint8_t ver;
+
+	/* BAR 0 address */
+	void *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb2_hw_resources rsrcs;
+	struct dlb2_function_resources pf;
+	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
+	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
+	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
+
+	/* Virtualization */
+	int virt_mode;
+	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
+	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
+};
+
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index aa101a49a..3b0ca84ba 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -16,7 +16,11 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
+
+/* TEMPORARY inclusion of both headers for merge */
+#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
+
 #include "../../dlb2_log.h"
 #include "../../dlb2_user.h"
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1cb0b9f50..7ba6521ef 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -47,19 +47,6 @@ static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
 }
 
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -130,171 +117,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-int dlb2_resource_init(struct dlb2_hw *hw)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. This is application
-	 * dependent, but the driver interleaves port IDs as much as possible
-	 * to reduce the likelihood of this. This initial allocation maximizes
-	 * the average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	/* Zero-out resource tracking data structures */
-	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
-	memset(&hw->pf, 0, sizeof(hw->pf));
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries =
-		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
-{
-	union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
-
-	r0.field.disable = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
-}
-
 static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
@@ -5876,7 +5698,7 @@ static void dlb2_log_start_domain(struct dlb2_hw *hw,
 int
 dlb2_hw_start_domain(struct dlb2_hw *hw,
 		     u32 domain_id,
-		     __attribute((unused)) struct dlb2_start_domain_args *arg,
+		     struct dlb2_start_domain_args *arg,
 		     struct dlb2_cmd_response *resp,
 		     bool vdev_req,
 		     unsigned int vdev_id)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 503fdf317..2e13193bb 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -6,35 +6,8 @@
 #define __DLB2_RESOURCE_H
 
 #include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
 #include "dlb2_osdep_types.h"
 
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
@@ -1485,15 +1458,6 @@ int dlb2_notify_vf(struct dlb2_hw *hw,
  */
 int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
 
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
-
 /**
  * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
new file mode 100644
index 000000000..175b0799e
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "dlb2_user.h"
+
+#include "dlb2_hw_types_new.h"
+#include "dlb2_osdep.h"
+#include "dlb2_osdep_bitmap.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+
+#include "../../dlb2_priv.h"
+#include "../../dlb2_inline_fns.h"
+
+#define DLB2_DOM_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, domain_list)
+
+#define DLB2_FUNC_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, func_list)
+
+#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
new file mode 100644
index 000000000..51f31543c
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_RESOURCE_NEW_H
+#define __DLB2_RESOURCE_NEW_H
+
+#include "dlb2_user.h"
+#include "dlb2_osdep_types.h"
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
+#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a9d407f2f..5c0640b3c 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,9 +13,12 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_resource.h"
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "base/dlb2_regs_new.h"
+#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_resource_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_regs.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
 #include "../dlb2_priv.h"
@@ -103,25 +106,34 @@ dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
 
 static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
 {
-	dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
+	int version;
+	version = DLB2_HW_DEVICE_FROM_PCI_ID(dlb2_dev->pdev);
+
+	dlb2_clr_pmcsr_disable(&dlb2_dev->hw, version);
 }
 
 #define DLB2_READY_RETRY_LIMIT 1000
-static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
+static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
+					 int dlb_version)
 {
 	u32 retries = 0;
 
 	/* Allow at least 1s for the device to become active after power-on */
 	for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
-		union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
-		union dlb2_cfg_mstr_cfg_pm_status pm_st;
+		u32 idle_val;
+		u32 idle_dlb_func_idle;
+		u32 pm_st_val;
+		u32 pm_st_pmsm;
 		u32 addr;
 
-		addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
-		pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
-		idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
+		addr = DLB2_CM_CFG_PM_STATUS(dlb_version);
+		pm_st_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		addr = DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(dlb_version);
+		idle_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		idle_dlb_func_idle = idle_val &
+			DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE;
+		pm_st_pmsm = pm_st_val & DLB2_CM_CFG_PM_STATUS_PMSM;
+		if (pm_st_pmsm && idle_dlb_func_idle)
 			break;
 
 		rte_delay_ms(1);
@@ -141,6 +153,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 {
 	struct dlb2_dev *dlb2_dev;
 	int ret = 0;
+	int dlb_version = 0;
 
 	DLB2_INFO(dlb2_dev, "probe\n");
 
@@ -152,6 +165,8 @@ dlb2_probe(struct rte_pci_device *pdev)
 		goto dlb2_dev_malloc_fail;
 	}
 
+	dlb_version = DLB2_HW_DEVICE_FROM_PCI_ID(pdev);
+
 	/* PCI Bus driver has already mapped bar space into process.
 	 * Save off our IO register and FUNC addresses.
 	 */
@@ -191,7 +206,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	 */
 	dlb2_pf_enable_pm(dlb2_dev);
 
-	ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
+	ret = dlb2_pf_wait_for_device_ready(dlb2_dev, dlb_version);
 	if (ret)
 		goto wait_for_device_ready_fail;
 
@@ -203,7 +218,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
-	ret = dlb2_resource_init(&dlb2_dev->hw);
+	ret = dlb2_resource_init(&dlb2_dev->hw, dlb_version);
 	if (ret)
 		goto resource_init_fail;
 
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a..892298d7a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,7 +12,11 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
+#ifdef DLB2_USE_NEW_HEADERS
+#include "base/dlb2_hw_types_new.h"
+#else
 #include "base/dlb2_hw_types.h"
+#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index f57dc1584..1e815f20d 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,15 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types.h"
+#include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource.h"
+#include "base/dlb2_resource_new.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 04/26] event/dlb2: add v2.5 get resources
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (2 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 03/26] event/dlb2: add v2.5 HW init Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 05/26] event/dlb2: add v2.5 create sched domain Timothy McDaniel
                       ` (21 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a new credit scheme in which directed and load-balanced
credits are drawn from a single combined pool, instead of the separate
directed and load-balanced credit pools used by DLB v2.0. Update the
resource-query path to report the combined credit count on v2.5 devices.
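
For illustration only, not part of the patch: a minimal, self-contained C
sketch of how a caller can consume the unified credit count. The struct
"rsrc_counts", the function "max_num_events" and the "is_v2_5" flag are
hypothetical stand-ins, not the driver's types; the real PMD branches on
dlb2->version and reads num_credits vs. num_ldb_credits from the query
results, as the diff below shows.

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified stand-in (illustrative only) mirroring the anonymous union
 * this patch adds to struct dlb2_get_num_resources_args.
 */
struct rsrc_counts {
	union {
		struct {
			uint32_t num_ldb_credits; /* v2.0 load-balanced pool */
			uint32_t num_dir_credits; /* v2.0 directed pool */
		};
		struct {
			uint32_t num_credits;     /* v2.5 combined pool */
		};
	};
};

/*
 * Pick the event limit the way dlb2_hw_query_resources() does after this
 * patch: the combined pool on v2.5, the load-balanced pool on v2.0.
 * "is_v2_5" is a hypothetical flag standing in for the dlb2->version check.
 */
static uint32_t max_num_events(const struct rsrc_counts *r, int is_v2_5)
{
	return is_v2_5 ? r->num_credits : r->num_ldb_credits;
}

int main(void)
{
	struct rsrc_counts v2  = { .num_ldb_credits = 8192, .num_dir_credits = 2048 };
	struct rsrc_counts v25 = { .num_credits = 16384 };

	printf("v2.0 max events: %u\n", max_num_events(&v2, 0));
	printf("v2.5 max events: %u\n", max_num_events(&v25, 1));
	return 0;
}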

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                     | 20 ++++--
 drivers/event/dlb2/dlb2_user.h                | 14 +++-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 48 --------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 66 +++++++++++++++++++
 4 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 7f5b9141b..0048f6a1b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -132,17 +132,25 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	evdev_dlb2_default_info.max_event_ports =
 		dlb2->hw_rsrc_query_results.num_ldb_ports;
 
-	evdev_dlb2_default_info.max_num_events =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	/* Save off values used when creating the scheduling domain. */
 
 	handle->info.num_sched_domains =
 		dlb2->hw_rsrc_query_results.num_sched_domains;
 
-	handle->info.hw_rsrc_max.nb_events_limit =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	handle->info.hw_rsrc_max.num_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues +
 		dlb2->hw_rsrc_query_results.num_dir_ports;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index f4bda7822..b7d125dec 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -195,9 +195,12 @@ struct dlb2_create_sched_domain_args {
  *	contiguous range of history list entries.
  * - num_ldb_credits: Amount of available load-balanced QE storage.
  * - num_dir_credits: Amount of available directed QE storage.
+ * - response.status: Detailed error code. In certain cases, such as if the
+ *	ioctl request arg is invalid, the driver won't set status.
  */
 struct dlb2_get_num_resources_args {
 	/* Output parameters */
+	struct dlb2_cmd_response response;
 	__u32 num_sched_domains;
 	__u32 num_ldb_queues;
 	__u32 num_ldb_ports;
@@ -206,8 +209,15 @@ struct dlb2_get_num_resources_args {
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
 	__u32 max_contiguous_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 };
 
 /*
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 7ba6521ef..eda983d85 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -58,54 +58,6 @@ void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-
-	arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-
-	return 0;
-}
-
 void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 175b0799e..14b97dbf9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -257,3 +257,69 @@ void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
 	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
 }
 
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 05/26] event/dlb2: add v2.5 create sched domain
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (3 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 04/26] event/dlb2: add v2.5 get resources Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 06/26] event/dlb2: add v2.5 domain reset Timothy McDaniel
                       ` (20 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the domain creation logic to account for the DLB v2.5 combined
credit scheme, the new register map, and the new register access
macros.
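
For illustration only, not part of the patch: a minimal stand-alone sketch of
the mask-based register-update pattern that the new DLB2_BITS_SET()/DLB2_CSR_WR()
style macros follow (I assume they are defined in the new register map header),
replacing the older union-of-bitfields approach. The EX_* names below are
hypothetical stand-ins, not the driver's definitions, and the CSR write is
replaced by a print since no device is present.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-ins (EX_*) for the mask-based helpers: a field is
 * described by its mask, and the value is shifted to the mask's lowest set
 * bit before being merged into the register word.
 * __builtin_ctz() is a GCC/Clang builtin (count trailing zeros).
 */
#define EX_CRD_COUNT_MASK  0x00007fffu          /* example credit-count field */
#define EX_FIELD_SHIFT(m)  (__builtin_ctz(m))   /* lowest set bit of the mask */
#define EX_BITS_SET(reg, val, mask) \
	((reg) = ((reg) & ~(mask)) | (((uint32_t)(val) << EX_FIELD_SHIFT(mask)) & (mask)))

int main(void)
{
	uint32_t reg = 0;

	/*
	 * Same shape as the patch's
	 *   DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
	 *   DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
	 * but writing the result to stdout instead of a CSR.
	 */
	EX_BITS_SET(reg, 1024, EX_CRD_COUNT_MASK);
	printf("value to write into the credit CSR: 0x%08x\n", (unsigned int)reg);
	return 0;
}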

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_user.h                |  13 +-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++
 3 files changed, 707 insertions(+), 647 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index b7d125dec..9760e9bda 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -18,6 +18,7 @@ enum dlb2_error {
 	DLB2_ST_LDB_QUEUES_UNAVAILABLE,
 	DLB2_ST_LDB_CREDITS_UNAVAILABLE,
 	DLB2_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB2_ST_CREDITS_UNAVAILABLE,
 	DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
 	DLB2_ST_INVALID_DOMAIN_ID,
 	DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
@@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
 	"DLB2_ST_LDB_QUEUES_UNAVAILABLE",
 	"DLB2_ST_LDB_CREDITS_UNAVAILABLE",
 	"DLB2_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB2_ST_CREDITS_UNAVAILABLE",
 	"DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
 	"DLB2_ST_INVALID_DOMAIN_ID",
 	"DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
@@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
 	__u32 num_dir_ports;
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 	__u8 cos_strict;
 	__u8 padding1[3];
 };
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index eda983d85..99c3d031d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,21 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -69,636 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
-	union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
-
-	r0.field.count = domain->num_ldb_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
-
-	r1.field.count = domain->num_dir_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret < 0)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_credits(rsrcs,
-				      domain,
-				      args->num_ldb_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_credits(rsrcs,
-				      domain,
-				      args->num_dir_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret < 0)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-		    args->num_ldb_credits);
-	DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-		    args->num_dir_credits);
-}
-
-/**
- * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
- *	domain and its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
-	if (ret)
-		return ret;
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available domains\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (domain->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_domains contains configured domains.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 14b97dbf9..8f97dd865 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -323,3 +323,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
 	}
 	return 0;
 }
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from the requested class-of-service, but
+		 * fall back to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to attach domain resources.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
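
A minimal caller sketch (hypothetical, for illustration only; the real
callers live elsewhere in the PMD and are not part of this patch). It
assumes "hw" is an initialized struct dlb2_hw pointer and the resource
counts are arbitrary:

	struct dlb2_create_sched_domain_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_ldb_queues = 1;
	args.num_ldb_ports = 2;		/* allocated from any class-of-service */
	args.num_dir_ports = 1;
	args.num_atomic_inflights = 64;
	args.num_hist_list_entries = 128;
	args.num_ldb_credits = 1024;	/* DLB v2.0; v2.5 sets num_credits instead */
	args.num_dir_credits = 1024;

	ret = dlb2_hw_create_sched_domain(hw, &args, &resp, false, 0);
	if (ret)
		return ret;	/* resp.status holds the dlb2_error reason */

	/* On success, resp.id is the new scheduling domain's ID */
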
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 06/26] event/dlb2: add v2.5 domain reset
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (4 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 05/26] event/dlb2: add v2.5 create sched domain Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 07/26] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
                       ` (19 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Convert the domain reset code to the new register map and the new
register access macros, moving it from dlb2_resource.c to
dlb2_resource_new.c.
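
For illustration, here is the conversion applied to
dlb2_dir_port_cq_disable() in this patch, replacing the union-based
field access with the bit-manipulation helpers and the version-aware
register address macros (both snippets are taken from the diff below):

	/* Old style, removed from dlb2_resource.c */
	union dlb2_lsp_cq_dir_dsbl reg;

	reg.field.disabled = 1;
	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);

	/* New style, added to dlb2_resource_new.c */
	u32 reg = 0;

	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);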

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |    1 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1494 ----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 2562 +++++++++++++++++
 3 files changed, 2563 insertions(+), 1494 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
index 4a4185acd..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -181,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 99c3d031d..041aeaeee 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,69 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			     struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
 static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_dir_pq_pair *port)
 {
@@ -140,37 +77,6 @@ static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	int ret;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		ret = dlb2_drain_dir_cq(hw, port);
-		if (ret < 0)
-			return ret;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -182,63 +88,6 @@ static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count;
 }
 
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -271,105 +120,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-
-	return r0.field.count;
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.token_count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
-static int dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			ret = dlb2_drain_ldb_cq(hw, port);
-			if (ret < 0)
-				return ret;
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-
-	return 0;
-}
-
 static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_ldb_queue *queue)
 {
@@ -388,90 +138,6 @@ static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count + r1.field.count + r2.field.count;
 }
 
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1455,1166 +1121,6 @@ dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
 	return domain->num_pending_removals;
 }
 
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_dir_vpp_v r1;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_ldb_vpp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_ldb_cq_int_enb r0 = { {0} };
-	union dlb2_chp_ldb_cq_wd_enb r1 = { {0} };
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-				    r0.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
-				    r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_dir_cq_int_enb r0 = { {0} };
-	union dlb2_chp_dir_cq_wd_enb r1 = { {0} };
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-			    r0.val);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
-			    r1.val);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		union dlb2_sys_ldb_qid2vqid r1 = { {0} };
-		union dlb2_sys_vf_ldb_vqid_v r2 = { {0} };
-		union dlb2_sys_vf_ldb_vqid2qid r3 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    r1.val);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID_V(idx),
-				    r2.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID2QID(idx),
-				    r3.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id *
-		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		union dlb2_sys_vf_dir_vqid_v r1 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r2 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id *
-				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID_V(idx),
-				    r1.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID2QID(idx),
-				    r2.val);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_sn_chk_enbl r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.en = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int i;
-
-			for (i = 0; i < DLB2_MAX_CQ_COMP_CHECK_LOOPS; i++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (i == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	union dlb2_sys_dir_pp_v r1;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    r1.val);
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_ldb_pp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queue's inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
-			+ virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_PIPE_GRP_0_SLT_SHFT(queue->sn_slot);
-			offs[1] = DLB2_RO_PIPE_GRP_1_SLT_SHFT(queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-	domain->num_ldb_credits = 0;
-
-	rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-	domain->num_dir_credits = 0;
-
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (!dlb2_list_empty(&domain->used_ldb_ports[i]))
-			break;
-	}
-
-	if (i == DLB2_NUM_COS_DOMAINS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i], typeof(*port));
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - Reset a DLB scheduling domain and its associated
- *	hardware resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Note: User software *must* stop sending to this domain's producer ports
- * before invoking this function, otherwise undefined behavior will result.
- *
- * Return: returns < 0 on error, 0 otherwise.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain  == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, false);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	ret = dlb2_domain_reset_software_state(hw, domain);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
 unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
 {
 	int i, num = 0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8f97dd865..641812412 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -34,6 +34,17 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write has taken effect before
+ * subsequent HCW writes are processed. To ensure a write completes, the
+ * driver must read back a CSR. This function only needs to be called for
+ * configuration that can occur after the domain has started; prior to
+ * starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
 static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 {
 	int i;
@@ -1019,3 +1030,2554 @@ int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
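
The pointer arithmetic above, hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F), works because the eight-entry on-stack array is twice the 64 bytes actually needed: rounding the midpoint down to a 64-byte boundary always lands inside the array and always leaves room for the four HCWs that a single movdir64b writes. A standalone sketch of that invariant (a hypothetical 16-byte hcw struct stands in for struct dlb2_hcw; this is illustrative only and not part of the patch):

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-in for struct dlb2_hcw: 16 bytes, like the real HCW. */
  struct hcw {
  	uint8_t bytes[16];
  };

  int main(void)
  {
  	struct hcw hcw_mem[8], *hcw;

  	/*
  	 * Same trick as dlb2_drain_dir_cq(): &hcw_mem[4] is 64 bytes into a
  	 * 128-byte array, so rounding down to a 64-byte boundary stays inside
  	 * the array and leaves at least 64 bytes (4 HCWs) of valid storage.
  	 */
  	hcw = (struct hcw *)((uintptr_t)&hcw_mem[4] & ~(uintptr_t)0x3F);

  	assert(((uintptr_t)hcw & 0x3F) == 0);               /* 64B-aligned */
  	assert((void *)hcw >= (void *)hcw_mem);             /* inside array */
  	assert((char *)(hcw + 4) <= (char *)(hcw_mem + 8)); /* 4 slots fit */

  	memset(hcw, 0, 4 * sizeof(*hcw));
  	printf("aligned HCW block at offset %zu\n",
  	       (size_t)((char *)hcw - (char *)hcw_mem));
  	return 0;
  }
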
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
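
dlb2_port_slot_state_transition() is easier to review with the legal transitions laid out as a table. Below is an illustrative, self-contained model only: the enumerator names mirror the driver's enum dlb2_qid_map_state, but the table and main() are not driver code.

  #include <stdbool.h>
  #include <stdio.h>

  /* Mirrors the names of the driver's enum dlb2_qid_map_state. */
  enum qid_map_state {
  	ST_UNMAPPED,
  	ST_MAPPED,
  	ST_MAP_IN_PROG,
  	ST_UNMAP_IN_PROG,
  	ST_UNMAP_IN_PROG_PENDING_MAP,
  	ST_NUM
  };

  /* legal[curr][next] is true iff the switch statement above accepts it. */
  static const bool legal[ST_NUM][ST_NUM] = {
  	[ST_UNMAPPED] = {
  		[ST_MAPPED] = true, [ST_MAP_IN_PROG] = true,
  	},
  	[ST_MAPPED] = {
  		[ST_UNMAPPED] = true, [ST_UNMAP_IN_PROG] = true,
  		[ST_MAPPED] = true, /* priority change only */
  	},
  	[ST_MAP_IN_PROG] = {
  		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
  	},
  	[ST_UNMAP_IN_PROG] = {
  		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
  		[ST_UNMAP_IN_PROG_PENDING_MAP] = true,
  	},
  	[ST_UNMAP_IN_PROG_PENDING_MAP] = {
  		[ST_UNMAP_IN_PROG] = true, [ST_UNMAPPED] = true,
  	},
  };

  int main(void)
  {
  	/* Example: a dynamic map that is deferred and then completes. */
  	printf("UNMAPPED -> MAP_IN_PROG: %s\n",
  	       legal[ST_UNMAPPED][ST_MAP_IN_PROG] ? "ok" : "invalid");
  	printf("MAP_IN_PROG -> MAPPED:   %s\n",
  	       legal[ST_MAP_IN_PROG][ST_MAPPED] ? "ok" : "invalid");
  	printf("MAPPED -> MAP_IN_PROG:   %s\n",
  	       legal[ST_MAPPED][ST_MAP_IN_PROG] ? "ok" : "invalid");
  	return 0;
  }
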
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply, and should only be called by the dynamic CQ
+ * mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules from causing the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
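
Per the kernel-doc comment above, dlb2_ldb_port_map_qid_dynamic() has a three-way return contract: 0 (mapped now), 1 (deferred until the queue's inflights drain; the mapping is completed later via the finish-map path, see dlb2_domain_finish_map_qid_procedures() below), or a negative errno. A minimal standalone model of how a caller interprets that contract follows; the fake_* function and the polling loop are purely illustrative, since the real driver defers to a scheduled worker rather than spinning.

  #include <stdio.h>

  /*
   * Stand-in for the 0/1/<0 contract: 0 = mapped immediately, 1 = deferred
   * until the queue's inflights reach zero, <0 = error. 'pending_inflights'
   * models the QID's outstanding inflight events.
   */
  static int fake_map_qid_dynamic(int *pending_inflights)
  {
  	if (*pending_inflights > 0) {
  		(*pending_inflights)--;	/* pretend one inflight completes */
  		return 1;		/* map deferred */
  	}
  	return 0;			/* map completed */
  }

  int main(void)
  {
  	int pending = 2;
  	int ret;

  	do {
  		ret = fake_map_qid_dynamic(&pending);
  		if (ret < 0) {
  			fprintf(stderr, "map failed (%d)\n", ret);
  			return 1;
  		}
  		if (ret == 1)
  			printf("deferred: %d inflight(s) still owed\n",
  			       pending);
  	} while (ret == 1);

  	printf("queue mapped\n");
  	return 0;
  }
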
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQ inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0 or 2.5 scheduling domain and its
+ * associated resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the pre-condition above is
+ *	    not met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
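
One way a caller might exercise this reset path (a sketch only: the hw handle
and domain_id are assumed to come from the PF driver's device state, and the
precondition above on stopped producer ports still applies):

	int ret;

	/* QE traffic to this domain's producer ports is already stopped */
	ret = dlb2_reset_domain(hw, domain_id, false /* vdev_req */, 0);
	if (ret)
		DLB2_HW_ERR(hw, "%s: domain %u reset failed (%d)\n",
			    __func__, domain_id, ret);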
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 07/26] event/dlb2: add V2.5 create ldb queue
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (5 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 06/26] event/dlb2: add v2.5 domain reset Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 08/26] event/dlb2: add v2.5 create ldb port Timothy McDaniel
                       ` (18 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Updated low level hardware functions to add DLB 2.5 support
for creating load balanced queues.
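
A minimal caller sketch (illustrative only: the argument and response types
match the hunks below, but the field values and the hw/domain_id variables
are assumptions):

	struct dlb2_create_ldb_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_sequence_numbers = 64; /* ordered queue with 64 SNs */
	args.num_qid_inflights = 64;    /* must be <= num_sequence_numbers */
	args.num_atomic_inflights = 0;  /* no atomic scheduling needed */
	args.lock_id_comp_level = 0;    /* no lock ID compression */
	args.depth_threshold = 256;

	ret = dlb2_hw_create_ldb_queue(hw, domain_id, &args, &resp,
				       false /* vdev_req */, 0 /* vdev_id */);
	if (ret)
		return ret; /* resp.status holds the dlb2_error code */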

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 397 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 391 +++++++++++++++++
 2 files changed, 391 insertions(+), 397 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 041aeaeee..f8b85bc57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1149,403 +1149,6 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 	return num;
 }
 
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_vf_ldb_vqid_v r0 = { {0} };
-	union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} };
-	union dlb2_sys_ldb_qid2vqid r2 = { {0} };
-	union dlb2_sys_ldb_vasqid_v r3 = { {0} };
-	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };
-	union dlb2_lsp_qid_aqed_active_lim r5 = { {0} };
-	union dlb2_aqed_pipe_qid_hid_width r6 = { {0} };
-	union dlb2_sys_ldb_qid_its r7 = { {0} };
-	union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} };
-	union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} };
-	union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} };
-	union dlb2_chp_ord_qid_sn_map r11 = { {0} };
-	union dlb2_sys_ldb_qid_cfg_v r12 = { {0} };
-	union dlb2_sys_ldb_qid_v r13 = { {0} };
-
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r3.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	r4.field.limit = args->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
-
-	r5.field.limit = queue->aqed_limit;
-
-	if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES)
-		r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
-		    r5.val);
-
-	switch (args->lock_id_comp_level) {
-	case 64:
-		r6.field.compress_code = 1;
-		break;
-	case 128:
-		r6.field.compress_code = 2;
-		break;
-	case 256:
-		r6.field.compress_code = 3;
-		break;
-	case 512:
-		r6.field.compress_code = 4;
-		break;
-	case 1024:
-		r6.field.compress_code = 5;
-		break;
-	case 2048:
-		r6.field.compress_code = 6;
-		break;
-	case 4096:
-		r6.field.compress_code = 7;
-		break;
-	case 0:
-	case 65536:
-		r6.field.compress_code = 0;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id),
-		    r6.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r7.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_QID_ITS(queue->id.phys_id),
-		    r7.val);
-
-	r8.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id),
-		    r8.val);
-
-	r9.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id),
-		    r9.val);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue doesn't use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	r10.field.qid_fid_limit = 512;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
-		    r10.val);
-
-	/* Configure SNs */
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	r11.field.mode = sn_group->mode;
-	r11.field.slot = queue->sn_slot;
-	r11.field.grp  = sn_group->id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
-
-	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
-	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		r0.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val);
-
-		r1.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val);
-
-		r2.field.vqid = queue->id.virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-			    r2.val);
-	}
-
-	r13.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (dlb2_list_empty(&domain->avail_ldb_queues)) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-/**
- * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 641812412..b52d2becd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3581,3 +3581,394 @@ int dlb2_reset_domain(struct dlb2_hw *hw,
 	/* Hardware reset complete. Reset the domain's software state */
 	return dlb2_domain_reset_software_state(hw, domain);
 }
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or one of the queue
+ *	    configuration arguments is invalid.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 08/26] event/dlb2: add v2.5 create ldb port
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (6 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 07/26] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 09/26] event/dlb2: add v2.5 create dir port Timothy McDaniel
                       ` (17 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update create ldb port low level code to account for new
register map and hardware access macros.
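
Most of the conversion is mechanical; for example, the producer-port VAS
write changes from the old union-bitfield style to the new flat-register
style as follows (both fragments are taken from the hunks below):

	/* dlb2_resource.c (old register map) */
	union dlb2_sys_ldb_pp2vas r0 = { {0} };

	r0.field.vas = domain->id.phys_id;
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);

	/* dlb2_resource_new.c (new register map and access macros) */
	u32 reg = 0;

	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);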

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 490 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 471 +++++++++++++++++
 2 files changed, 471 insertions(+), 490 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f8b85bc57..45d096eec 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1216,496 +1216,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_pp2vas r0 = { {0} };
-	union dlb2_sys_ldb_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_ldb_vpp2pp r1 = { {0} };
-		union dlb2_sys_ldb_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_ldb_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_cq_addr_l r0 = { {0} };
-	union dlb2_sys_ldb_cq_addr_u r1 = { {0} };
-	union dlb2_sys_ldb_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_ldb_tkn_depth_sel r4 = { {0} };
-	union dlb2_chp_hist_list_lim r5 = { {0} };
-	union dlb2_chp_hist_list_base r6 = { {0} };
-	union dlb2_lsp_cq_ldb_infl_lim r7 = { {0} };
-	union dlb2_chp_hist_list_push_ptr r8 = { {0} };
-	union dlb2_chp_hist_list_pop_ptr r9 = { {0} };
-	union dlb2_sys_ldb_cq_at r10 = { {0} };
-	union dlb2_sys_ldb_cq_pasid r11 = { {0} };
-	union dlb2_chp_ldb_cq2vas r12 = { {0} };
-	union dlb2_lsp_cq2priov r13 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_ldb_tkn_cnt r14 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r14.field.token_count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    r14.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	r5.field.limit = port->hist_list_entry_limit - 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(port->id.phys_id), r5.val);
-
-	r6.field.base = port->hist_list_entry_base;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_BASE(port->id.phys_id), r6.val);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	r7.field.limit = args->cq_history_list_size;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r7.val);
-
-	r8.field.push_ptr = r6.field.base;
-	r8.field.generation = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    r8.val);
-
-	r9.field.pop_ptr = r6.field.base;
-	r9.field.generation = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(port->id.phys_id), r12.val);
-
-	/* Disable the port's QID mappings */
-	r13.field.v = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r13.val);
-
-	return 0;
-}
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret < 0)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		if (dlb2_list_empty(&domain->avail_ldb_ports[args->cos_id])) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			if (!dlb2_list_empty(&domain->avail_ldb_ports[i]))
-				break;
-		}
-
-		if (i == DLB2_NUM_COS_DOMAINS) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-/**
- * dlb2_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
- *	its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id, i;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->cos_strict) {
-		cos_id = args->cos_id;
-
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[cos_id],
-					  typeof(*port));
-	} else {
-		int idx;
-
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			idx = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[idx],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-
-		cos_id = idx;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (port->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void
 dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 			      u32 domain_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index b52d2becd..2eb39e23d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3972,3 +3972,474 @@ int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 09/26] event/dlb2: add v2.5 create dir port
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (7 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 08/26] event/dlb2: add v2.5 create ldb port Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 10/26] event/dlb2: add v2.5 create dir queue Timothy McDaniel
                       ` (16 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update low-level hardware functions to account for the new
register map and access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 426 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 414 +++++++++++++++++
 2 files changed, 414 insertions(+), 426 deletions(-)
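
The register-access change is visible directly in the hunks below: the v2.0-only code filled in a per-register union and wrote its .val member, while the combined v2.0/v2.5 code builds a plain u32 with the DLB2_BITS_SET()/DLB2_BIT_SET() macros against the new register definitions. A condensed before/after sketch, excerpted from this patch's removed and added versions of dlb2_dir_port_configure_pp() (not standalone code; hw, domain and port come from the surrounding function):

	/* v2.0 style (removed below): per-register union with named bitfields */
	union dlb2_sys_dir_pp2vas r0 = { {0} };

	r0.field.vas = domain->id.phys_id;
	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);

	/* combined v2.0/v2.5 style (added below): mask-based macros on a u32 */
	u32 reg = 0;

	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);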

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 45d096eec..70c52e908 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,18 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -1216,25 +1204,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
 static struct dlb2_dir_pq_pair *
 dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 			    u32 id,
@@ -1256,401 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the queue is already configured, validate
-	 * the queue ID, its domain, and whether the queue is configured.
-	 */
-	if (args->queue_id != -1) {
-		struct dlb2_dir_pq_pair *queue;
-
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->queue_id,
-						    vdev_req,
-						    domain);
-
-		if (queue == NULL || queue->domain_id.phys_id !=
-				domain->id.phys_id ||
-				!queue->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the port's queue is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->queue_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_dir_pp2vas r0 = { {0} };
-	union dlb2_sys_dir_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vpp2pp r1 = { {0} };
-		union dlb2_sys_dir_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_dir_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_dir_cq_addr_l r0 = { {0} };
-	union dlb2_sys_dir_cq_addr_u r1 = { {0} };
-	union dlb2_sys_dir_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_dir_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
-	union dlb2_sys_dir_cq_fmt r9 = { {0} };
-	union dlb2_sys_dir_cq_at r10 = { {0} };
-	union dlb2_sys_dir_cq_pasid r11 = { {0} };
-	union dlb2_chp_dir_cq2vas r12 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_dir_tkn_cnt r13 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r13.field.count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    r13.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.disable_wb_opt = 0;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	r9.field.keep_pf_ppid = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(port->id.phys_id), r12.val);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret < 0)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - Allocate and initialize a DLB directed port
- *	and queue. The port/queue pair have the same ID and name.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->queue_id,
-						   vdev_req,
-						   domain);
-	else
-		port = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					  typeof(*port));
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 				     struct dlb2_hw_domain *domain,
 				     struct dlb2_dir_pq_pair *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 2eb39e23d..4e4b390dd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4443,3 +4443,417 @@ int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
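+
As a usage sketch of the return convention documented on dlb2_hw_create_dir_port() above (a hypothetical caller, not taken from the driver; the surrounding ioctl plumbing, args and cq_dma_base setup are assumed): a non-zero return with resp.status set indicates a validation failure, -EFAULT indicates an internal error, and on success resp.id carries the physical or virtual port ID depending on vdev_req.

	struct dlb2_cmd_response resp = {0};
	int ret;

	/* PF-originated request: vdev_req == false, vdev_id unused */
	ret = dlb2_hw_create_dir_port(hw, domain_id, &args, cq_dma_base,
				      &resp, false, 0);
	if (ret) {
		/* resp.status holds a dlb2_error code unless ret == -EFAULT */
		return ret;
	}

	/* resp.id is the physical port ID for a PF request */
	new_port_id = resp.id;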
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 10/26] event/dlb2: add v2.5 create dir queue
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (8 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 09/26] event/dlb2: add v2.5 create dir port Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 11/26] event/dlb2: add v2.5 map qid Timothy McDaniel
                       ` (15 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update low-level hardware functions to account for the new
register map and hardware access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)
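
Besides the register macros, this patch (like the preceding port patches) moves the resource lookup into the verification step: dlb2_verify_create_dir_queue_args() now returns the domain and the chosen port-queue pair through out parameters, so the create path no longer re-fetches them and re-checks for NULL. A condensed sketch of the resulting call pattern, drawn from the added hunk below (not standalone code):

	struct dlb2_dir_pq_pair *queue;
	struct dlb2_hw_domain *domain;
	int ret;

	/* Validates the request and hands back the resources it looked up */
	ret = dlb2_verify_create_dir_queue_args(hw, domain_id, args, resp,
						vdev_req, vdev_id,
						&domain, &queue);
	if (ret)
		return ret;

	/*
	 * No second dlb2_get_domain_from_id()/DLB2_DOM_LIST_HEAD() lookup,
	 * and no -EFAULT "domain not found" path, is needed here.
	 */
	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);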

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 70c52e908..362deadfe 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,219 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-				domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 4e4b390dd..d4b401250 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4857,3 +4857,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
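+
One detail worth noting in dlb2_configure_dir_queue() above: the DLB2_SYS_DIR_VASQID_V valid bits are laid out domain-major, one entry per (domain, directed queue) pair, and the queue is created with its bit cleared; QID write permission is only granted when the domain is started. A condensed sketch of that indexing, excerpted from the added hunk (hw, domain and queue come from the surrounding function):

	unsigned int offs;
	u32 reg = 0;

	/* One VASQID_V entry per (domain, directed queue) pair, domain-major */
	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
		queue->id.phys_id;

	/* Cleared at creation; set later when the domain is started */
	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);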
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 11/26] event/dlb2: add v2.5 map qid
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (9 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 10/26] event/dlb2: add v2.5 create dir queue Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 12/26] event/dlb2: add v2.5 unmap queue Timothy McDaniel
                       ` (14 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update low-level hardware functions to account for the new
register map and hardware access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 355 ---------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 418 ++++++++++++++++++
 2 files changed, 418 insertions(+), 355 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 362deadfe..d59df5e39 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,68 +1245,6 @@ dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
 }
 
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	union dlb2_lsp_cq2priov r0;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id));
-
-	r0.field.v |= 1 << slot;
-	r0.field.prio |= (args->priority & 0x7) << slot * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1355,299 +1293,6 @@ dlb2_get_domain_used_ldb_port(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	struct dlb2_ldb_queue *queue;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i, id;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state st;
-
-			if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-				DLB2_HW_ERR(hw,
-					    "[%s():%d] Internal error: port slot tracking failed\n",
-					    __func__, __LINE__);
-				return -EFAULT;
-			}
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
 			       u32 domain_id,
 			       struct dlb2_unmap_qid_args *args,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index d4b401250..5277a2643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5058,3 +5058,421 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	return 0;
 }
 
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0

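A minimal caller-side sketch of the flow documented in dlb2_hw_map_qid()
above (illustrative only, not part of the patch). The hw, domain_id,
port_id and qid variables are assumed to be in scope; the point is that a
zero return means the request was accepted against the pending state, not
that the QID->CQ mapping is necessarily active in hardware yet.

	struct dlb2_map_qid_args map_args = {
		.port_id = port_id,	/* load-balanced port ID */
		.qid = qid,		/* load-balanced queue ID */
		.priority = 0,		/* must be < DLB2_QID_PRIORITIES */
	};
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_map_qid(hw, domain_id, &map_args, &resp,
			      false /* vdev_req */, 0 /* vdev_id */);
	if (ret)
		return ret;	/* resp.status holds a dlb2_error code */

	/* Success: the mapping is either already complete or queued on
	 * the port's pending map/unmap list and finished asynchronously
	 * by the worker thread.
	 */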

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 12/26] event/dlb2: add v2.5 unmap queue
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (10 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 11/26] event/dlb2: add v2.5 map qid Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 13/26] event/dlb2: add v2.5 start domain Timothy McDaniel
                       ` (13 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level queue-unmap functions to account for the new
register map and hardware access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
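The unmap path remains asynchronous in the combined v2.0/v2.5 code, so
below is a hypothetical caller-side sketch (not part of this patch)
showing how completion can be observed through
dlb2_hw_pending_port_unmaps(). The hw, domain_id, port_id and qid
variables are assumed to be in scope, and a real caller would back off
between polls:

	struct dlb2_pending_port_unmaps_args poll_args = {
		.port_id = port_id,
	};
	struct dlb2_unmap_qid_args unmap_args = {
		.port_id = port_id,
		.qid = qid,
	};
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_unmap_qid(hw, domain_id, &unmap_args, &resp,
				false /* vdev_req */, 0 /* vdev_id */);
	if (ret)
		return ret;	/* resp.status holds a dlb2_error code */

	/* resp.id reports how many unmaps are still in progress; the
	 * worker launched by dlb2_hw_unmap_qid() drives it to zero.
	 */
	do {
		ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &poll_args,
						  &resp,
						  false /* vdev_req */,
						  0 /* vdev_id */);
	} while (ret == 0 && resp.id != 0);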
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 331 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 298 ++++++++++++++++
 2 files changed, 298 insertions(+), 331 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d59df5e39..ab5b080c1 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,26 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1265,317 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-	}
-
-	return NULL;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		return 0;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-}
-
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret, id;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
 static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 struct dlb2_cmd_response *resp,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 5277a2643..181922fe3 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5476,3 +5476,301 @@ int dlb2_hw_map_qid(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 13/26] event/dlb2: add v2.5 start domain
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (11 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 12/26] event/dlb2: add v2.5 unmap queue Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 14/26] event/dlb2: add v2.5 credit scheme Timothy McDaniel
                       ` (12 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level start-domain functions to account for the new
register map and hardware access macros.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
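For reference, the register-access change this patch (like the rest of the
series) applies, shown side by side. Both fragments are taken from the
start-domain hunks below, with offs computed as in dlb2_hw_start_domain():

	/* Old, DLB v2.0-only style: per-register union with named
	 * bitfields
	 */
	union dlb2_sys_ldb_vasqid_v r0 = { {0} };

	r0.field.vasqid_v = 1;
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);

	/* New, combined v2.0/v2.5 style: flat u32 plus bit-manipulation
	 * macros from the merged register map
	 */
	u32 vasqid_v = 0;

	DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);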
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 123 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 130 ++++++++++++++++++
 2 files changed, 130 insertions(+), 123 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ab5b080c1..1e66ebf50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,129 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - Lock the domain configuration
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @arg: User-provided arguments (unused, here for ioctl callback template).
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *arg,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(arg);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 u32 queue_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 181922fe3..e806a60ac 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5774,3 +5774,133 @@ int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @arg: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 14/26] event/dlb2: add v2.5 credit scheme
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (12 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 13/26] event/dlb2: add v2.5 start domain Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 15/26] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
                       ` (11 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a different credit scheme than DLB v2.0.
Specifically, there is a single credit pool for both load-balanced
and directed traffic, instead of the separate load-balanced and
directed pools used by DLB v2.0.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
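Distilled from the enqueue path in the diff below, a simplified sketch of
how the credit check now forks on the hardware version (error handling
trimmed; qm_port and cached_credits are as in dlb2_event_enqueue_prep(),
and this shows only the load-balanced destination case):

	if (dlb2->version == DLB2_HW_V2) {
		/* v2.0: load-balanced destinations draw from the LDB
		 * pool (directed destinations use the DIR pool)
		 */
		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
			rte_errno = -ENOSPC;
			return 1;
		}
		cached_credits = &qm_port->cached_ldb_credits;
	} else {
		/* v2.5: every enqueue draws from the single combined
		 * pool
		 */
		if (dlb2_check_enqueue_hw_credits(qm_port)) {
			rte_errno = -ENOSPC;
			return 1;
		}
		cached_credits = &qm_port->cached_credits;
	}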
 drivers/event/dlb2/dlb2.c | 311 ++++++++++++++++++++++++++------------
 1 file changed, 212 insertions(+), 99 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 0048f6a1b..cc6495b76 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -436,8 +436,13 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	 */
 	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
 	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
-	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_ldb_credits;
+	}
 	evdev_dlb2_default_info.max_event_queues =
 		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
 			RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -451,7 +456,8 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 
 static int
 dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
-			    const struct dlb2_hw_rsrcs *resources_asked)
+			    const struct dlb2_hw_rsrcs *resources_asked,
+			    uint8_t device_version)
 {
 	int ret = 0;
 	struct dlb2_create_sched_domain_args *cfg;
@@ -468,8 +474,10 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	/* DIR ports and queues */
 
 	cfg->num_dir_ports = resources_asked->num_dir_ports;
-
-	cfg->num_dir_credits = resources_asked->num_dir_credits;
+	if (device_version == DLB2_HW_V2_5)
+		cfg->num_credits = resources_asked->num_credits;
+	else
+		cfg->num_dir_credits = resources_asked->num_dir_credits;
 
 	/* LDB queues */
 
@@ -509,8 +517,8 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 		break;
 	}
 
-	cfg->num_ldb_credits =
-		resources_asked->num_ldb_credits;
+	if (device_version == DLB2_HW_V2)
+		cfg->num_ldb_credits = resources_asked->num_ldb_credits;
 
 	cfg->num_atomic_inflights =
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
@@ -519,14 +527,24 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
 		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
 
-	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
-		     cfg->num_ldb_queues,
-		     resources_asked->num_ldb_ports,
-		     cfg->num_dir_ports,
-		     cfg->num_atomic_inflights,
-		     cfg->num_hist_list_entries,
-		     cfg->num_ldb_credits,
-		     cfg->num_dir_credits);
+	if (device_version == DLB2_HW_V2_5) {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_credits);
+	} else {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_ldb_credits,
+			     cfg->num_dir_credits);
+	}
 
 	/* Configure the QM */
 
@@ -606,7 +624,6 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	 */
 	if (dlb2->configured) {
 		dlb2_hw_reset_sched_domain(dev, true);
-
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
 			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
@@ -665,20 +682,26 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	/* 1 dir queue per dir port */
 	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;
 
-	/* Scale down nb_events_limit by 4 for directed credits, since there
-	 * are 4x as many load-balanced credits.
-	 */
-	rsrcs->num_ldb_credits = 0;
-	rsrcs->num_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		rsrcs->num_credits = 0;
+		if (rsrcs->num_ldb_queues || rsrcs->num_dir_ports)
+			rsrcs->num_credits = config->nb_events_limit;
+	} else {
+		/* Scale down nb_events_limit by 4 for directed credits,
+		 * since there are 4x as many load-balanced credits.
+		 */
+		rsrcs->num_ldb_credits = 0;
+		rsrcs->num_dir_credits = 0;
 
-	if (rsrcs->num_ldb_queues)
-		rsrcs->num_ldb_credits = config->nb_events_limit;
-	if (rsrcs->num_dir_ports)
-		rsrcs->num_dir_credits = config->nb_events_limit / 4;
-	if (dlb2->num_dir_credits_override != -1)
-		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+		if (rsrcs->num_ldb_queues)
+			rsrcs->num_ldb_credits = config->nb_events_limit;
+		if (rsrcs->num_dir_ports)
+			rsrcs->num_dir_credits = config->nb_events_limit / 4;
+		if (dlb2->num_dir_credits_override != -1)
+			rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+	}
 
-	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
+	if (dlb2_hw_create_sched_domain(handle, rsrcs, dlb2->version) < 0) {
 		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
 		return -ENODEV;
 	}
@@ -693,10 +716,15 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
 	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
 	dlb2->num_dir_queues = dlb2->num_dir_ports;
-	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
-	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
-	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
-	dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		dlb2->credit_pool = rsrcs->num_credits;
+		dlb2->max_credits = rsrcs->num_credits;
+	} else {
+		dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
+		dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
+		dlb2->dir_credit_pool = rsrcs->num_dir_credits;
+		dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	}
 
 	dlb2->configured = true;
 
@@ -1170,8 +1198,9 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (handle == NULL)
 		return -EINVAL;
@@ -1206,15 +1235,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* If there are no directed ports, the kernel driver will ignore this
-	 * port's directed credit settings. Don't use enqueue_depth if it would
-	 * require more directed credits than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* If there are no directed ports, the kernel driver will
+		 * ignore this port's directed credit settings. Don't use
+		 * enqueue_depth if it would require more directed credits
+		 * than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1249,8 +1281,12 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1298,17 +1334,26 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     qm_port->ldb_credits,
-		     qm_port->dir_credits);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->ldb_credits,
+			     qm_port->dir_credits);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->credits);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -1356,8 +1401,9 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (dlb2 == NULL || handle == NULL)
 		return -EINVAL;
@@ -1386,14 +1432,16 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* Don't use enqueue_depth if it would require more directed credits
-	 * than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* Don't use enqueue_depth if it would require more directed
+		 * credits than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1430,8 +1478,12 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1467,17 +1519,26 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     dir_credit_high_watermark,
-		     ldb_credit_high_watermark);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     dir_credit_high_watermark,
+			     ldb_credit_high_watermark);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     credit_high_watermark);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -2297,6 +2358,24 @@ dlb2_check_enqueue_hw_dir_credits(struct dlb2_port *qm_port)
 	return 0;
 }
 
+static inline int
+dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port)
+{
+	if (unlikely(qm_port->cached_credits == 0)) {
+		qm_port->cached_credits =
+			dlb2_port_credits_get(qm_port,
+					      DLB2_COMBINED_POOL);
+		if (unlikely(qm_port->cached_credits == 0)) {
+			DLB2_INC_STAT(
+			qm_port->ev_port->stats.traffic.tx_nospc_hw_credits, 1);
+			DLB2_LOG_DBG("credits exhausted\n");
+			return 1; /* credits exhausted */
+		}
+	}
+
+	return 0;
+}
+
 static __rte_always_inline void
 dlb2_pp_write(struct dlb2_enqueue_qe *qe4,
 	      struct process_local_port_data *port_data)
@@ -2565,12 +2644,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	if (!qm_queue->is_directed) {
 		/* Load balanced destination queue */
 
-		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_ldb_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_ldb_credits;
-
 		switch (ev->sched_type) {
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -2602,12 +2688,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	} else {
 		/* Directed destination queue */
 
-		if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_dir_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_dir_credits;
-
 		DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_DIRECTED\n");
 
 		*sched_type = DLB2_SCHED_DIRECTED;
@@ -2891,20 +2984,40 @@ dlb2_port_credits_inc(struct dlb2_port *qm_port, int num)
 
 	/* increment port credits, and return to pool if exceeds threshold */
 	if (!qm_port->is_directed) {
-		qm_port->cached_ldb_credits += num;
-		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_ldb_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_ldb_credits += num;
+			if (qm_port->cached_ldb_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_LDB_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_ldb_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	} else {
-		qm_port->cached_dir_credits += num;
-		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_dir_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_dir_credits += num;
+			if (qm_port->cached_dir_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_DIR_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_dir_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	}
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 15/26] event/dlb2: add v2.5 queue depth functions
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (13 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 14/26] event/dlb2: add v2.5 credit scheme Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 16/26] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
                       ` (10 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the queue depth query functions for DLB v2.5, accounting for
the combined register map and the new hardware access macros.
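
The rewritten depth helpers themselves are not visible in this hunk
(only the old union-based versions are removed and the wrappers that
call them are added), so the following is only a minimal sketch of the
intended access pattern. DLB2_BITS_GET() and the version-parameterized
DLB2_LSP_QID_DIR_ENQUEUE_CNT() register/field names are assumptions
used purely for illustration:

  static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
                                  struct dlb2_dir_pq_pair *queue)
  {
          u32 cnt;

          /* Read the flat 32-bit enqueue counter for this directed queue */
          cnt = DLB2_CSR_RD(hw,
                            DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
                                                         queue->id.phys_id));

          /* Extract the count field rather than going through a union */
          return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
  }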

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 160 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 +++++++++++++++
 2 files changed, 135 insertions(+), 160 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1e66ebf50..8c1d8c782 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,17 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -108,24 +97,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1204,134 +1175,3 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index e806a60ac..6a5af0c1e 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5904,3 +5904,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 16/26] event/dlb2: add v2.5 finish map/unmap
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (14 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 15/26] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 17/26] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
                       ` (9 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions that implement the queue
map/unmap interfaces, accounting for the new combined register map
and hardware access macros.
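
The two exported entry points moved here are intended to be driven by
the thread responsible for finishing map/unmap procedures, as their
kernel-doc notes below. A minimal sketch of such a caller (the wrapper
name and the polling structure are illustrative only; a real worker
would reschedule rather than spin):

  static void dlb2_drive_pending_qid_procedures(struct dlb2_hw *hw)
  {
          unsigned int remaining;

          do {
                  /* Each call returns the number of procedures still pending */
                  remaining = dlb2_finish_unmap_qid_procedures(hw);
                  remaining += dlb2_finish_map_qid_procedures(hw);
          } while (remaining != 0);
  }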

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1054 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    |   50 +
 2 files changed, 50 insertions(+), 1054 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 8c1d8c782..f05f750f5 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -54,1060 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
-			if (queue->id.virt_id == id)
-				return queue;
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
-		if (queue->id.virt_id == id)
-			return queue;
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration)
-		if (domain->id.virt_id == id)
-			return domain;
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 0;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 1;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_lsp_cq2qid0 r1;
-	union dlb2_atm_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix_00 r3;
-	union dlb2_lsp_qid2cqidix2_00 r4;
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id));
-
-	r0.field.v |= 1 << i;
-	r0.field.prio |= (priority & 0x7) << i * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(p->id.phys_id));
-	else
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		r1.field.qid_p0 = q->id.phys_id;
-	if (i == 1 || i == 5)
-		r1.field.qid_p1 = q->id.phys_id;
-	if (i == 2 || i == 6)
-		r1.field.qid_p2 = q->id.phys_id;
-	if (i == 3 || i == 7)
-		r1.field.qid_p3 = q->id.phys_id;
-
-	if (i < 4)
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID0(p->id.phys_id), r1.val);
-	else
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID1(p->id.phys_id), r1.val);
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r4.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		r2.field.cq_p0 |= 1 << i;
-		r3.field.cq_p0 |= 1 << i;
-		r4.field.cq_p0 |= 1 << i;
-		break;
-
-	case 1:
-		r2.field.cq_p1 |= 1 << i;
-		r3.field.cq_p1 |= 1 << i;
-		r4.field.cq_p1 |= 1 << i;
-		break;
-
-	case 2:
-		r2.field.cq_p2 |= 1 << i;
-		r3.field.cq_p2 |= 1 << i;
-		r4.field.cq_p2 |= 1 << i;
-		break;
-
-	case 3:
-		r2.field.cq_p3 |= 1 << i;
-		r3.field.cq_p3 |= 1 << i;
-		r4.field.cq_p3 |= 1 << i;
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r3.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(q->id.phys_id, p->id.phys_id / 4),
-		    r4.val);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r1;
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	/* Set the atomic scheduling haswork bit */
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.rlist_haswork_v = r0.field.count > 0;
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.nalb_haswork_v = (r1.field.count > 0);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.rlist_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.nalb_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_ldb_infl_lim r0 = { {0} };
-
-	r0.field.limit = queue->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_lsp_qid_ldb_infl_cnt r0;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules to cause the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	union dlb2_lsp_qid_ldb_infl_cnt r0 = { {0} };
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		union dlb2_lsp_qid_ldb_infl_cnt r0;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count)
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_atm_qid2cqidix_00 r1;
-	union dlb2_lsp_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix2_00 r3;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port_id));
-
-	r0.field.v &= ~(1 << i);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port_id), r0.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		r1.field.cq_p0 &= ~(1 << i);
-		r2.field.cq_p0 &= ~(1 << i);
-		r3.field.cq_p0 &= ~(1 << i);
-		break;
-
-	case 1:
-		r1.field.cq_p1 &= ~(1 << i);
-		r2.field.cq_p1 &= ~(1 << i);
-		r3.field.cq_p1 &= ~(1 << i);
-		break;
-
-	case 2:
-		r1.field.cq_p2 &= ~(1 << i);
-		r2.field.cq_p2 &= ~(1 << i);
-		r3.field.cq_p2 &= ~(1 << i);
-		break;
-
-	case 3:
-		r1.field.cq_p3 &= ~(1 << i);
-		r2.field.cq_p3 &= ~(1 << i);
-		r3.field.cq_p3 &= ~(1 << i);
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4),
-		    r1.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4),
-		    r3.val);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it wasn't manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-	if (r0.field.count > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 6a5af0c1e..8cd1762cf 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6039,3 +6039,53 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 17/26] event/dlb2: add v2.5 sparse cq mode
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (15 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 16/26] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 18/26] event/dlb2: add v2.5 sequence number management Timothy McDaniel
                       ` (8 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the sparse CQ mode functions for DLB v2.5, accounting for the
new combined register map and hardware access macros.
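
Both enable helpers must be called before any scheduling domain is
configured, per their kernel-doc below. A minimal sketch of a
probe-time hook invoking them (the wrapper name is illustrative only):

  static void dlb2_enable_sparse_cq_modes(struct dlb2_hw *hw)
  {
          /* Must run prior to configuring scheduling domains */
          dlb2_hw_enable_sparse_ldb_cq_mode(hw);
          dlb2_hw_enable_sparse_dir_cq_mode(hw);
  }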

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 18/26] event/dlb2: add v2.5 sequence number management
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (16 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 17/26] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 19/26] event/dlb2: use new implementation of resource header Timothy McDaniel
                       ` (7 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the sequence number management functions for DLB v2.5,
accounting for the new combined register map and hardware access
macros.
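
The reworked dlb2_set_group_sequence_numbers() now takes u32 arguments
and programs the versioned DLB2_RO_GRP_SN_MODE register. A minimal
usage sketch (the wrapper name and error handling are illustrative
only):

  static int dlb2_configure_sn_group0(struct dlb2_hw *hw)
  {
          int ret;

          /* 256 is one of the valid per-queue SN allocations (64..1024) */
          ret = dlb2_set_group_sequence_numbers(hw, 0, 256);
          if (ret == -EPERM)
                  DLB2_HW_ERR(hw, "SN group 0 is already in use\n");
          else if (ret)
                  DLB2_HW_ERR(hw, "invalid SN group or allocation\n");

          return ret;
  }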

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  67 -----------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   4 +-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 105 ++++++++++++++++++
 3 files changed, 107 insertions(+), 69 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d53cce643..e8a9d52f6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,70 +32,3 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 2e13193bb..00a0b6b57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -792,8 +792,8 @@ int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
  * ordered queue is configured.
  */
 int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val);
+				    u32 group_id,
+				    u32 val);
 
 /**
  * dlb2_reset_domain() - reset a scheduling domain
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 0f18bfeff..927b65568 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6128,3 +6128,108 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
 }
 
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's in-use slot count.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 19/26] event/dlb2: use new implementation of resource header
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (17 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 18/26] event/dlb2: add v2.5 sequence number management Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 20/26] event/dlb2: use new implementation of resource file Timothy McDaniel
                       ` (6 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

A temporary version of dlb2_resource.h (dlb2_resource_new.h) was used
by the previous commits in this patch series. Merge the two files
now that DLB v2.5 support has been fully added to dlb2_resource.c.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |  2 -
 drivers/event/dlb2/pf/base/dlb2_resource.h    | 36 +++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  2 +-
 .../event/dlb2/pf/base/dlb2_resource_new.h    | 73 -------------------
 drivers/event/dlb2/pf/dlb2_main.c             |  2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |  2 +-
 6 files changed, 39 insertions(+), 78 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index 3b0ca84ba..cffe22f3c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -17,8 +17,6 @@
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
 
-/* TEMPORARY inclusion of both headers for merge */
-#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_log.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 00a0b6b57..684049cd6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -8,6 +8,42 @@
 #include "dlb2_user.h"
 #include "dlb2_osdep_types.h"
 
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 927b65568..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -11,7 +11,7 @@
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
 #include "dlb2_regs_new.h"
-#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+#include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
 #include "../../dlb2_inline_fns.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
deleted file mode 100644
index 51f31543c..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_RESOURCE_NEW_H
-#define __DLB2_RESOURCE_NEW_H
-
-#include "dlb2_user.h"
-#include "dlb2_osdep_types.h"
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
-#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 5c0640b3c..bac07f097 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -17,7 +17,7 @@
 
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 1e815f20d..880964a29 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -40,7 +40,7 @@
 #include "dlb2_main.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0
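
As a usage note, below is a minimal sketch of how the three prototypes
consolidated into dlb2_resource.h above (dlb2_resource_init(),
dlb2_clr_pmcsr_disable() and dlb2_resource_free()) are intended to fit
together at PF initialization and teardown. The wrapper functions, the
call ordering and the error handling are illustrative assumptions rather
than code from this series; only the function signatures and their
documented requirements come from the header.

	/*
	 * Sketch only: example_pf_init()/example_pf_teardown() and the call
	 * ordering are assumptions; the dlb2_* signatures come from
	 * dlb2_resource.h. struct dlb2_hw and enum dlb2_hw_ver are provided
	 * by the driver's hw types header.
	 */
	#include <string.h>

	#include "base/dlb2_resource.h"

	static int example_pf_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
	{
		int ret;

		/*
		 * The dlb2_hw struct must be unique per device, persist until
		 * the device is reset, and be zero-initialized before
		 * dlb2_resource_init() is called.
		 */
		memset(hw, 0, sizeof(*hw));

		/*
		 * Power on the bulk of the device logic; per the kernel-doc
		 * this must be done at initialization. Its ordering relative
		 * to dlb2_resource_init() is an assumption in this sketch.
		 */
		dlb2_clr_pmcsr_disable(hw, ver);

		/* Set up software state and global scheduling QoS registers. */
		ret = dlb2_resource_init(hw, ver);
		if (ret < 0)
			return ret;

		return 0;
	}

	static void example_pf_teardown(struct dlb2_hw *hw)
	{
		/* Free software state on device reset or driver unload. */
		dlb2_resource_free(hw);
	}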


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 20/26] event/dlb2: use new implementation of resource file
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (18 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 19/26] event/dlb2: use new implementation of resource header Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 21/26] event/dlb2: use new implementation of HW types header Timothy McDaniel
                       ` (5 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, and the
original dlb2_resource.c implementation was removed in the previous
commit. Rename dlb2_resource_new.c to dlb2_resource.c and update the
meson build file so that the renamed file is built.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build                |    1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 6205 +++++++++++++++-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 6235 -----------------
 3 files changed, 6203 insertions(+), 6238 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index bded07e06..f22638b8e 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,7 +14,6 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
-		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index e8a9d52f6..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,13 +2,15 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types.h"
+#include "dlb2_hw_types_new.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
+#include "dlb2_regs_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
@@ -32,3 +34,6202 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function need only be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from the specific class-of-service, but
+		 * fall back to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to attach domain resources.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, they
+	 * must have scheduled one or more QEs into the CQs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, they
+	 * must have scheduled one or more QEs into the CQs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply, and should only be called by the dynamic CQ
+ * mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules to cause the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
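+	/*
+	 * Mark the slot as map-in-progress; the map completes only once the
+	 * QID's inflight count is confirmed to be zero.
+	 */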
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
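+	/*
+	 * Each QID2CQIDX register holds fields for four CQs; port_id % 4
+	 * selects this CQ's field, and slot bit i is cleared within it.
+	 */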
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
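+	/* Clear the VPP valid bit for each of the domain's directed ports */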
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/*
+	 * Confirm that all the domain's CQs' inflight and token counts are 0.
+	 */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
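+	/*
+	 * Clear the CQ's opt state: V2 writes the port ID to
+	 * SYS_DIR_CQ_OPT_CLR, while V2.5 sets the CQ_OPT_CLR bit in
+	 * SYS_WB_DIR_CQ_STATE.
+	 */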
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+	}
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
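+	/*
+	 * Return the domain's credits to the function-wide pools: a single
+	 * combined pool on V2.5, separate LDB and DIR pools on V2.0.
+	 */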
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
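+	/* Dynamically map the queue so its QEs can be drained through this CQ */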
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0/2.5 scheduling domain and its
+ * associated resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused when the pre-condition is not
+ *	    met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
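+	/* Find a group configured for this SN allocation with a free slot */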
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
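+	/*
+	 * If ordering is requested, a sequence number group configured for
+	 * this allocation size must have a free slot.
+	 */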
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
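+	/*
+	 * Encode the lock ID compression level: 64-4096 map to codes 1-7;
+	 * any other value leaves the field at 0 (no compression).
+	 */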
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or the requested queue name is
+ *	    already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
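+	/* Map the producer port to its domain's VAS */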
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
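+
+/*
+ * A minimal sketch of the bits 17:12 convention described above: recovering
+ * the producer port ID from a raw producer port MMIO address. Illustrative
+ * only; pp_mmio_addr is a placeholder for such an address.
+ */
+static inline u32 dlb2_example_pp_id_from_addr(uintptr_t pp_mmio_addr)
+{
+	/* Bits 17:12 of the PP address select the producer port. */
+	return (u32)((pp_mmio_addr >> 12) & 0x3F);
+}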
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
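+
+/*
+ * A minimal sketch of the CQ depth encoding used above: the token depth
+ * select is log2(max(cq_depth, 8)) - 2, and CQs shallower than 8 entries
+ * are padded by pre-loading 8 - cq_depth tokens. Illustrative only.
+ */
+static inline void dlb2_example_cq_depth_encoding(u32 cq_depth, u32 *ds,
+						  u32 *init_tkn_cnt)
+{
+	u32 d = (cq_depth < 8) ? 8 : cq_depth;
+
+	*ds = 0;
+	while (d > 4) {		/* 8 -> 1, 16 -> 2, ..., 1024 -> 8 */
+		d >>= 1;
+		(*ds)++;
+	}
+
+	*init_tkn_cnt = (cq_depth < 8) ? 8 - cq_depth : 0;
+}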
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
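+
+/*
+ * A minimal sketch of a load-balanced port creation call, assuming the
+ * caller already holds a 64-byte aligned CQ base address (cq_iova below is
+ * a placeholder for that address). Illustrative only.
+ */
+static inline int dlb2_example_create_ldb_port(struct dlb2_hw *hw,
+					       u32 domain_id,
+					       uintptr_t cq_iova)
+{
+	struct dlb2_create_ldb_port_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.cq_depth = 64;		/* power of 2, 1..1024 */
+	args.cq_history_list_size = 64;	/* must be >= 1 */
+	args.cos_id = 0;
+	args.cos_strict = 0;		/* allow fallback to another CoS */
+
+	/* cq_iova must be 64B aligned or the call fails with -EINVAL */
+	ret = dlb2_hw_create_ldb_port(hw, domain_id, &args, cq_iova,
+				      &resp, false, 0);
+
+	return ret ? ret : (int)resp.id;	/* port ID on success */
+}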
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
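+
+/*
+ * A minimal sketch of directed port creation. Passing queue_id = -1 asks
+ * for a free port-queue pair; passing a configured queue's ID links the new
+ * port to that existing directed queue. Illustrative only.
+ */
+static inline int dlb2_example_create_dir_port(struct dlb2_hw *hw,
+					       u32 domain_id,
+					       uintptr_t cq_iova,
+					       int queue_id)
+{
+	struct dlb2_create_dir_port_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.cq_depth = 8;	/* power of 2, 1..1024, 64B-aligned CQ base */
+	args.queue_id = queue_id;
+
+	ret = dlb2_hw_create_dir_port(hw, domain_id, &args, cq_iova,
+				      &resp, false, 0);
+
+	return ret ? ret : (int)resp.id;
+}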
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
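+
+/*
+ * A minimal sketch of the asynchronous map model described above. If the
+ * map cannot complete immediately it is queued per-port; the caller (or the
+ * periodic worker) later drives it with dlb2_finish_map_qid_procedures().
+ */
+static inline int dlb2_example_map_qid(struct dlb2_hw *hw,
+				       u32 domain_id,
+				       u32 port_id,
+				       u32 qid,
+				       u8 priority)
+{
+	struct dlb2_map_qid_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.port_id = port_id;
+	args.qid = qid;
+	args.priority = priority;	/* must be < DLB2_QID_PRIORITIES */
+
+	ret = dlb2_hw_map_qid(hw, domain_id, &args, &resp, false, 0);
+	if (ret)
+		return ret;
+
+	/* Poll once for any deferred map operations; the worker retries. */
+	(void)dlb2_finish_map_qid_procedures(hw);
+
+	return 0;
+}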
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
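+
+/*
+ * A minimal sketch of queue unmapping. The unmap is asynchronous:
+ * dlb2_hw_pending_port_unmaps() reports how many unmaps are still in flight
+ * for the port, and dlb2_finish_unmap_qid_procedures() retries them.
+ */
+static inline int dlb2_example_unmap_qid(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 port_id,
+					 u32 qid)
+{
+	struct dlb2_pending_port_unmaps_args pending = {0};
+	struct dlb2_unmap_qid_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.port_id = port_id;
+	args.qid = qid;
+
+	ret = dlb2_hw_unmap_qid(hw, domain_id, &args, &resp, false, 0);
+	if (ret)
+		return ret;
+
+	pending.port_id = port_id;
+
+	ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &pending, &resp,
+					  false, 0);
+	if (ret)
+		return ret;
+
+	/* resp.id holds the number of unmaps still in progress. */
+	if (resp.id != 0)
+		(void)dlb2_finish_unmap_qid_procedures(hw);
+
+	return 0;
+}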
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
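+
+/*
+ * A minimal sketch of the ordering implied above: queues, ports and initial
+ * QID mappings are configured first; after the domain is started only QID
+ * remapping and port enable/disable remain possible.
+ */
+static inline int dlb2_example_start_domain(struct dlb2_hw *hw, u32 domain_id)
+{
+	struct dlb2_start_domain_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	/* All queue/port creation and initial mapping happens before this. */
+	return dlb2_hw_start_domain(hw, domain_id, &args, &resp, false, 0);
+}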
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
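+
+/*
+ * A minimal sketch that uses the depth query to check whether a
+ * load-balanced queue has drained (depth == 0), e.g. before quiescing it.
+ */
+static inline bool dlb2_example_ldb_queue_empty(struct dlb2_hw *hw,
+						u32 domain_id,
+						u32 queue_id)
+{
+	struct dlb2_get_ldb_queue_depth_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	args.queue_id = queue_id;
+
+	if (dlb2_hw_get_ldb_queue_depth(hw, domain_id, &args, &resp, false, 0))
+		return false;
+
+	return resp.id == 0;
+}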
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
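+
+/*
+ * Illustrative sketch (not part of this patch): the background thread
+ * described above might retry both helpers until no map/unmap
+ * procedures remain outstanding. The name my_hw is hypothetical.
+ *
+ *	unsigned int remaining;
+ *
+ *	do {
+ *		remaining = dlb2_finish_unmap_qid_procedures(&my_hw);
+ *		remaining += dlb2_finish_map_qid_procedures(&my_hw);
+ *		(sleep briefly before retrying if remaining != 0)
+ *	} while (remaining != 0);
+ */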
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
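+
+/*
+ * Illustrative probe-time sketch (not part of this patch): both sparse
+ * CQ modes are enabled before any scheduling domain is configured, per
+ * the requirement documented above. The name my_hw is hypothetical.
+ *
+ *	dlb2_hw_enable_sparse_ldb_cq_mode(&my_hw);
+ *	dlb2_hw_enable_sparse_dir_cq_mode(&my_hw);
+ *	(create and configure scheduling domains afterwards)
+ */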
+
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's number of in-use
+ * slots.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive, matching the
+ * group's valid allocations. This setting can be changed until the first
+ * ordered load-balanced queue is configured, at which point the
+ * configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is already using the group.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
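+
+/*
+ * Illustrative sketch (not part of this patch): group 0 could be
+ * reconfigured for 1024 sequence numbers per queue before the first
+ * ordered queue is created; once a queue occupies the group, the call
+ * returns -EPERM. The name my_hw is hypothetical.
+ *
+ *	if (dlb2_set_group_sequence_numbers(&my_hw, 0, 1024) == 0)
+ *		sns_per_queue = dlb2_get_group_sequence_numbers(&my_hw, 0);
+ */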
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
deleted file mode 100644
index 2f66b2c71..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ /dev/null
@@ -1,6235 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types_new.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
-}
-
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization, and the dlb2_hw structure should
- * be zero-initialized before calling the function.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. The port->QID mapping is
-	 * application dependent, but the driver interleaves port IDs as much
-	 * as possible to reduce the likelihood of sequential ports mapping to
-	 * the same QID(s). This initial allocation of port IDs maximizes the
-	 * average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	hw->ver = ver;
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	if (hw->ver == DLB2_HW_V2) {
-		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-		hw->pf.num_avail_dqed_entries =
-			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-	} else {
-		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
-	}
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	u32 pmcsr_dis;
-
-	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
-
-	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
-
-	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
-}
-
-/**
- * dlb2_hw_get_num_resources() - query the PCI function's available resources
- * @hw: dlb2_hw handle for a particular device.
- * @arg: pointer to resource counts.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the number of available resources for the PF or for a
- * VF.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
- * invalid.
- */
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	if (hw->ver == DLB2_HW_V2) {
-		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-	} else {
-		arg->num_credits = rsrcs->num_avail_entries;
-	}
-	return 0;
-}
-
-static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_ldb_credits,
-		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->num_dir_credits,
-		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	if (hw->ver == DLB2_HW_V2)
-		dlb2_configure_domain_credits_v2(hw, domain);
-	else
-		dlb2_configure_domain_credits_v2_5(hw, domain);
-}
-
-static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
-			       struct dlb2_hw_domain *domain,
-			       u32 num_credits,
-			       struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_entries < num_credits) {
-		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_entries -= num_credits;
-	domain->num_credits += num_credits;
-	return 0;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret)
-		return ret;
-
-	if (hw->ver == DLB2_HW_V2) {
-		ret = dlb2_attach_ldb_credits(rsrcs,
-					      domain,
-					      args->num_ldb_credits,
-					      resp);
-		if (ret)
-			return ret;
-
-		ret = dlb2_attach_dir_credits(rsrcs,
-					      domain,
-					      args->num_dir_credits,
-					      resp);
-		if (ret)
-			return ret;
-	} else {  /* DLB 2.5 */
-		ret = dlb2_attach_credits(rsrcs,
-					  domain,
-					  args->num_credits,
-					  resp);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp,
-				  struct dlb2_hw *hw,
-				  struct dlb2_hw_domain **out_domain)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EFAULT;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-	if (hw->ver == DLB2_HW_V2_5) {
-		if (rsrcs->num_avail_entries < args->num_credits) {
-			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[2]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[3]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-			    args->num_ldb_credits);
-		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-			    args->num_dir_credits);
-	} else {
-		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
-			    args->num_credits);
-	}
-}
-
-/**
- * dlb2_hw_create_sched_domain() - create a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @args: scheduling domain creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a scheduling domain containing the resources specified
- * in args. The individual resources (queues, ports, credits) can be configured
- * after creating a scheduling domain.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the domain ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, or the requested domain name
- *	    is already in use.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
-	if (ret)
-		return ret;
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
-	       port->init_tkn_cnt;
-}
-
-static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			      struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
-
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		dlb2_drain_dir_cq(hw, port);
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
-}
-
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
-		port->init_tkn_cnt;
-}
-
-static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
-
-static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			dlb2_drain_ldb_cq(hw, port);
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	u32 aqed, ldb, atm;
-
-	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-						       queue->id.phys_id));
-	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-	atm = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
-
-	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
-	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
-	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
-}
-
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
-			if (queue->id.virt_id == id)
-				return queue;
-		}
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
-		if (queue->id.virt_id == id)
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
-		if (domain->id.virt_id == id)
-			return domain;
-	}
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 cq2qid;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
-
-	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
-		    & DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
-							  p->id.phys_id));
-	else
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
-							  p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
-	if (i == 1 || i == 5)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
-	if (i == 2 || i == 6)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
-	if (i == 3 || i == 7)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
-
-	if (i < 4)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
-
-	atm_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    atm_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(hw->ver,
-					q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(hw->ver,
-					 q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	u32 ctrl = 0;
-	u32 active;
-	u32 enq;
-
-	/* Set the atomic scheduling haswork bit */
-	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-							 queue->id.phys_id));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(active,
-				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
-				    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	enq = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						       queue->id.phys_id));
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(enq,
-				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
-		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	u32 infl_lim = 0;
-
-	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
-		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    infl_lim);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u32 infl_cnt;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Statically map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port, to
-	 * prevent spurious schedules from causing the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-	u32 infl_cnt;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
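
The 0/1/<0 convention documented above means a caller must treat a return
of 1 as "accepted but deferred", not as an error. A minimal caller-side
sketch, with invented names (this is not the PMD's actual handling):

#include <errno.h>
#include <stdio.h>

/* 0 = mapped now, 1 = deferred to the background worker, <0 = error. */
static int example_handle_map_result(int ret)
{
	if (ret < 0) {
		fprintf(stderr, "map failed: %d\n", ret);
		return ret;
	}
	if (ret == 1) {
		/* Mapping completes once the queue's inflights drain. */
		printf("map deferred\n");
		return 0;
	}
	printf("map completed immediately\n");
	return 0;
}

int main(void)
{
	example_handle_map_result(0);
	example_handle_map_result(1);
	example_handle_map_result(-EINVAL);
	return 0;
}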
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		u32 infl_cnt;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
-
-	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
-
-	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
-							 port_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver,
-						queue_id, port_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver,
-						  queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
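
Each QID2CQIDX register above carries the CQ-index bits for four ports:
port_id / 4 selects the register and port_id % 4 selects the lane inside
it, with the slot index picking the bit within that lane. A standalone
sketch of the index math, assuming 8 bits per port lane (the real lane
offsets come from the *_CQ_Pn_LOC register-map constants):

#include <stdio.h>

#define EX_BITS_PER_LANE 8	/* assumption for the example */

static unsigned int ex_qid2cq_reg_index(unsigned int port_id)
{
	return port_id / 4;			/* which register */
}

static unsigned int ex_qid2cq_bit(unsigned int port_id, unsigned int slot)
{
	return (port_id % 4) * EX_BITS_PER_LANE + slot;	/* which bit */
}

int main(void)
{
	unsigned int port_id = 13, slot = 2;

	printf("port %u slot %u -> register %u, bit %u\n",
	       port_id, slot,
	       ex_qid2cq_reg_index(port_id),
	       ex_qid2cq_bit(port_id, slot));
	return 0;
}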
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it was not manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
-						       port->id.phys_id));
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 vpp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 vpp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
-						       port->id.phys_id),
-				    int_en);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
-						      port->id.phys_id),
-				    wd_en);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-			    int_en);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
-			    wd_en);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    0);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	unsigned long max_ports;
-	int domain_offset;
-	RTE_SET_USED(iter);
-
-	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-
-	domain_offset = domain->id.phys_id * max_ports;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 chk_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
-							 port->id.phys_id),
-				    chk_en);
-		}
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int j;
-
-			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
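
The wait above is a bounded poll: re-read the CQ inflight count up to
DLB2_MAX_CQ_COMP_CHECK_LOOPS times and report an internal error if the CQ
never drains. The same pattern in isolation, with an invented loop bound
and condition callback (illustration only):

#include <stdbool.h>
#include <stdio.h>

#define EX_MAX_CHECK_LOOPS 4096	/* invented bound */

static int ex_wait_until(bool (*done)(void *arg), void *arg)
{
	int i;

	for (i = 0; i < EX_MAX_CHECK_LOOPS; i++) {
		if (done(arg))
			return 0;
	}

	return -1;	/* caller logs an internal error, as the driver does */
}

static bool ex_always_done(void *arg)
{
	(void)arg;
	return true;
}

int main(void)
{
	printf("wait result: %d\n", ex_wait_until(ex_always_done, NULL));
	return 0;
}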
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 pp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    pp_v);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 pp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    pp_v);
-		}
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queues' inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	if (hw->ver != DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
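
Several writes above index the per-vdev virtual producer port (VPP) tables
as vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id, i.e. one contiguous block of
entries per vdev keyed by the virtual port ID. A tiny sketch of that
addressing (the 64-port constant is an assumption for the example):

#include <stdio.h>

#define EX_MAX_LDB_PORTS 64	/* stand-in for DLB2_MAX_NUM_LDB_PORTS */

static unsigned int ex_ldb_vpp_index(unsigned int vdev_id, unsigned int virt_id)
{
	return vdev_id * EX_MAX_LDB_PORTS + virt_id;
}

int main(void)
{
	printf("vdev 2, virtual port 5 -> VPP entry %u\n",
	       ex_ldb_vpp_index(2, 5));
	return 0;
}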
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
-						       queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
-							 queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-	} else
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			ldb_port->cq_depth = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	if (hw->ver == DLB2_HW_V2_5) {
-		rsrcs->num_avail_entries += domain->num_credits;
-		domain->num_credits = 0;
-	} else {
-		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-		domain->num_ldb_credits = 0;
-
-		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-		domain->num_dir_credits = 0;
-	}
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
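
The credit hand-back near the end of the function is where the v2.0/v2.5
difference from the cover letter shows up in this file: v2.5 returns the
domain's credits to a single combined pool, while v2.0 returns load-balanced
and directed credits to separate pools. A simplified standalone model (the
ex_* structures carry only the fields relevant here):

#include <stdio.h>

struct ex_func_rsrcs {
	unsigned int num_avail_entries;		/* v2.5 combined pool */
	unsigned int num_avail_qed_entries;	/* v2.0 LDB pool */
	unsigned int num_avail_dqed_entries;	/* v2.0 DIR pool */
};

struct ex_domain {
	unsigned int num_credits;		/* v2.5 */
	unsigned int num_ldb_credits;		/* v2.0 */
	unsigned int num_dir_credits;		/* v2.0 */
};

static void ex_return_credits(struct ex_func_rsrcs *rsrcs,
			      struct ex_domain *dom, int is_v2_5)
{
	if (is_v2_5) {
		rsrcs->num_avail_entries += dom->num_credits;
		dom->num_credits = 0;
	} else {
		rsrcs->num_avail_qed_entries += dom->num_ldb_credits;
		rsrcs->num_avail_dqed_entries += dom->num_dir_credits;
		dom->num_ldb_credits = 0;
		dom->num_dir_credits = 0;
	}
}

int main(void)
{
	struct ex_func_rsrcs rsrcs = { 0, 0, 0 };
	struct ex_domain dom = { 128, 64, 32 };

	ex_return_credits(&rsrcs, &dom, 1);
	printf("v2.5 combined pool now holds %u credits\n",
	       rsrcs.num_avail_entries);
	return 0;
}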
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port = NULL;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
-					  typeof(*port));
-		if (port)
-			break;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - reset a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function resets and frees a DLB 2.0 scheduling domain and its associated
- * resources.
- *
- * Pre-condition: the driver must ensure software has stopped sending QEs
- * through this domain's producer ports before invoking this function, or
- * undefined behavior will result.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise.
- *
- * EINVAL - Invalid domain ID, or the domain is not configured.
- * EFAULT - Internal error. (Possibly caused if the pre-condition above is not
- *	    met.)
- * ETIMEDOUT - Hardware component didn't reset in the expected time.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions so
-	 * that any traffic sent to it is dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_ldb_cqs(hw, domain, false);
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	return dlb2_domain_reset_software_state(hw, domain);
-}
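
As the comment block above notes, software must stop sending QEs through the
domain's producer ports before the reset is invoked. The sketch below only
illustrates that ordering; both helpers are invented placeholders:

#include <stdio.h>

static void ex_stop_enqueues(unsigned int domain_id)
{
	printf("stop sending QEs through domain %u's producer ports\n",
	       domain_id);
}

static int ex_reset_domain(unsigned int domain_id)
{
	printf("reset domain %u\n", domain_id);
	return 0;
}

int main(void)
{
	unsigned int domain_id = 0;

	ex_stop_enqueues(domain_id);	/* pre-condition: must happen first */
	return ex_reset_domain(domain_id);
}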
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
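
The allocator above scans the sequence-number groups for one whose per-queue
SN count matches the request and that still has a free slot. A simplified
standalone model of that search; the group count, slot count and bitmap
representation are assumptions for the example:

#include <stdint.h>
#include <stdio.h>

#define EX_NUM_SN_GROUPS   2
#define EX_SLOTS_PER_GROUP 32

struct ex_sn_group {
	unsigned int sequence_numbers_per_queue;
	uint32_t slot_use_bitmap;
};

static int ex_sn_group_alloc_slot(struct ex_sn_group *grp)
{
	int i;

	for (i = 0; i < EX_SLOTS_PER_GROUP; i++) {
		if (!(grp->slot_use_bitmap & (1u << i))) {
			grp->slot_use_bitmap |= 1u << i;
			return i;
		}
	}
	return -1;
}

static int ex_attach_to_sn_group(struct ex_sn_group *groups,
				 unsigned int num_sequence_numbers,
				 int *out_group)
{
	int i, slot;

	for (i = 0; i < EX_NUM_SN_GROUPS; i++) {
		if (groups[i].sequence_numbers_per_queue !=
		    num_sequence_numbers)
			continue;
		slot = ex_sn_group_alloc_slot(&groups[i]);
		if (slot >= 0) {
			*out_group = i;
			return slot;
		}
	}
	return -1;
}

int main(void)
{
	struct ex_sn_group groups[EX_NUM_SN_GROUPS] = {
		{ .sequence_numbers_per_queue = 64 },
		{ .sequence_numbers_per_queue = 1024 },
	};
	int group = -1, slot;

	slot = ex_attach_to_sn_group(groups, 64, &group);
	printf("allocated group %d, slot %d\n", group, slot);
	return 0;
}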
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (!queue) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-	u32 reg = 0;
-	u32 alimit;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	DLB2_BITS_SET(reg, args->num_qid_inflights,
-		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), reg);
-
-	alimit = queue->aqed_limit;
-
-	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
-		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	reg = 0;
-	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	switch (args->lock_id_comp_level) {
-	case 64:
-		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 128:
-		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 256:
-		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 512:
-		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 1024:
-		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 2048:
-		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 4096:
-		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	default:
-		/* No compression by default */
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
-
-	reg = 0;
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
-
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue does not use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
-
-	/* Configure SNs */
-	reg = 0;
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
-	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
-	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
-	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_LDB_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.virt_id,
-			      DLB2_SYS_LDB_QID2VQID_VQID);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
-}
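
The switch above maps the accepted lock ID compression levels onto a 3-bit
code: 64 -> 1, 128 -> 2, ..., 4096 -> 7, with anything else (including 0 and
65536) left uncompressed. The same mapping expressed as a small function,
for illustration only:

#include <stdio.h>

static unsigned int ex_hid_width_code(unsigned int level)
{
	unsigned int code = 0;

	/* Only powers of two from 64 to 4096 get a compression code. */
	if (level < 64 || level > 4096 || (level & (level - 1)))
		return 0;

	while (level > 64) {
		level >>= 1;
		code++;
	}
	return code + 1;	/* 64 -> 1, 128 -> 2, ..., 4096 -> 7 */
}

int main(void)
{
	unsigned int levels[] = { 0, 64, 256, 4096, 65536 };
	unsigned int i;

	for (i = 0; i < sizeof(levels) / sizeof(levels[0]); i++)
		printf("level %u -> code %u\n", levels[i],
		       ex_hid_width_code(levels[i]));
	return 0;
}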
-
-/**
- * dlb2_hw_create_ldb_queue() - create a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    the domain has already been started, or the requested queue name is
- *	    already in use.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 hl_base = 0;
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg,
-		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	port->cq_depth = args->cq_depth;
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg,
-			      port->init_tkn_cnt,
-			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	reg = 0;
-	DLB2_BITS_SET(reg,
-		      port->hist_list_entry_limit - 1,
-		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
-
-	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
-		      DLB2_CHP_HIST_LIST_BASE_BASE);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    hl_base);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, args->cq_history_list_size,
-		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		reg = 0;
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_LDB_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	/* Disable the port's QID mappings */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
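
For reference, the if/else ladder in dlb2_ldb_port_configure_cq() encodes the CQ depth into the token depth select field: depths of 8 or fewer share the value 1, and each doubling from 16 up to 1024 adds one. A minimal standalone sketch of that relationship (illustrative only, not part of the patch; the helper name is hypothetical):

	#include <stdint.h>

	/* Token depth select: 1 for depth <= 8, then 16 -> 2, 32 -> 3, ..., 1024 -> 8. */
	static int cq_depth_to_token_depth_select(uint32_t depth)
	{
		int ds = 1;

		if (depth <= 8)
			return 1;

		while ((8u << (ds - 1)) < depth)
			ds++;

		return ds;
	}

The same encoding is reused for directed ports in dlb2_dir_port_configure_cq() further below.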
-
-static bool
-dlb2_cq_depth_is_valid(u32 depth)
-{
-	if (depth != 1 && depth != 2 &&
-	    depth != 4 && depth != 8 &&
-	    depth != 16 && depth != 32 &&
-	    depth != 64 && depth != 128 &&
-	    depth != 256 && depth != 512 &&
-	    depth != 1024)
-		return false;
-
-	return true;
-}
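
In other words, dlb2_cq_depth_is_valid() accepts exactly the powers of two from 1 through 1024. An equivalent check using the usual power-of-two bit trick would look like the following sketch (illustrative only, not part of the patch):

	#include <stdbool.h>
	#include <stdint.h>

	static bool cq_depth_is_valid_alt(uint32_t depth)
	{
		/* Exactly one bit set and within [1, 1024]. */
		return depth != 0 && depth <= 1024 && (depth & (depth - 1)) == 0;
	}

The explicit comparison chain in the driver reads more like the hardware documentation's list of supported depths, which is a reasonable trade-off.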
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_ldb_port **out_port,
-				 int *out_cos_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int i, id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		id = args->cos_id;
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-					  typeof(*port));
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-	}
-
-	if (!port) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = port;
-	*out_cos_id = id;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_ldb_port() - create a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port,
-					       &cos_id);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-	}
-
-	return NULL;
-}
-
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_dir_pq_pair **out_port)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->queue_id != -1) {
-		/*
-		 * If the user claims the queue is already configured, validate
-		 * the queue ID, its domain, and whether the queue is
-		 * configured.
-		 */
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->queue_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the port's queue is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = pq;
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg, port->init_tkn_cnt,
-			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_DIR_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - create a directed port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	unsigned int offs;
-	u32 reg = 0;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
-
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-			queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_DIR_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_dir_pq_pair **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->port_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the queue's port is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	*out_domain = domain;
-	*out_queue = pq;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - create a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-	}
-
-	return NULL;
-}
-
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	u32 cq2priov;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw,
-			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
-
-	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
-		    DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((args->priority & 0x7) << slot * 3) &
-		    DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
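
As the shifts above suggest, CQ2PRIOV appears to pack one valid bit per slot (starting at DLB2_LSP_CQ2PRIOV_V_LOC) and a 3-bit priority per slot in the low bits. A hypothetical decode helper, for reference only and not part of the patch:

	#include <stdint.h>

	/* Extract the 3-bit priority programmed for a given CQ slot. */
	static unsigned int cq2priov_slot_priority(uint32_t cq2priov, int slot)
	{
		return (cq2priov >> (slot * 3)) & 0x7;
	}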
-
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id,
-				    struct dlb2_hw_domain **out_domain,
-				    struct dlb2_ldb_port **out_port,
-				    struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-	*out_port = port;
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-/**
- * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: map QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to schedule QEs from the specified queue
- * to the specified port. Each load-balanced port can be mapped to up to 8
- * queues; each load-balanced queue can potentially map to all the
- * load-balanced ports.
- *
- * A successful return does not necessarily mean the mapping was configured. If
- * this function is unable to immediately map the queue to the port, it will
- * add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. In a sense, this is
- * an asynchronous function.
- *
- * This asynchronicity creates two views of the state of hardware: the actual
- * hardware state and the requested state (as if every request completed
- * immediately). If there are any pending map/unmap operations, the requested
- * state will differ from the actual state. All validation is performed with
- * respect to the pending state; for instance, if there are 8 pending map
- * operations for port X, a request for a 9th will fail because a load-balanced
- * port can only map up to 8 queues.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id,
-				       &domain,
-				       &port,
-				       &queue);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state new_st;
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, new_st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id,
-				      struct dlb2_hw_domain **out_domain,
-				      struct dlb2_ldb_port **out_port,
-				      struct dlb2_ldb_queue **out_queue)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		goto done;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-
-done:
-	*out_domain = domain;
-	*out_port = port;
-	*out_queue = queue;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: unmap QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to stop scheduling QEs from the specified
- * queue to the specified port.
- *
- * A successful return does not necessarily mean the mapping was removed. If
- * this function is unable to immediately unmap the queue from the port, it
- * will add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. See
- * dlb2_hw_map_qid() for more details.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id,
-					 &domain,
-					 &port,
-					 &queue);
-	if (ret)
-		return ret;
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-/**
- * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
- *	progress.
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: number of unmaps in progress args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the number of unmaps in progress.
- *
- * Errors:
- * EINVAL - Invalid port ID.
- */
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
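
Taken together, dlb2_hw_unmap_qid() and dlb2_hw_pending_port_unmaps() give callers an asynchronous unmap interface: request the unmap, then poll until no removals remain pending. A hypothetical caller-side sketch (not part of the patch; it assumes a valid hw handle, domain ID, port ID and queue ID, and a PF-originated request):

	static int unmap_qid_and_wait(struct dlb2_hw *hw, u32 domain_id,
				      u32 port_id, u32 qid)
	{
		struct dlb2_unmap_qid_args unmap_args = { 0 };
		struct dlb2_pending_port_unmaps_args poll_args = { 0 };
		struct dlb2_cmd_response resp = { 0 };
		int ret;

		unmap_args.port_id = port_id;
		unmap_args.qid = qid;

		ret = dlb2_hw_unmap_qid(hw, domain_id, &unmap_args, &resp, false, 0);
		if (ret)
			return ret;

		poll_args.port_id = port_id;

		/* resp.id reports the number of unmaps still in progress. */
		do {
			ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &poll_args,
							  &resp, false, 0);
		} while (ret == 0 && resp.id != 0);

		return ret;
	}

A real caller would bound or sleep in the polling loop, since completion can depend on the application draining inflights from the CQ.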
-
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id,
-					 struct dlb2_hw_domain **out_domain)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - start a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: start domain arguments.
 * @args: start domain arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function starts a scheduling domain, which allows applications to send
- * traffic through it. Once a domain is started, its resources can no longer be
- * configured (besides QID remapping and port enable/disable).
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - the domain is not configured, or the domain is already started.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *args,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(args);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id,
-					    &domain);
-	if (ret)
-		return ret;
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
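
The start call is the last configuration step: once domain->started is set, queue and port creation are rejected with DLB2_ST_DOMAIN_STARTED, while QID remapping and port enable/disable remain allowed. A hypothetical ordering sketch for a single queue/port pair (not part of the patch; the argument structures are assumed to be filled in by the caller, and error unwinding is omitted):

	static int configure_and_start_domain(struct dlb2_hw *hw, u32 dom_id,
					      struct dlb2_create_ldb_queue_args *qargs,
					      struct dlb2_create_ldb_port_args *pargs,
					      uintptr_t cq_dma_base,
					      struct dlb2_map_qid_args *margs,
					      struct dlb2_start_domain_args *sargs)
	{
		struct dlb2_cmd_response resp = { 0 };
		int ret;

		ret = dlb2_hw_create_ldb_queue(hw, dom_id, qargs, &resp, false, 0);
		if (ret)
			return ret;
		margs->qid = resp.id;

		ret = dlb2_hw_create_ldb_port(hw, dom_id, pargs, cq_dma_base,
					      &resp, false, 0);
		if (ret)
			return ret;
		margs->port_id = resp.id;

		ret = dlb2_hw_map_qid(hw, dom_id, margs, &resp, false, 0);
		if (ret)
			return ret;

		/* Start last: a started domain only accepts remaps and
		 * port enable/disable.
		 */
		return dlb2_hw_start_domain(hw, dom_id, sargs, &resp, false, 0);
	}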
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
-
-/**
- * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
- *	ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
-
-/**
- * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the configured number of sequence numbers per queue
- * for the specified group.
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-/**
- * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the group's number of in-use slots (i.e. load-balanced
- * queues using the specified group).
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						u32 group_id,
-						u32 val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
-}
-
-/**
- * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- * @val: requested amount of sequence numbers per queue.
- *
- * This function configures the group's number of sequence numbers per queue.
- * val can be a power-of-two between 32 and 1024, inclusive. This setting can
- * be configured until the first ordered load-balanced queue is configured, at
- * which point the configuration is locked.
- *
- * Return:
- * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
- * ordered queue is configured.
- */
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    u32 group_id,
-				    u32 val)
-{
-	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	struct dlb2_sn_group *group;
-	u32 sn_mode = 0;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
-
-	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 21/26] event/dlb2: use new implementation of HW types header
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (19 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 20/26] event/dlb2: use new implementation of resource file Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 22/26] event/dlb2: use new combined register map Timothy McDaniel
                       ` (4 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

As support for DLB v2.5 was added, modifications were made to
dlb2_hw_types_new.h, but the old file had to be preserved during
the port in order to meet the requirement that each individual
patch in the series compile successfully. Now that the DLB v2.5
support is completely integrated, it is safe to remove the old
(original) file, as well as the DLB2_USE_NEW_HEADERS define that
selected which version of the file was included by certain source
files. The new file can now be renamed and used unconditionally in
all DLB source files.
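
For context, the header being renamed is the consolidated version
that serves both hardware generations; for example, it folds the two
credit-accounting schemes into anonymous unions so that v2.0 code
keeps its split load-balanced/directed pools while v2.5 code uses a
single combined pool. A minimal standalone sketch of that layout
(the union and field names follow dlb2_hw_types.h; the wrapper
struct name, version enum usage and credit values are illustrative
only):

/* Sketch: one credit-accounting layout serving DLB v2.0 and v2.5.
 * The anonymous-union layout mirrors struct dlb2_hw_domain in
 * dlb2_hw_types.h; the wrapper struct name and the values are
 * illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

enum dlb2_hw_ver { DLB2_HW_V2, DLB2_HW_V2_5 };

struct domain_credits {
	union {
		struct {
			u32 num_ldb_credits;	/* v2.0: load-balanced pool */
			u32 num_dir_credits;	/* v2.0: directed pool */
		};
		struct {
			u32 num_credits;	/* v2.5: combined pool */
		};
	};
};

static void assign_credits(struct domain_credits *dom, enum dlb2_hw_ver ver)
{
	if (ver == DLB2_HW_V2) {
		dom->num_ldb_credits = 8192;
		dom->num_dir_credits = 2048;
	} else {
		/* v2.5 aliases the same storage as num_ldb_credits */
		dom->num_credits = 16384;
	}
}

int main(void)
{
	struct domain_credits dom = {0};

	assign_credits(&dom, DLB2_HW_V2_5);
	printf("combined credits: %u\n", (unsigned int)dom.num_credits);

	return 0;
}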

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h    |  38 +-
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 357 ------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c    |   4 +-
 drivers/event/dlb2/pf/dlb2_main.c             |   4 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 -
 drivers/event/dlb2/pf/dlb2_pf.c               |   4 +-
 6 files changed, 33 insertions(+), 378 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index b007e1674..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -2,14 +2,21 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#ifndef __DLB2_HW_TYPES_H
-#define __DLB2_HW_TYPES_H
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
 
 #include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
 
 #define DLB2_MAX_NUM_VDEVS			16
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
@@ -141,7 +148,7 @@ struct dlb2_dir_pq_pair {
 };
 
 enum dlb2_qid_map_state {
-	/* The slot doesn't contain a valid queue mapping */
+	/* The slot does not contain a valid queue mapping */
 	DLB2_QUEUE_UNMAPPED,
 	/* The slot contains a valid queue mapping */
 	DLB2_QUEUE_MAPPED,
@@ -174,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
@@ -245,8 +253,15 @@ struct dlb2_hw_domain {
 	u32 avail_hist_list_entries;
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_offset;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u32 num_used_aqed_entries;
 	struct dlb2_resource_id id;
@@ -269,8 +284,15 @@ struct dlb2_function_resources {
 	u32 num_avail_ldb_queues;
 	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
 	u32 num_avail_dir_pq_pairs;
-	u32 num_avail_qed_entries;
-	u32 num_avail_dqed_entries;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u8 locked; /* (VDEV only) */
 };
@@ -332,4 +354,4 @@ struct dlb2_hw {
 	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
 };
 
-#endif /* __DLB2_HW_TYPES_H */
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
deleted file mode 100644
index 4a6037775..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ /dev/null
@@ -1,357 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_HW_TYPES_NEW_H
-#define __DLB2_HW_TYPES_NEW_H
-
-#include "../../dlb2_priv.h"
-#include "dlb2_user.h"
-
-#include "dlb2_osdep_list.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-
-#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
-				 | (((val) << (mask##_LOC)) & (mask)))
-#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
-#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
-#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
-
-#define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_NUM_ARB_WEIGHTS			8
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_WEIGHT				255
-#define DLB2_NUM_COS_DOMAINS			4
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
-#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-
-#define DLB2_FUNC_BAR				0
-#define DLB2_CSR_BAR				2
-
-#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
-#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
-
-#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
-#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
-
-#define DLB2_ALARM_HW_SOURCE_SYS 0
-#define DLB2_ALARM_HW_SOURCE_DLB 1
-
-#define DLB2_ALARM_HW_UNIT_CHP 4
-
-#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
-#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
-#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
-#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
-#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
-
-/*
- * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
- * the PF driver.
- */
-#define DLB2_DRV_LDB_PP_BASE   0x2300000
-#define DLB2_DRV_LDB_PP_STRIDE 0x1000
-#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
-				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_DRV_DIR_PP_BASE   0x2200000
-#define DLB2_DRV_DIR_PP_STRIDE 0x1000
-#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
-				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
-#define DLB2_LDB_PP_BASE       0x2100000
-#define DLB2_LDB_PP_STRIDE     0x1000
-#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
-				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
-#define DLB2_DIR_PP_BASE       0x2000000
-#define DLB2_DIR_PP_STRIDE     0x1000
-#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * \
-				DLB2_MAX_NUM_DIR_PORTS_V2_5)
-#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
-
-struct dlb2_resource_id {
-	u32 phys_id;
-	u32 virt_id;
-	u8 vdev_owned;
-	u8 vdev_id;
-};
-
-struct dlb2_freelist {
-	u32 base;
-	u32 bound;
-	u32 offset;
-};
-
-static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
-{
-	return list->bound - list->base - list->offset;
-}
-
-struct dlb2_hcw {
-	u64 data;
-	/* Word 3 */
-	u16 opaque;
-	u8 qid;
-	u8 sched_type:2;
-	u8 priority:3;
-	u8 msg_type:3;
-	/* Word 4 */
-	u16 lock_id;
-	u8 ts_flag:1;
-	u8 rsvd1:2;
-	u8 no_dec:1;
-	u8 cmp_id:4;
-	u8 cq_token:1;
-	u8 qe_comp:1;
-	u8 qe_frag:1;
-	u8 qe_valid:1;
-	u8 int_arm:1;
-	u8 error:1;
-	u8 rsvd:2;
-};
-
-struct dlb2_ldb_queue {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 num_qid_inflights;
-	u32 aqed_limit;
-	u32 sn_group; /* sn == sequence number */
-	u32 sn_slot;
-	u32 num_mappings;
-	u8 sn_cfg_valid;
-	u8 num_pending_additions;
-	u8 owned;
-	u8 configured;
-};
-
-/*
- * Directed ports and queues are paired by nature, so the driver tracks them
- * with a single data structure.
- */
-struct dlb2_dir_pq_pair {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 queue_configured;
-	u8 port_configured;
-	u8 owned;
-	u8 enabled;
-};
-
-enum dlb2_qid_map_state {
-	/* The slot does not contain a valid queue mapping */
-	DLB2_QUEUE_UNMAPPED,
-	/* The slot contains a valid queue mapping */
-	DLB2_QUEUE_MAPPED,
-	/* The driver is mapping a queue into this slot */
-	DLB2_QUEUE_MAP_IN_PROG,
-	/* The driver is unmapping a queue from this slot */
-	DLB2_QUEUE_UNMAP_IN_PROG,
-	/*
-	 * The driver is unmapping a queue from this slot, and once complete
-	 * will replace it with another mapping.
-	 */
-	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
-};
-
-struct dlb2_ldb_port_qid_map {
-	enum dlb2_qid_map_state state;
-	u16 qid;
-	u16 pending_qid;
-	u8 priority;
-	u8 pending_priority;
-};
-
-struct dlb2_ldb_port {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	/* The qid_map represents the hardware QID mapping state. */
-	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_limit;
-	u32 ref_cnt;
-	u8 cq_depth;
-	u8 init_tkn_cnt;
-	u8 num_pending_removals;
-	u8 num_mappings;
-	u8 owned;
-	u8 enabled;
-	u8 configured;
-};
-
-struct dlb2_sn_group {
-	u32 mode;
-	u32 sequence_numbers_per_queue;
-	u32 slot_use_bitmap;
-	u32 id;
-};
-
-static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
-{
-	const u32 mask[] = {
-		0x0000ffff,  /* 64 SNs per queue */
-		0x000000ff,  /* 128 SNs per queue */
-		0x0000000f,  /* 256 SNs per queue */
-		0x00000003,  /* 512 SNs per queue */
-		0x00000001}; /* 1024 SNs per queue */
-
-	return group->slot_use_bitmap == mask[group->mode];
-}
-
-static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
-{
-	const u32 bound[] = {16, 8, 4, 2, 1};
-	u32 i;
-
-	for (i = 0; i < bound[group->mode]; i++) {
-		if (!(group->slot_use_bitmap & (1 << i))) {
-			group->slot_use_bitmap |= 1 << i;
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static inline void
-dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
-{
-	group->slot_use_bitmap &= ~(1 << slot);
-}
-
-static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
-{
-	int i, cnt = 0;
-
-	for (i = 0; i < 32; i++)
-		cnt += !!(group->slot_use_bitmap & (1 << i));
-
-	return cnt;
-}
-
-struct dlb2_hw_domain {
-	struct dlb2_function_resources *parent_func;
-	struct dlb2_list_entry func_list;
-	struct dlb2_list_head used_ldb_queues;
-	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head used_dir_pq_pairs;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	u32 total_hist_list_entries;
-	u32 avail_hist_list_entries;
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_offset;
-	union {
-		struct {
-			u32 num_ldb_credits;
-			u32 num_dir_credits;
-		};
-		struct {
-			u32 num_credits;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u32 num_used_aqed_entries;
-	struct dlb2_resource_id id;
-	int num_pending_removals;
-	int num_pending_additions;
-	u8 configured;
-	u8 started;
-};
-
-struct dlb2_bitmap;
-
-struct dlb2_function_resources {
-	struct dlb2_list_head avail_domains;
-	struct dlb2_list_head used_domains;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	struct dlb2_bitmap *avail_hist_list_entries;
-	u32 num_avail_domains;
-	u32 num_avail_ldb_queues;
-	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	u32 num_avail_dir_pq_pairs;
-	union {
-		struct {
-			u32 num_avail_qed_entries;
-			u32 num_avail_dqed_entries;
-		};
-		struct {
-			u32 num_avail_entries;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u8 locked; /* (VDEV only) */
-};
-
-/*
- * After initialization, each resource in dlb2_hw_resources is located in one
- * of the following lists:
- * -- The PF's available resources list. These are unconfigured resources owned
- *	by the PF and not allocated to a dlb2 scheduling domain.
- * -- A VDEV's available resources list. These are VDEV-owned unconfigured
- *	resources not allocated to a dlb2 scheduling domain.
- * -- A domain's available resources list. These are domain-owned unconfigured
- *	resources.
- * -- A domain's used resources list. These are domain-owned configured
- *	resources.
- *
- * A resource moves to a new list when a VDEV or domain is created or destroyed,
- * or when the resource is configured.
- */
-struct dlb2_hw_resources {
-	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
-	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
-	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
-};
-
-struct dlb2_mbox {
-	u32 *mbox;
-	u32 *isr_in_progress;
-};
-
-struct dlb2_sw_mbox {
-	struct dlb2_mbox vdev_to_pf;
-	struct dlb2_mbox pf_to_vdev;
-	void (*pf_to_vdev_inject)(void *arg);
-	void *pf_to_vdev_inject_arg;
-};
-
-struct dlb2_hw {
-	uint8_t ver;
-
-	/* BAR 0 address */
-	void *csr_kva;
-	unsigned long csr_phys_addr;
-	/* BAR 2 address */
-	void *func_kva;
-	unsigned long func_phys_addr;
-
-	/* Resource tracking */
-	struct dlb2_hw_resources rsrcs;
-	struct dlb2_function_resources pf;
-	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
-	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
-	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
-
-	/* Virtualization */
-	int virt_mode;
-	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
-	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
-};
-
-#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2f66b2c71..54b0207db 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,11 +2,9 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types_new.h"
+#include "dlb2_hw_types.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index bac07f097..1f6ccf8e4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,10 +13,8 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "base/dlb2_regs_new.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 892298d7a..9eeda482a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,11 +12,7 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
-#ifdef DLB2_USE_NEW_HEADERS
-#include "base/dlb2_hw_types_new.h"
-#else
 #include "base/dlb2_hw_types.h"
-#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 880964a29..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,11 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_osdep.h"
 #include "base/dlb2_resource.h"
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 22/26] event/dlb2: use new combined register map
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (20 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 21/26] event/dlb2: use new implementation of HW types header Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 23/26] event/dlb2: update xstats for v2.5 Timothy McDaniel
                       ` (3 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

All references to the old register map have been removed, so it
is now safe to rename the new combined register file, which
supports both DLB v2.0 and DLB v2.5, and to fix up every place
where the file is included.
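
For reference, the combined register file replaces the old
union/bitfield register definitions with flat mask and _LOC (bit
position) defines that the DLB2_BITS_SET/DLB2_BITS_GET macros
operate on. A minimal standalone sketch of the convention (the
macros and the DLB2_SYS_LDB_CQ_ISR field defines are taken from
the patch; the main() harness and the chosen field values are
illustrative only):

/* Sketch: build a register value with the new mask/_LOC defines.
 * The macros and field defines are copied from the patch; main()
 * and the values are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
				 | (((val) << (mask##_LOC)) & (mask)))
#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))

/* Field masks and bit locations for DLB2_SYS_LDB_CQ_ISR */
#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10

#define DLB2_CQ_ISR_MODE_MSIX 2

int main(void)
{
	u32 reg = 0;

	/* Old style: r0.field.vector = 5; r0.field.en_code = 2; use r0.val */
	DLB2_BITS_SET(reg, 5, DLB2_SYS_LDB_CQ_ISR_VECTOR);
	DLB2_BITS_SET(reg, DLB2_CQ_ISR_MODE_MSIX, DLB2_SYS_LDB_CQ_ISR_EN_CODE);

	/* reg == 0x00000805: vector in bits [5:0], en_code in bits [11:10] */
	printf("reg=0x%08x vector=%u en_code=%u\n",
	       (unsigned int)reg,
	       (unsigned int)DLB2_BITS_GET(reg, DLB2_SYS_LDB_CQ_ISR_VECTOR),
	       (unsigned int)DLB2_BITS_GET(reg, DLB2_SYS_LDB_CQ_ISR_EN_CODE));

	return 0;
}

In the driver proper, the resulting value would then be written
with DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ISR(port), reg), exactly as
the old union-based code wrote r0.val.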

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |    2 +-
 drivers/event/dlb2/pf/base/dlb2_regs.h     | 5955 +++++++++++++-------
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 --------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |    2 +-
 drivers/event/dlb2/pf/dlb2_main.c          |    2 +-
 5 files changed, 3869 insertions(+), 6396 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 4a6037775..6b8fee341 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -10,7 +10,7 @@
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 
 #define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
 				 | (((val) << (mask##_LOC)) & (mask)))
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
index 43ecad4f8..7167f3d2f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_regs.h
+++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
@@ -7,553 +7,550 @@
 
 #include "dlb2_osdep_types.h"
 
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
 	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
 	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
 	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_flr_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
 	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
-union dlb2_func_pf_vf2pf_isr_pend {
-	struct {
-		u32 isr_pend : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
 	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
 	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
 	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-union dlb2_func_pf_vf_reset_in_progress {
-	struct {
-		u32 vf0_reset_in_progress : 1;
-		u32 vf1_reset_in_progress : 1;
-		u32 vf2_reset_in_progress : 1;
-		u32 vf3_reset_in_progress : 1;
-		u32 vf4_reset_in_progress : 1;
-		u32 vf5_reset_in_progress : 1;
-		u32 vf6_reset_in_progress : 1;
-		u32 vf7_reset_in_progress : 1;
-		u32 vf8_reset_in_progress : 1;
-		u32 vf9_reset_in_progress : 1;
-		u32 vf10_reset_in_progress : 1;
-		u32 vf11_reset_in_progress : 1;
-		u32 vf12_reset_in_progress : 1;
-		u32 vf13_reset_in_progress : 1;
-		u32 vf14_reset_in_progress : 1;
-		u32 vf15_reset_in_progress : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
 	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
-union dlb2_msix_mem_vector_ctrl {
-	struct {
-		u32 vec_mask : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
 
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
 	(0x20 + (x) * 0x4)
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-union dlb2_iosf_func_vf_bar_dsbl {
-	struct {
-		u32 func_vf_bar_dis : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_VAS 0x1000011c
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
 #define DLB2_SYS_TOTAL_VAS_RST 0x20
-union dlb2_sys_total_vas {
-	struct {
-		u32 total_vas : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
-#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
-union dlb2_sys_total_dir_ports {
-	struct {
-		u32 total_dir_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
-#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
-union dlb2_sys_total_ldb_ports {
-	struct {
-		u32 total_ldb_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
-#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
-union dlb2_sys_total_dir_qid {
-	struct {
-		u32 total_dir_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
-#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
-union dlb2_sys_total_ldb_qid {
-	struct {
-		u32 total_ldb_qid : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
 
 #define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
 #define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-union dlb2_sys_total_dir_crds {
-	struct {
-		u32 total_dir_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
 
 #define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
 #define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-union dlb2_sys_total_ldb_crds {
-	struct {
-		u32 total_ldb_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
 
 #define DLB2_SYS_ALARM_PF_SYND2 0x10000508
 #define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-union dlb2_sys_alarm_pf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 meas : 1;
-		u32 debug : 7;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 cq_int_rearm : 1;
-		u32 dsi_error : 1;
-		u32 rsvd0 : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
 
 #define DLB2_SYS_ALARM_PF_SYND1 0x10000504
 #define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-union dlb2_sys_alarm_pf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_PF_SYND0 0x10000500
 #define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-union dlb2_sys_alarm_pf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 rsvd0 : 3;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
 
 #define DLB2_SYS_VF_LDB_VPP_V(x) \
 	(0x10000f00 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-union dlb2_sys_vf_ldb_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VPP2PP(x) \
 	(0x10000f04 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-union dlb2_sys_vf_ldb_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_DIR_VPP_V(x) \
 	(0x10000f08 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-union dlb2_sys_vf_dir_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VPP2PP(x) \
 	(0x10000f0c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-union dlb2_sys_vf_dir_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_LDB_VQID_V(x) \
 	(0x10000f10 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-union dlb2_sys_vf_ldb_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VQID2QID(x) \
 	(0x10000f14 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-union dlb2_sys_vf_ldb_vqid2qid {
-	struct {
-		u32 qid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_QID2VQID(x) \
 	(0x10000f18 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID2VQID_RST 0x0
-union dlb2_sys_ldb_qid2vqid {
-	struct {
-		u32 vqid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
 
 #define DLB2_SYS_VF_DIR_VQID_V(x) \
 	(0x10000f1c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-union dlb2_sys_vf_dir_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VQID2QID(x) \
 	(0x10000f20 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-union dlb2_sys_vf_dir_vqid2qid {
-	struct {
-		u32 qid : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_VASQID_V(x) \
 	(0x10000f24 + (x) * 0x1000)
 #define DLB2_SYS_LDB_VASQID_V_RST 0x0
-union dlb2_sys_ldb_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_VASQID_V(x) \
 	(0x10000f28 + (x) * 0x1000)
 #define DLB2_SYS_DIR_VASQID_V_RST 0x0
-union dlb2_sys_dir_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_ALARM_VF_SYND2(x) \
 	(0x10000f48 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-union dlb2_sys_alarm_vf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 debug : 8;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 isz : 1;
-		u32 dsi_error : 1;
-		u32 dlbrsvd : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
 
 #define DLB2_SYS_ALARM_VF_SYND1(x) \
 	(0x10000f44 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-union dlb2_sys_alarm_vf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_VF_SYND0(x) \
 	(0x10000f40 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-union dlb2_sys_alarm_vf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 vf_synd0_parity : 1;
-		u32 vf_synd1_parity : 1;
-		u32 vf_synd2_parity : 1;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
 
 #define DLB2_SYS_LDB_QID_CFG_V(x) \
 	(0x10000f58 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-union dlb2_sys_ldb_qid_cfg_v {
-	struct {
-		u32 sn_cfg_v : 1;
-		u32 fid_cfg_v : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_QID_ITS(x) \
 	(0x10000f54 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_ITS_RST 0x0
-union dlb2_sys_ldb_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_QID_V(x) \
 	(0x10000f50 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_V_RST 0x0
-union dlb2_sys_ldb_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_ITS(x) \
 	(0x10000f64 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_ITS_RST 0x0
-union dlb2_sys_dir_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_V(x) \
 	(0x10000f60 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_V_RST 0x0
-union dlb2_sys_dir_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_CQ_AI_DATA(x) \
 	(0x10000fa8 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-union dlb2_sys_ldb_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
 	(0x10000fa4 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_ldb_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_PASID(x) \
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
 	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
 #define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-union dlb2_sys_ldb_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_LDB_CQ_AT(x) \
 	(0x10000f9c + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AT_RST 0x0
-union dlb2_sys_ldb_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_CQ_ISR(x) \
 	(0x10000f98 + (x) * 0x1000)
@@ -563,497 +560,891 @@ union dlb2_sys_ldb_cq_at {
 #define DLB2_CQ_ISR_MODE_MSI  1
 #define DLB2_CQ_ISR_MODE_MSIX 2
 #define DLB2_CQ_ISR_MODE_ADI  3
-union dlb2_sys_ldb_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
 	(0x10000f94 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_ldb_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_PP_V(x) \
 	(0x10000f90 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP_V_RST 0x0
-union dlb2_sys_ldb_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_PP2VDEV(x) \
 	(0x10000f8c + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-union dlb2_sys_ldb_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_LDB_PP2VAS(x) \
 	(0x10000f88 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VAS_RST 0x0
-union dlb2_sys_ldb_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_CQ_ADDR_U(x) \
 	(0x10000f84 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-union dlb2_sys_ldb_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_LDB_CQ_ADDR_L(x) \
 	(0x10000f80 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-union dlb2_sys_ldb_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
 
 #define DLB2_SYS_DIR_CQ_FMT(x) \
 	(0x10000fec + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-union dlb2_sys_dir_cq_fmt {
-	struct {
-		u32 keep_pf_ppid : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_CQ_AI_DATA(x) \
 	(0x10000fe8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-union dlb2_sys_dir_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
 	(0x10000fe4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_dir_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_PASID(x) \
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
 	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
 #define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-union dlb2_sys_dir_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_DIR_CQ_AT(x) \
 	(0x10000fdc + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AT_RST 0x0
-union dlb2_sys_dir_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_DIR_CQ_ISR(x) \
 	(0x10000fd8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-union dlb2_sys_dir_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
 	(0x10000fd4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_dir_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_DIR_PP_V(x) \
 	(0x10000fd0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP_V_RST 0x0
-union dlb2_sys_dir_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_PP2VDEV(x) \
 	(0x10000fcc + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-union dlb2_sys_dir_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_DIR_PP2VAS(x) \
 	(0x10000fc8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VAS_RST 0x0
-union dlb2_sys_dir_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_DIR_CQ_ADDR_U(x) \
 	(0x10000fc4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-union dlb2_sys_dir_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_DIR_CQ_ADDR_L(x) \
 	(0x10000fc0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-union dlb2_sys_dir_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
 
 #define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
 #define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-union dlb2_sys_ingress_alarm_enbl {
-	struct {
-		u32 illegal_hcw : 1;
-		u32 illegal_pp : 1;
-		u32 illegal_pasid : 1;
-		u32 illegal_qid : 1;
-		u32 disabled_qid : 1;
-		u32 illegal_ldb_qid_cfg : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
 
 #define DLB2_SYS_MSIX_ACK 0x10000400
 #define DLB2_SYS_MSIX_ACK_RST 0x0
-union dlb2_sys_msix_ack {
-	struct {
-		u32 msix_0_ack : 1;
-		u32 msix_1_ack : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_PASSTHRU 0x10000404
 #define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-union dlb2_sys_msix_passthru {
-	struct {
-		u32 msix_0_passthru : 1;
-		u32 msix_1_passthru : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_MODE 0x10000408
 #define DLB2_SYS_MSIX_MODE_RST 0x0
 /* MSI-X Modes */
 #define DLB2_MSIX_MODE_PACKED     0
 #define DLB2_MSIX_MODE_COMPRESSED 1
-union dlb2_sys_msix_mode {
-	struct {
-		u32 mode : 1;
-		u32 poll_mode : 1;
-		u32 poll_mask : 1;
-		u32 poll_lock : 1;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
 
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
 #define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-union dlb2_sys_dir_cq_opt_clr {
-	struct {
-		u32 cq : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
 
 #define DLB2_SYS_ALARM_HW_SYND 0x1000050c
 #define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-union dlb2_sys_alarm_hw_synd {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 alarm : 1;
-		u32 cwd : 1;
-		u32 vf_pf_mb : 1;
-		u32 rsvd0 : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
 	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
-union dlb2_aqed_pipe_qid_fid_lim {
-	struct {
-		u32 qid_fid_limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
 	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
-union dlb2_aqed_pipe_qid_hid_width {
-	struct {
-		u32 compress_code : 3;
-		u32 rsvd0 : 29;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_ATM_QID2CQIDIX_00(x) \
 	(0x30080000 + (x) * 0x1000)
@@ -1061,1467 +1452,2853 @@ union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
 #define DLB2_ATM_QID2CQIDIX(x, y) \
 	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
 #define DLB2_ATM_QID2CQIDIX_NUM 16
-union dlb2_atm_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_rdy_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_sched_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
 	(0x40000000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_dir_vas_crd {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
 
 #define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
 	(0x40080000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_ldb_vas_crd {
-	struct {
-		u32 count : 15;
-		u32 rsvd0 : 17;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN(x) \
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
 	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
 #define DLB2_CHP_ORD_QID_SN_RST 0x0
-union dlb2_chp_ord_qid_sn {
-	struct {
-		u32 sn : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN_MAP(x) \
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
 	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
 #define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-union dlb2_chp_ord_qid_sn_map {
-	struct {
-		u32 mode : 3;
-		u32 slot : 4;
-		u32 rsvz0 : 1;
-		u32 grp : 1;
-		u32 rsvz1 : 1;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_SN_CHK_ENBL(x) \
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
 	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
 #define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-union dlb2_chp_sn_chk_enbl {
-	struct {
-		u32 en : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_DEPTH(x) \
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
 	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
 #define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-union dlb2_chp_dir_cq_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
 	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_dir_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
 	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
 #define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-union dlb2_chp_dir_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
 	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
 #define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_dir_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
 	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_dir_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
 	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
 #define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-union dlb2_chp_dir_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WPTR(x) \
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
 	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
 #define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-union dlb2_chp_dir_cq_wptr {
-	struct {
-		u32 write_pointer : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ2VAS(x) \
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
 	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
 #define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-union dlb2_chp_dir_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_BASE(x) \
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
 	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
 #define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-union dlb2_chp_hist_list_base {
-	struct {
-		u32 base : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_LIM(x) \
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
 	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
 #define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-union dlb2_chp_hist_list_lim {
-	struct {
-		u32 limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
 	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
 #define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-union dlb2_chp_hist_list_pop_ptr {
-	struct {
-		u32 pop_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
 	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
 #define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-union dlb2_chp_hist_list_push_ptr {
-	struct {
-		u32 push_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_DEPTH(x) \
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
 	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
 #define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-union dlb2_chp_ldb_cq_depth {
-	struct {
-		u32 depth : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
 	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_ldb_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
 	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
 #define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-union dlb2_chp_ldb_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
 	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
 #define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_ldb_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
 	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_ldb_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
 	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
 #define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-union dlb2_chp_ldb_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WPTR(x) \
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
 	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
 #define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-union dlb2_chp_ldb_cq_wptr {
-	struct {
-		u32 write_pointer : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ2VAS(x) \
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
 	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
 #define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-union dlb2_chp_ldb_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
 
 #define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
 #define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-union dlb2_chp_cfg_chp_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 dlb_cor_alarm_enable : 1;
-		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
-		u32 cfg_64bytes_qe_dir_cq_mode : 1;
-		u32 pad_write_ldb : 1;
-		u32 pad_write_dir : 1;
-		u32 pad_first_write_ldb : 1;
-		u32 pad_first_write_dir : 1;
-		u32 rsvz0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_dir_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_dir_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_dir_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
 #define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-union dlb2_chp_cfg_dir_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
 #define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-union dlb2_chp_cfg_dir_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_dir_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
 #define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_dir_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_ldb_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
 #define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
 #define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_ldb_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
 #define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_ldb_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CTRL_DIAG_02 0x4c000028
 #define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-union dlb2_chp_ctrl_diag_02 {
-	struct {
-		u32 egress_credit_status_empty : 1;
-		u32 egress_credit_status_afull : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
-		u32 chp_lsp_tok_pipe_credit_status_empty : 1;
-		u32 chp_lsp_tok_pipe_credit_status_afull : 1;
-		u32 chp_rop_pipe_credit_status_empty : 1;
-		u32 chp_rop_pipe_credit_status_afull : 1;
-		u32 qed_to_cq_pipe_credit_status_empty : 1;
-		u32 qed_to_cq_pipe_credit_status_afull : 1;
-		u32 egress_lsp_token_credit_status_empty : 1;
-		u32 egress_lsp_token_credit_status_afull : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
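Where only the field layout differs between hardware versions, as with
CTRL_DIAG_02 above, the register keeps a single address and the field masks
carry _V2/_V2_5 suffixes instead. A sketch of reading a v2.5-only field,
with DLB2_HW_V2_5, DLB2_CSR_RD, u32 and struct dlb2_hw assumed from the
rest of the driver:

static inline u32
dlb2_chp_freelist_size(struct dlb2_hw *hw)
{
	u32 diag;

	/* Bits 31:17 report the free-list size only on v2.5; on v2.0
	 * the same bits fall inside RSVD0_V2.
	 */
	if (hw->ver != DLB2_HW_V2_5)
		return 0;

	diag = DLB2_CSR_RD(hw, DLB2_CHP_CTRL_DIAG_02);

	return (diag & DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5) >>
		DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC;
}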
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
 
 #define DLB2_DP_DIR_CSR_CTRL 0x54000010
 #define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-union dlb2_dp_dir_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 rsvz0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
 	(0x96000000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_0_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
 	(0x96010000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_1_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
-#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
-union dlb2_ro_pipe_grp_sn_mode {
-	struct {
-		u32 sn_mode_0 : 3;
-		u32 rszv0 : 5;
-		u32 sn_mode_1 : 3;
-		u32 rszv1 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_ro_pipe_cfg_ctrl_general_0 {
-	struct {
-		u32 unit_single_step_mode : 1;
-		u32 rr_en : 1;
-		u32 rszv0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2PRIOV(x) \
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
 	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
 #define DLB2_LSP_CQ2PRIOV_RST 0x0
-union dlb2_lsp_cq2priov {
-	struct {
-		u32 prio : 24;
-		u32 v : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID0(x) \
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
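A minimal usage sketch (illustrative only, not part of the diff): each register now carries per-version address defines plus a (ver)-keyed selector macro, and the old union bitfields become flat mask/_LOC pairs. The helper below is hypothetical and assumes <stdint.h> plus the DLB2_HW_V2/DLB2_HW_V2_5 enumerators and the defines above are in scope.

#include <stdint.h>

/* Hypothetical helper, illustration only: pack the CQ2PRIOV fields. */
static inline uint32_t dlb2_cq2priov_val(uint32_t prio, uint32_t valid)
{
	uint32_t r = 0;

	r |= (prio << DLB2_LSP_CQ2PRIOV_PRIO_LOC) & DLB2_LSP_CQ2PRIOV_PRIO;
	r |= (valid << DLB2_LSP_CQ2PRIOV_V_LOC) & DLB2_LSP_CQ2PRIOV_V;

	return r;
}

/*
 * The address for port x resolves per hardware version at run time:
 *   DLB2_LSP_CQ2PRIOV(DLB2_HW_V2, x)   -> 0xa0000000 + (x) * 0x1000
 *   DLB2_LSP_CQ2PRIOV(DLB2_HW_V2_5, x) -> 0x90000000 + (x) * 0x1000
 */
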
+#define DLB2_V2LSP_CQ2QID0(x) \
 	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
 #define DLB2_LSP_CQ2QID0_RST 0x0
-union dlb2_lsp_cq2qid0 {
-	struct {
-		u32 qid_p0 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p1 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p2 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p3 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID1(x) \
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
 	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
 #define DLB2_LSP_CQ2QID1_RST 0x0
-union dlb2_lsp_cq2qid1 {
-	struct {
-		u32 qid_p4 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p5 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p6 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p7 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_DSBL(x) \
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
 	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
 #define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-union dlb2_lsp_cq_dir_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
 	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
 #define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_dir_tkn_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
 	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
 #define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
-	struct {
-		u32 token_depth_select : 4;
-		u32 disable_wb_opt : 1;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
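Note that for this register the field layout itself differs between the two hardware versions: v2.0 has a separate ignore_depth bit, while v2.5 folds that bit into the reserved field, so distinct _V2/_V2_5 defines are kept and callers pick at run time. A hedged sketch (hypothetical helper, same assumptions as the sketch above):

/* Illustration only: the ignore_depth bit exists on v2.0 hardware only. */
static inline uint32_t dlb2_dsi_ignore_depth(int ver, uint32_t regval)
{
	if (ver == DLB2_HW_V2)
		return (regval & DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2) >>
			DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC;

	return 0;
}
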
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
 	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
 	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_DSBL(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
 	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
 #define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-union dlb2_lsp_cq_ldb_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
 	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
 #define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
 	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
 #define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_cq_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
 	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
 #define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_cnt {
-	struct {
-		u32 token_count : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
 	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
 #define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
 	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
 	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
 	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
 #define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_dir_max_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
 	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
 	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
 	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_dir_enqueue_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
 	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_dir_depth_thrsh {
-	struct {
-		u32 thresh : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
 	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-union dlb2_lsp_qid_aqed_active_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
 	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-union dlb2_lsp_qid_aqed_active_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
 	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
 	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
-	(0xa0c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_atq_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
 	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
 	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
 #define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
 	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
 #define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_qid_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX_00(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
 	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
 #define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX_NUM 16
-union dlb2_lsp_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX2_00(x) \
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
 	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
 #define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX2_NUM 16
-union dlb2_lsp_qid2cqidix2_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
-	(0xa1e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_replay_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
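The QID-to-CQ index-map registers take a second index: each queue owns DLB2_LSP_QID2CQIDIX_NUM slices spaced 0x80000 apart, and each slice packs four 8-bit CQ fields. A minimal sketch (hypothetical helpers, same assumptions as above) of computing a slice address and unpacking one field:

/* Illustration only: slice address for (qid, slice), slice < DLB2_LSP_QID2CQIDIX_NUM. */
static inline uint32_t dlb2_qid2cqidix_addr(int ver, unsigned int qid,
					    unsigned int slice)
{
	return DLB2_LSP_QID2CQIDIX(ver, qid, slice);
}

/* Illustration only: extract the third CQ field from a previously read value. */
static inline uint32_t dlb2_qid2cqidix_cq_p2(uint32_t regval)
{
	return (regval & DLB2_LSP_QID2CQIDIX_00_CQ_P2) >>
		DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC;
}
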
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
 	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
 #define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_naldb_max_depth {
-	struct {
-		u32 depth : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
 	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
 	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
 	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_atm_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
 	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_naldb_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_ACTIVE(x) \
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
 	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
 #define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-union dlb2_lsp_qid_atm_active {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
 #define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-union dlb2_lsp_ldb_sched_ctrl {
-	struct {
-		u32 cq : 8;
-		u32 qidix : 3;
-		u32 value : 1;
-		u32 nalb_haswork_v : 1;
-		u32 rlist_haswork_v : 1;
-		u32 slist_haswork_v : 1;
-		u32 inflight_ok_v : 1;
-		u32 aqed_nfull_v : 1;
-		u32 rsvz0 : 15;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
 #define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-union dlb2_lsp_dir_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
 #define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-union dlb2_lsp_dir_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
 #define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
 #define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
 #define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-union dlb2_lsp_cfg_shdw_ctrl {
-	struct {
-		u32 transfer : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
 	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
 #define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-union dlb2_lsp_cfg_shdw_range_cos {
-	struct {
-		u32 bw_range : 9;
-		u32 rsvz0 : 22;
-		u32 no_extra_credit : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
 #define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_lsp_cfg_ctrl_general_0 {
-	struct {
-		u32 disab_atq_empty_arb : 1;
-		u32 inc_tok_unit_idle : 1;
-		u32 disab_rlist_pri : 1;
-		u32 inc_cmp_unit_idle : 1;
-		u32 rsvz0 : 2;
-		u32 dir_single_op : 1;
-		u32 dir_half_bw : 1;
-		u32 dir_single_out : 1;
-		u32 dir_disab_multi : 1;
-		u32 atq_single_op : 1;
-		u32 atq_half_bw : 1;
-		u32 atq_single_out : 1;
-		u32 atq_disab_multi : 1;
-		u32 dirrpl_single_op : 1;
-		u32 dirrpl_half_bw : 1;
-		u32 dirrpl_single_out : 1;
-		u32 lbrpl_single_op : 1;
-		u32 lbrpl_half_bw : 1;
-		u32 lbrpl_single_out : 1;
-		u32 ldb_single_op : 1;
-		u32 ldb_half_bw : 1;
-		u32 ldb_disab_multi : 1;
-		u32 atm_single_sch : 1;
-		u32 atm_single_cmp : 1;
-		u32 ldb_ce_tog_arb : 1;
-		u32 rsvz1 : 1;
-		u32 smon0_valid_sel : 2;
-		u32 smon0_value_sel : 1;
-		u32 smon0_compare_sel : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
-#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
-union dlb2_cfg_mstr_diag_reset_sts {
-	struct {
-		u32 chp_pf_reset_done : 1;
-		u32 rop_pf_reset_done : 1;
-		u32 lsp_pf_reset_done : 1;
-		u32 nalb_pf_reset_done : 1;
-		u32 ap_pf_reset_done : 1;
-		u32 dp_pf_reset_done : 1;
-		u32 qed_pf_reset_done : 1;
-		u32 dqed_pf_reset_done : 1;
-		u32 aqed_pf_reset_done : 1;
-		u32 sys_pf_reset_done : 1;
-		u32 pf_reset_active : 1;
-		u32 flrsm_state : 7;
-		u32 rsvd0 : 13;
-		u32 dlb_proc_reset_done : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
-	struct {
-		u32 chp_pipeidle : 1;
-		u32 rop_pipeidle : 1;
-		u32 lsp_pipeidle : 1;
-		u32 nalb_pipeidle : 1;
-		u32 ap_pipeidle : 1;
-		u32 dp_pipeidle : 1;
-		u32 qed_pipeidle : 1;
-		u32 dqed_pipeidle : 1;
-		u32 aqed_pipeidle : 1;
-		u32 sys_pipeidle : 1;
-		u32 chp_unit_idle : 1;
-		u32 rop_unit_idle : 1;
-		u32 lsp_unit_idle : 1;
-		u32 nalb_unit_idle : 1;
-		u32 ap_unit_idle : 1;
-		u32 dp_unit_idle : 1;
-		u32 qed_unit_idle : 1;
-		u32 dqed_unit_idle : 1;
-		u32 aqed_unit_idle : 1;
-		u32 sys_unit_idle : 1;
-		u32 rsvd1 : 4;
-		u32 mstr_cfg_ring_idle : 1;
-		u32 mstr_cfg_mstr_idle : 1;
-		u32 mstr_flr_clkreq_b : 1;
-		u32 mstr_proc_idle : 1;
-		u32 mstr_proc_idle_masked : 1;
-		u32 rsvd0 : 2;
-		u32 dlb_func_idle : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
-#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
-union dlb2_cfg_mstr_cfg_pm_status {
-	struct {
-		u32 prochot : 1;
-		u32 pgcb_dlb_idle : 1;
-		u32 pgcb_dlb_pg_rdy_ack_b : 1;
-		u32 pmsm_pgcb_req_b : 1;
-		u32 pgbc_pmc_pg_req_b : 1;
-		u32 pmc_pgcb_pg_ack_b : 1;
-		u32 pmc_pgcb_fet_en_b : 1;
-		u32 pgcb_fet_en_b : 1;
-		u32 rsvz0 : 1;
-		u32 rsvz1 : 1;
-		u32 fuse_force_on : 1;
-		u32 fuse_proc_disable : 1;
-		u32 rsvz2 : 1;
-		u32 rsvz3 : 1;
-		u32 pm_fsm_d0tod3_ok : 1;
-		u32 pm_fsm_d3tod0_ok : 1;
-		u32 dlb_in_d3 : 1;
-		u32 rsvz4 : 7;
-		u32 pmsm : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
-union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
-	struct {
-		u32 disable : 1;
-		u32 rsvz0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
 	(0x1000 + (x) * 0x4)
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_vf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
-union dlb2_func_vf_vf2pf_mailbox_isr {
-	struct {
-		u32 isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
 	(0x2000 + (x) * 0x4)
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox_isr {
-	struct {
-		u32 pf_isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
-union dlb2_func_vf_vf_msi_isr_pend {
-	struct {
-		u32 isr_pend : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
-union dlb2_func_vf_vf_reset_in_progress {
-	struct {
-		u32 reset_in_progress : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
-#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
-union dlb2_func_vf_vf_msi_isr {
-	struct {
-		u32 vf_msi_isr : 32;
-	} field;
-	u32 val;
-};
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
 
 #endif /* __DLB2_REGS_H */
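
Illustrative note (not part of the patch): the combined header above replaces the
old union/bitfield register definitions with per-field MASK and _LOC pairs, and
exposes version-dependent offsets through the *(ver) macros. A minimal sketch of
how a caller might consume these defines follows; it assumes dlb2_regs.h and the
DLB2_HW_V2 enumerator are in scope, and rd_reg() is a hypothetical stand-in for
the driver's MMIO read helper, not an actual PMD function.

	#include <stdint.h>

	extern uint32_t rd_reg(uint32_t offset);	/* hypothetical MMIO read */

	/* Extract a field using a MASK define and its matching _LOC define. */
	static inline uint32_t dlb2_field_get(uint32_t reg, uint32_t mask,
					      uint32_t loc)
	{
		return (reg & mask) >> loc;
	}

	/* Check DLB function idle, reading the offset for the given HW version. */
	static int dlb2_func_is_idle(int ver)
	{
		uint32_t idle = rd_reg(DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver));

		return dlb2_field_get(idle,
				DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE,
				DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC);
	}
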
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
deleted file mode 100644
index 26c3e7f4a..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_regs_new.h
+++ /dev/null
@@ -1,4304 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_REGS_NEW_H
-#define __DLB2_REGS_NEW_H
-
-#include "dlb2_osdep_types.h"
-
-#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
-	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
-	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
-	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
-
-#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
-	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
-
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
-
-#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
-	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
-	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
-	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
-
-#define DLB2_MSIX_VECTOR_CTRL(x) \
-	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
-
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
-	(0x20 + (x) * 0x4)
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
-
-#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
-#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
-#define DLB2_SYS_TOTAL_VAS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_TOTAL_VAS : \
-	 DLB2_V2_5SYS_TOTAL_VAS)
-#define DLB2_SYS_TOTAL_VAS_RST 0x20
-
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
-
-#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
-#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
-
-#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
-#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
-
-#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
-#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
-
-#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
-#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
-#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
-#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
-#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
-#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
-#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
-#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
-#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
-
-#define DLB2_SYS_VF_LDB_VPP_V(x) \
-	(0x10000f00 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VPP2PP(x) \
-	(0x10000f04 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_DIR_VPP_V(x) \
-	(0x10000f08 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VPP2PP(x) \
-	(0x10000f0c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_LDB_VQID_V(x) \
-	(0x10000f10 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VQID2QID(x) \
-	(0x10000f14 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_QID2VQID(x) \
-	(0x10000f18 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID2VQID_RST 0x0
-
-#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
-#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
-#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
-
-#define DLB2_SYS_VF_DIR_VQID_V(x) \
-	(0x10000f1c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VQID2QID(x) \
-	(0x10000f20 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_VASQID_V(x) \
-	(0x10000f24 + (x) * 0x1000)
-#define DLB2_SYS_LDB_VASQID_V_RST 0x0
-
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_VASQID_V(x) \
-	(0x10000f28 + (x) * 0x1000)
-#define DLB2_SYS_DIR_VASQID_V_RST 0x0
-
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_ALARM_VF_SYND2(x) \
-	(0x10000f48 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
-
-#define DLB2_SYS_ALARM_VF_SYND1(x) \
-	(0x10000f44 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_VF_SYND0(x) \
-	(0x10000f40 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
-#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
-#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
-#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
-#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
-#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
-#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
-#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
-
-#define DLB2_SYS_LDB_QID_CFG_V(x) \
-	(0x10000f58 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_QID_ITS(x) \
-	(0x10000f54 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_ITS_RST 0x0
-
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_QID_V(x) \
-	(0x10000f50 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
-#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
-#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_ITS(x) \
-	(0x10000f64 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_ITS_RST 0x0
-
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_V(x) \
-	(0x10000f60 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_V_RST 0x0
-
-#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
-#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
-#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
-	(0x10000fa8 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_LDB_CQ_PASID(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
-#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_LDB_CQ_AT(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AT_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_CQ_ISR(x) \
-	(0x10000f98 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
-/* CQ Interrupt Modes */
-#define DLB2_CQ_ISR_MODE_DIS  0
-#define DLB2_CQ_ISR_MODE_MSI  1
-#define DLB2_CQ_ISR_MODE_MSIX 2
-#define DLB2_CQ_ISR_MODE_ADI  3
-
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
-	(0x10000f94 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_PP_V(x) \
-	(0x10000f90 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP_V_RST 0x0
-
-#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
-#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
-#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_PP2VDEV(x) \
-	(0x10000f8c + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_LDB_PP2VAS(x) \
-	(0x10000f88 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VAS_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
-	(0x10000f84 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
-	(0x10000f80 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_DIR_CQ_FMT(x) \
-	(0x10000fec + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
-	(0x10000fe8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_DIR_CQ_PASID(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
-#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_DIR_CQ_AT(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_DIR_CQ_ISR(x) \
-	(0x10000fd8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
-	(0x10000fd4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_DIR_PP_V(x) \
-	(0x10000fd0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP_V_RST 0x0
-
-#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
-#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
-#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_PP2VDEV(x) \
-	(0x10000fcc + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_DIR_PP2VAS(x) \
-	(0x10000fc8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VAS_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
-	(0x10000fc4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
-	(0x10000fc0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
-#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
-#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
-#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_PM_SMON_TMR 0x10003018
-#define DLB2_SYS_PM_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
-#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
-#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_PM_SMON_CFG1 0x10003004
-#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_PM_SMON_CFG0 0x10003000
-#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
-#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_SMON_COMP_MASK1(x) \
-	(0x18002024 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_SMON_COMP_MASK0(x) \
-	(0x18002020 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_SMON_MAX_TMR(x) \
-	(0x1800201c + (x) * 0x40)
-#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_SMON_TMR(x) \
-	(0x18002018 + (x) * 0x40)
-#define DLB2_SYS_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
-	(0x18002014 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
-	(0x18002010 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE1(x) \
-	(0x1800200c + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE0(x) \
-	(0x18002008 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_SMON_CFG1(x) \
-	(0x18002004 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_SMON_CFG0(x) \
-	(0x18002000 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
-#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
-
-#define DLB2_SYS_MSIX_ACK 0x10000400
-#define DLB2_SYS_MSIX_ACK_RST 0x0
-
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
-#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
-#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
-#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_MODE 0x10000408
-#define DLB2_SYS_MSIX_MODE_RST 0x0
-/* MSI-X Modes */
-#define DLB2_MSIX_MODE_PACKED     0
-#define DLB2_MSIX_MODE_COMPRESSED 1
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
-#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
-
-#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
-#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
-#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
-#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
-#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
-#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
-#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
-#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
-#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
-#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
-#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
-#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
-
-#define DLB2_AQED_QID_FID_LIM(x) \
-	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
-
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
-#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
-#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
-
-#define DLB2_AQED_QID_HID_WIDTH(x) \
-	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
-
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE0 0x2c000054
-#define DLB2_AQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE1 0x2c000058
-#define DLB2_AQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_AQED_SMON_CFG0 0x2c00005c
-#define DLB2_AQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_AQED_SMON_CFG1 0x2c000060
-#define DLB2_AQED_SMON_CFG1_RST 0x0
-
-#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
-#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_AQED_SMON_TMR 0x2c000068
-#define DLB2_AQED_SMON_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_ATM_QID2CQIDIX_00(x) \
-	(0x30080000 + (x) * 0x1000)
-#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
-#define DLB2_ATM_QID2CQIDIX(x, y) \
-	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_ATM_QID2CQIDIX_NUM 16
-
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE0 0x3c000058
-#define DLB2_ATM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
-#define DLB2_ATM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_ATM_SMON_CFG0 0x3c000060
-#define DLB2_ATM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_ATM_SMON_CFG1 0x3c000064
-#define DLB2_ATM_SMON_CFG1_RST 0x0
-
-#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
-#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_ATM_SMON_TMR 0x3c00006c
-#define DLB2_ATM_SMON_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_V2CHP_ORD_QID_SN(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN(x))
-#define DLB2_CHP_ORD_QID_SN_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
-#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
-#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
-
-#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
-#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
-
-#define DLB2_V2CHP_SN_CHK_ENBL(x) \
-	(0x40200000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
-	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
-#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-
-#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
-#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
-	(0x40280000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
-#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
-	(0x40400000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
-#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40480000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
-#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
-#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ2VAS(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
-#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_V2CHP_HIST_LIST_BASE(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
-#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_LIM(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
-#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
-#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
-
-#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
-
-#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
-	(0x40a80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
-#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
-
-#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40980000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
-	(0x40a00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
-#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
-	(0x40c00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
-	(0x40d80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
-#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
-	(0x40e00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
-#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ2VAS(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
-	(0x40e80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
-#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
-#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
-#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
-#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
-#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
-#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
-#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
-#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
-#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
-#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
-#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
-#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
-#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
-#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
-#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_CHP_SMON_COMPARE0 0x4c000000
-#define DLB2_CHP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_CHP_SMON_COMPARE1 0x4c000004
-#define DLB2_CHP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_CHP_SMON_CFG0 0x4c000008
-#define DLB2_CHP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_CHP_SMON_CFG1 0x4c00000c
-#define DLB2_CHP_SMON_CFG1_RST 0x0
-
-#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
-#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_CHP_SMON_TMR 0x4c00001c
-#define DLB2_CHP_SMON_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
-#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_DP_DIR_CSR_CTRL 0x54000010
-#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
-#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
-#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DP_SMON_COMPARE0 0x5c000060
-#define DLB2_DP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DP_SMON_COMPARE1 0x5c000064
-#define DLB2_DP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DP_SMON_CFG0 0x5c000068
-#define DLB2_DP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_DP_SMON_CFG1 0x5c00006c
-#define DLB2_DP_SMON_CFG1_RST 0x0
-
-#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DP_SMON_MAX_TMR 0x5c000070
-#define DLB2_DP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DP_SMON_TMR 0x5c000074
-#define DLB2_DP_SMON_TMR_RST 0x0
-
-#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
-#define DLB2_DQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE1 0x6c000030
-#define DLB2_DQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DQED_SMON_CFG0 0x6c000034
-#define DLB2_DQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_DQED_SMON_CFG1 0x6c000038
-#define DLB2_DQED_SMON_CFG1_RST 0x0
-
-#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
-#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DQED_SMON_TMR 0x6c000040
-#define DLB2_DQED_SMON_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
-#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
-#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_QED_SMON_COMPARE0 0x7c00002c
-#define DLB2_QED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_QED_SMON_COMPARE1 0x7c000030
-#define DLB2_QED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_QED_SMON_CFG0 0x7c000034
-#define DLB2_QED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_QED_SMON_CFG1 0x7c000038
-#define DLB2_QED_SMON_CFG1_RST 0x0
-
-#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
-#define DLB2_QED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_QED_SMON_TMR 0x7c000040
-#define DLB2_QED_SMON_TMR_RST 0x0
-
-#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_QED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
-#define DLB2_NALB_SMON_COMPARE0_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE1 0x8c000070
-#define DLB2_NALB_SMON_COMPARE1_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_NALB_SMON_CFG0 0x8c000074
-#define DLB2_NALB_SMON_CFG0_RST 0x40000000
-
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_NALB_SMON_CFG1 0x8c000078
-#define DLB2_NALB_SMON_CFG1_RST 0x0
-
-#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
-#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
-#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
-#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_NALB_SMON_TMR 0x8c000080
-#define DLB2_NALB_SMON_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
-	(0x96000000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
-	(0x86000000 + (x) * 0x4)
-#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
-#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
-	(0x96010000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
-	(0x86010000 + (x) * 0x4)
-#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
-#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_SN_MODE 0x94000000
-#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
-#define DLB2_RO_GRP_SN_MODE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_SN_MODE : \
-	 DLB2_V2_5RO_GRP_SN_MODE)
-#define DLB2_RO_GRP_SN_MODE_RST 0x0
-
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
-#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
-#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
-#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
-#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
-
-#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
-#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
-#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
-#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_RO_SMON_COMPARE0 0x9c000038
-#define DLB2_RO_SMON_COMPARE0_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_RO_SMON_COMPARE1 0x9c00003c
-#define DLB2_RO_SMON_COMPARE1_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_RO_SMON_CFG0 0x9c000040
-#define DLB2_RO_SMON_CFG0_RST 0x40000000
-
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_RO_SMON_CFG1 0x9c000044
-#define DLB2_RO_SMON_CFG1_RST 0x0
-
-#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
-#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
-#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_RO_SMON_MAX_TMR 0x9c000048
-#define DLB2_RO_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_RO_SMON_TMR 0x9c00004c
-#define DLB2_RO_SMON_TMR_RST 0x0
-
-#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_RO_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2LSP_CQ2PRIOV(x) \
-	(0xa0000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2PRIOV(x) \
-	(0x90000000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2PRIOV(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2PRIOV(x) : \
-	 DLB2_V2_5LSP_CQ2PRIOV(x))
-#define DLB2_LSP_CQ2PRIOV_RST 0x0
-
-#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
-#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
-#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
-#define DLB2_LSP_CQ2PRIOV_V_LOC	24
-
-#define DLB2_V2LSP_CQ2QID0(x) \
-	(0xa0080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID0(x) \
-	(0x90080000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID0(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID0(x) : \
-	 DLB2_V2_5LSP_CQ2QID0(x))
-#define DLB2_LSP_CQ2QID0_RST 0x0
-
-#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
-#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
-#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
-#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
-#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
-#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
-#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
-#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
-#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ2QID1(x) \
-	(0xa0100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID1(x) \
-	(0x90100000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID1(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID1(x) : \
-	 DLB2_V2_5LSP_CQ2QID1(x))
-#define DLB2_LSP_CQ2QID1_RST 0x0
-
-#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
-#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
-#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
-#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
-#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
-#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
-#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
-#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
-#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
-	(0xa0180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
-	(0x90180000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
-#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
-	(0xa0200000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
-	(0x90200000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0xa0280000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0x90280000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0xa0300000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0x90300000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0xa0380000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0x90380000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
-	(0xa0400000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
-	(0x90400000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
-#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
-	(0xa0480000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
-	(0x90480000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
-	(0xa0500000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
-	(0x90500000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
-	(0xa0580000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
-	(0x90600000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
-
-#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0xa0600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0x90680000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0xa0680000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0x90700000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0xa0700000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0x90780000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
-	(0xa0780000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
-	(0x90800000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0xa0800000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0x90880000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0xa0880000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0x90900000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0xa0900000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0x90980000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0xa0980000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0x90a00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0xa0a00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0x90b80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0xa0a80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0x90c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0xa0b00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0x90c80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0xa0b80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0x90d00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0xa0c80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0x90e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
-	(0xa0d00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
-	(0x90e80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
-#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
-	(0xa0d80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
-	(0x90f00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
-#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID2CQIDIX_00(x) \
-	(0xa0e00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
-	(0x90f80000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
-#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
-	(0xa1600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
-	(0x91780000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
-#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX2_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0xa1f00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0x92080000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0xa1f80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0x92100000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0xa2000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0x92180000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0xa2080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0x92200000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0xa2100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0x92280000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
-	(0xa2180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
-	(0x92300000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
-	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
-#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
-#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
-#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCHED_CTRL : \
-	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
-#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
-
-#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
-#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
-#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_L : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
-#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
-#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
-#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_H : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
-#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
-#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
-#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_L : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
-#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
-#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
-#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_H : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
-#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
-#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
-#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_CTRL : \
-	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
-#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
-	(0xa4000074 + (x) * 4)
-#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
-	(0x94000074 + (x) * 4)
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
-	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
-
-#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
-#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
-
-#define DLB2_LSP_SMON_COMPARE0 0xac000048
-#define DLB2_LSP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_LSP_SMON_COMPARE1 0xac00004c
-#define DLB2_LSP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_LSP_SMON_CFG0 0xac000050
-#define DLB2_LSP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_LSP_SMON_CFG1 0xac000054
-#define DLB2_LSP_SMON_CFG1_RST 0x0
-
-#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_LSP_SMON_MAX_TMR 0xac000060
-#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_LSP_SMON_TMR 0xac000064
-#define DLB2_LSP_SMON_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
-#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
-#define DLB2_CM_DIAG_RESET_STS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 V2CM_DIAG_RESET_STS : \
-	 V2_5CM_DIAG_RESET_STS)
-#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
-
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
-#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
-#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
-
-#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
-	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
-
-#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
-#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
-#define DLB2_CM_CFG_PM_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_STATUS : \
-	 DLB2_V2_5CM_CFG_PM_STATUS)
-#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
-
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
-#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
-#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
-
-#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
-	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
-
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
-
-#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_VF_VF2PF_MAILBOX(x) \
-	(0x1000 + (x) * 0x4)
-#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_VF_PF2VF_MAILBOX(x) \
-	(0x2000 + (x) * 0x4)
-#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
-
-#define DLB2_VF_VF_MSI_ISR 0x4000
-#define DLB2_VF_VF_MSI_ISR_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
-
-#define DLB2_SYS_TOTAL_CREDITS 0x10000100
-#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
-
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
-	(0x11c00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
-	(0x11d00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_CHP_CFG_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
-	(0x90b00000 + (x) * 0x1000)
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
-
-#endif /* __DLB2_REGS_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 54b0207db..3661b940c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -8,7 +8,7 @@
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 1f6ccf8e4..b6ec85b47 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_regs_new.h"
+#include "base/dlb2_regs.h"
 #include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 23/26] event/dlb2: update xstats for v2.5
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (21 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 22/26] event/dlb2: use new combined register map Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 24/26] doc/dlb2: update documentation " Timothy McDaniel
                       ` (2 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Add DLB v2.5-specific information to xstats, such as metrics for the new
combined credit scheme.
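
For context (not part of this patch), a minimal sketch of how an application
could read the new counters through the generic eventdev xstats API. The
256-entry buffers and the assumption that dev_id refers to an already
configured DLB event device are placeholders, and the exact xstat names
exposed by the PMD should be taken from the names returned at runtime:

#include <inttypes.h>
#include <stdio.h>
#include <rte_common.h>
#include <rte_eventdev.h>

/* Dump all device-level xstats; on DLB v2.5 these include the new
 * "pool_size" and "tx_nospc_hw_credits" entries added by this patch.
 */
static void
dump_dev_xstats(uint8_t dev_id)
{
	struct rte_event_dev_xstats_name names[256];
	unsigned int ids[256];
	uint64_t values[256];
	int n, i;

	n = rte_event_dev_xstats_names_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
					   0, names, ids, RTE_DIM(names));
	if (n <= 0 || n > (int)RTE_DIM(names))
		return;

	n = rte_event_dev_xstats_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
				     0, ids, values, n);
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, values[i]);
}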

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_xstats.c | 41 ++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index b62e62060..d4c8d9903 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -9,6 +9,7 @@
 
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
+#include "pf/base/dlb2_regs.h"
 
 enum dlb2_xstats_type {
 	/* common to device and port */
@@ -21,6 +22,7 @@ enum dlb2_xstats_type {
 	zero_polls,			/**< Call dequeue burst and return 0 */
 	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
 	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
+	tx_nospc_hw_credits,		/**< Insufficient h/w credits */
 	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
 	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
 	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
@@ -29,6 +31,7 @@ enum dlb2_xstats_type {
 	inflight_events,
 	ldb_pool_size,
 	dir_pool_size,
+	pool_size,
 	/* port specific */
 	tx_new,				/**< Send an OP_NEW event */
 	tx_fwd,				/**< Send an OP_FORWARD event */
@@ -129,6 +132,9 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 		case tx_nospc_dir_hw_credits:
 			val += port->stats.traffic.tx_nospc_dir_hw_credits;
 			break;
+		case tx_nospc_hw_credits:
+			val += port->stats.traffic.tx_nospc_hw_credits;
+			break;
 		case tx_nospc_inflight_max:
 			val += port->stats.traffic.tx_nospc_inflight_max;
 			break;
@@ -159,6 +165,7 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 	case zero_polls:
 	case tx_nospc_ldb_hw_credits:
 	case tx_nospc_dir_hw_credits:
+	case tx_nospc_hw_credits:
 	case tx_nospc_inflight_max:
 	case tx_nospc_new_event_limit:
 	case tx_nospc_inflight_credits:
@@ -171,6 +178,8 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 		return dlb2->num_ldb_credits;
 	case dir_pool_size:
 		return dlb2->num_dir_credits;
+	case pool_size:
+		return dlb2->num_credits;
 	default: return -1;
 	}
 }
@@ -203,6 +212,9 @@ get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
 	case tx_nospc_dir_hw_credits:
 		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
 
+	case tx_nospc_hw_credits:
+		return ev_port->stats.traffic.tx_nospc_hw_credits;
+
 	case tx_nospc_inflight_max:
 		return ev_port->stats.traffic.tx_nospc_inflight_max;
 
@@ -357,6 +369,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -364,6 +377,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"inflight_events",
 		"ldb_pool_size",
 		"dir_pool_size",
+		"pool_size",
 	};
 	static const enum dlb2_xstats_type dev_types[] = {
 		rx_ok,
@@ -375,6 +389,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -382,6 +397,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		inflight_events,
 		ldb_pool_size,
 		dir_pool_size,
+		pool_size,
 	};
 	/* Note: generated device stats are not allowed to be reset. */
 	static const uint8_t dev_reset_allowed[] = {
@@ -394,6 +410,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* zero_polls */
 		0, /* tx_nospc_ldb_hw_credits */
 		0, /* tx_nospc_dir_hw_credits */
+		0, /* tx_nospc_hw_credits */
 		0, /* tx_nospc_inflight_max */
 		0, /* tx_nospc_new_event_limit */
 		0, /* tx_nospc_inflight_credits */
@@ -401,6 +418,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* inflight_events */
 		0, /* ldb_pool_size */
 		0, /* dir_pool_size */
+		0, /* pool_size */
 	};
 	static const char * const port_stats[] = {
 		"is_configured",
@@ -415,6 +433,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -448,6 +467,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -481,6 +501,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		1, /* zero_polls */
 		1, /* tx_nospc_ldb_hw_credits */
 		1, /* tx_nospc_dir_hw_credits */
+		1, /* tx_nospc_hw_credits */
 		1, /* tx_nospc_inflight_max */
 		1, /* tx_nospc_new_event_limit */
 		1, /* tx_nospc_inflight_credits */
@@ -935,8 +956,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
@@ -949,8 +970,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_QUEUES(dlb2->version); i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
@@ -1048,6 +1069,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 	fprintf(f, "\tnum_dir_credits = %u\n",
 		dlb2->hw_rsrc_query_results.num_dir_credits);
 
+	fprintf(f, "\tnum_credits = %u\n",
+		dlb2->hw_rsrc_query_results.num_credits);
+
 	/* Port level information */
 
 	for (i = 0; i < dlb2->num_ports; i++) {
@@ -1102,6 +1126,12 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\tdir_credits = %u\n",
 			p->qm_port.dir_credits);
 
+		fprintf(f, "\tcached_credits = %u\n",
+			p->qm_port.cached_credits);
+
+		fprintf(f, "\tcredits = %u\n",
+			p->qm_port.credits);
+
 		fprintf(f, "\tgenbit=%d, cq_idx=%d, cq_depth=%d\n",
 			p->qm_port.gen_bit,
 			p->qm_port.cq_idx,
@@ -1139,6 +1169,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_dir_hw_credits);
 
+		fprintf(f, "\t\ttx_nospc_hw_credits %" PRIu64 "\n",
+			p->stats.traffic.tx_nospc_hw_credits);
+
 		fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_inflight_max);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 24/26] doc/dlb2: update documentation for v2.5
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (22 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 23/26] event/dlb2: update xstats for v2.5 Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name Timothy McDaniel
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
  25 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the DLB documentation for v2.5. Notable differences include
the new combined credit scheme. Also clean up a couple of sections
and remove a duplicate section.
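
As an aside (not part of this patch), a hedged sketch of the configuration
step the updated documentation describes: nb_events_limit in struct
rte_event_dev_config sizes the single combined credit pool on DLB v2.5 (and
the split load-balanced/directed pools on v2.0). The queue and port counts
below are arbitrary placeholders:

#include <rte_eventdev.h>

static int
configure_dlb(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg = {0};

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return -1;

	cfg.nb_event_queues = 4;                   /* placeholder */
	cfg.nb_event_ports = 4;                    /* placeholder */
	cfg.nb_events_limit = info.max_num_events; /* sizes the credit pool(s) */
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

	return rte_event_dev_configure(dev_id, &cfg);
}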

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/eventdevs/dlb2.rst | 75 +++++++++++++----------------------
 1 file changed, 27 insertions(+), 48 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 94d2c77ff..94e46ea7d 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -4,7 +4,8 @@
 Driver for the Intel® Dynamic Load Balancer (DLB2)
 ==================================================
 
-The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.
+The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
+hardware versions 2.0 and 2.5.
 
 Prerequisites
 -------------
@@ -35,7 +36,7 @@ eventdev API and DLB2 misalign.
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There are 32 scheduling domainis the DLB2.
+DLB2 supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
@@ -67,42 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
-group is configured to contain either 1 queue with 1024 reorder entries, 2
-queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
-
-When a load-balanced queue is created, the PMD will configure a new sequence
-number group on-demand if num_sequence_numbers does not match a pre-existing
-group with available reorder buffer entries. If all sequence number groups are
-in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
-sequence number configuration.)
-
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
-load-balanced queues can use the full 16-bit flow ID range.
-
-Load-Balanced Queues
-~~~~~~~~~~~~~~~~~~~~
-
-A load-balanced queue can support atomic and ordered scheduling, or atomic and
-unordered scheduling, but not atomic and unordered and ordered scheduling. A
-queue's scheduling types are controlled by the event queue configuration.
-
-If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
-``nb_atomic_order_sequences`` determines the supported scheduling types.
-With non-zero ``nb_atomic_order_sequences``, the queue is configured for atomic
-and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL`` scheduling is
-supported by scheduling those events as ordered events.  Note that when the
-event is dequeued, its sched_type will be ``RTE_SCHED_TYPE_ORDERED``. Else if
-``nb_atomic_order_sequences`` is zero, the queue is configured for atomic and
-unordered scheduling. In this case, ``RTE_SCHED_TYPE_ORDERED`` is unsupported.
-
-If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
-dictates the queue's scheduling type.
-
-The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
+queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -157,6 +123,11 @@ type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
 will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
 port.
 
+Finally, even though all 3 event types are supported on the same QID by
+converting unordered events to ordered, such use should be discouraged as much
+as possible, since mixing types on the same queue uses valuable reorder
+resources, and orders events which do not require ordering.
+
 Flow ID
 ~~~~~~~
 
@@ -169,13 +140,15 @@ Hardware Credits
 DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
-events are scheduled to ports. Refills come from credit pools, and each port is
-a member of a load-balanced credit pool and a directed credit pool. The
-load-balanced credits are used to enqueue to load-balanced queues, and directed
-credits are used for directed queues.
+events are scheduled to ports. Refills come from credit pools.
 
-A DLB2 eventdev contains one load-balanced and one directed credit pool. These
-pools' sizes are controlled by the nb_events_limit field in struct
+For DLB v2.5, there is a single credit pool used for both load-balanced and
+directed traffic.
+
+For DLB v2.0, each port is a member of both a load-balanced credit pool and a
+directed credit pool. The load-balanced credits are used to enqueue to
+load-balanced queues, and directed credits are used for directed queues.
+These pools' sizes are controlled by the nb_events_limit field in struct
 rte_event_dev_config. The load-balanced pool is sized to contain
 nb_events_limit credits, and the directed pool is sized to contain
 nb_events_limit/4 credits. The directed pool size can be overridden with the
@@ -276,10 +249,16 @@ The DLB2 supports event priority and per-port queue service priority, as
 described in the eventdev header file. The DLB2 does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 8 event and queue service priority levels. For both priority
-types, the PMD uses the upper three bits of the priority field to determine the
-DLB2 priority, discarding the 5 least significant bits. The 5 least significant
-event priority bits are not preserved when an event is enqueued.
+DLB2 supports 4 event and queue service priority levels. For both priority types,
+the PMD uses the upper three bits of the priority field to determine the DLB2
+priority, discarding the 5 least significant bits. However, the least significant
+of the 3 priority bits is effectively ignored when binning into 4 priorities. The
+discarded 5 least significant event priority bits are not preserved when an event
+is enqueued.
+
+Note that event priority only works within the same event type.
+When atomic and ordered or unordered events are enqueued to the same QID, priority
+across the types is always equal, and both types are served in a round-robin manner.
 
 Reconfiguration
 ~~~~~~~~~~~~~~~
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (23 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 24/26] doc/dlb2: update documentation " Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-14 19:31       ` Jerin Jacob
  2021-04-14 19:44       ` Jerin Jacob
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
  25 siblings, 2 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

Update the eventdev device name to be dlb_event instead of
dlb2_event. The new name will be used for all versions
of the DLB hardware. This change requires corresponding changes
to the directory name that contains the PMD, as well
as to the documentation files, build infrastructure, and
PMD-specific APIs.
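
For illustration only (not part of this patch): code that resolves the PMD by
name, as the updated selftest does, would now look up "dlb_event" rather than
"dlb2_event". A minimal sketch:

#include <rte_eventdev.h>

/* Returns the event device id, or -ENODEV if no DLB eventdev is attached. */
static int
get_dlb_dev_id(void)
{
	return rte_event_dev_get_dev_id("dlb_event");
}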

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 MAINTAINERS                                   |  6 +-
 app/test/test_eventdev.c                      |  6 +-
 config/rte_config.h                           | 11 ++-
 doc/api/doxy-api-index.md                     |  2 +-
 doc/api/doxy-api.conf.in                      |  2 +-
 doc/guides/eventdevs/{dlb2.rst => dlb.rst}    | 88 +++++++++----------
 doc/guides/eventdevs/index.rst                |  2 +-
 doc/guides/rel_notes/release_21_05.rst        |  5 ++
 drivers/event/{dlb2 => dlb}/dlb2.c            | 25 +++---
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |  0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |  0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |  0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |  0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  7 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |  8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |  0
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |  0
 drivers/event/{dlb2 => dlb}/meson.build       |  4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  0
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |  0
 .../event/{dlb2 => dlb}/pf/base/dlb2_regs.h   |  0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     |  0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |  0
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |  6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      | 12 +--
 drivers/event/{dlb2 => dlb}/version.map       |  2 +-
 drivers/event/meson.build                     |  2 +-
 33 files changed, 94 insertions(+), 94 deletions(-)
 rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (84%)
 rename drivers/event/{dlb2 => dlb}/dlb2.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (100%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_regs.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (100%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)

diff --git a/MAINTAINERS b/MAINTAINERS
index fa143160d..40610e169 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1196,10 +1196,10 @@ Cavium OCTEON TX timvf
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
 F: drivers/event/octeontx/timvf_*
 
-Intel DLB2
+Intel DLB
 M: Timothy McDaniel <timothy.mcdaniel@intel.com>
-F: drivers/event/dlb2/
-F: doc/guides/eventdevs/dlb2.rst
+F: drivers/event/dlb/
+F: doc/guides/eventdevs/dlb.rst
 
 Marvell OCTEON TX2
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..ba27bed02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1031,9 +1031,9 @@ test_eventdev_selftest_dpaa2(void)
 }
 
 static int
-test_eventdev_selftest_dlb2(void)
+test_eventdev_selftest_dlb(void)
 {
-	return test_eventdev_selftest_impl("dlb2_event", "");
+	return test_eventdev_selftest_impl("dlb_event", "");
 }
 
 REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
@@ -1043,4 +1043,4 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
 REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
 		test_eventdev_selftest_octeontx2);
 REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
-REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_dlb, test_eventdev_selftest_dlb);
diff --git a/config/rte_config.h b/config/rte_config.h
index b13c0884b..1aa852cd7 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -139,11 +139,10 @@
 /* QEDE PMD defines */
 #define RTE_LIBRTE_QEDE_FW ""
 
-/* DLB2 defines */
-#define RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL 1000
-#define RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE  0
-#undef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA 32
-#define RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH 256
+/* DLB defines */
+#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
+#undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
+#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
+#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index ca2c2f6e0..1c2865525 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -55,7 +55,7 @@ The public API headers are grouped by topics:
   [dpaa2_cmdif]        (@ref rte_pmd_dpaa2_cmdif.h),
   [dpaa2_qdma]         (@ref rte_pmd_dpaa2_qdma.h),
   [crypto_scheduler]   (@ref rte_cryptodev_scheduler.h),
-  [dlb2]               (@ref rte_pmd_dlb2.h),
+  [dlb]                (@ref rte_pmd_dlb.h),
   [ifpga]              (@ref rte_pmd_ifpga.h)
 
 - **memory**:
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 3c7ee4608..9aebec419 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -7,7 +7,7 @@ USE_MDFILE_AS_MAINPAGE  = @TOPDIR@/doc/api/doxy-api-index.md
 INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/bus/vdev \
                           @TOPDIR@/drivers/crypto/scheduler \
-                          @TOPDIR@/drivers/event/dlb2 \
+                          @TOPDIR@/drivers/event/dlb \
                           @TOPDIR@/drivers/mempool/dpaa2 \
                           @TOPDIR@/drivers/net/ark \
                           @TOPDIR@/drivers/net/bnxt \
diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb.rst
similarity index 84%
rename from doc/guides/eventdevs/dlb2.rst
rename to doc/guides/eventdevs/dlb.rst
index 94e46ea7d..3410a6e49 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2020 Intel Corporation.
 
-Driver for the Intel® Dynamic Load Balancer (DLB2)
+Driver for the Intel® Dynamic Load Balancer (DLB)
 ==================================================
 
 The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
@@ -16,34 +16,34 @@ the basic DPDK environment.
 Configuration
 -------------
 
-The DLB2 PF PMD is a user-space PMD that uses VFIO to gain direct
+The DLB PF PMD is a user-space PMD that uses VFIO to gain direct
 device access. To use this operation mode, the PCIe PF device must be bound
 to a DPDK-compatible VFIO driver, such as vfio-pci.
 
 Eventdev API Notes
 ------------------
 
-The DLB2 provides the functions of a DPDK event device; specifically, it
+The DLB PMD provides the functions of a DPDK event device; specifically, it
 supports atomic, ordered, and parallel scheduling events from queues to ports.
-However, the DLB2 hardware is not a perfect match to the eventdev API. Some DLB2
+However, the DLB hardware is not a perfect match to the eventdev API. Some DLB
 features are abstracted by the PMD such as directed ports.
 
 In general the dlb PMD is designed for ease-of-use and does not require a
 detailed understanding of the hardware, but these details are important when
 writing high-performance code. This section describes the places where the
-eventdev API and DLB2 misalign.
+eventdev API and DLB misalign.
 
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DLB2 supports 32 scheduling domains.
+DLB supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
 -- and others, like credit pools (one directed and one load-balanced pool per
 scheduling domain), are not.
 
-The DLB2 is a closed system eventdev, and as such the ``nb_events_limit`` device
+The DLB is a closed system eventdev, and as such the ``nb_events_limit`` device
 setup argument and the per-port ``new_event_threshold`` argument apply as
 defined in the eventdev header file. The limit is applied to all enqueues,
 regardless of whether it will consume a directed or load-balanced credit.
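
As a minimal sketch of how these two limits surface through the standard eventdev API (the device ID, port index, and queue/port counts below are assumed for illustration, not taken from this patch):

    #include <rte_eventdev.h>

    static int
    configure_event_limits(uint8_t dev_id)
    {
        struct rte_event_dev_info info;
        struct rte_event_dev_config dev_conf = {0};
        struct rte_event_port_conf port_conf;
        int ret;

        ret = rte_event_dev_info_get(dev_id, &info);
        if (ret < 0)
            return ret;

        dev_conf.nb_event_queues = 1;
        dev_conf.nb_event_ports = 1;
        dev_conf.nb_event_queue_flows = info.max_event_queue_flows;
        dev_conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
        dev_conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
        dev_conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
        /* Closed system: cap on total in-flight events for the device. */
        dev_conf.nb_events_limit = info.max_num_events;

        ret = rte_event_dev_configure(dev_id, &dev_conf);
        if (ret < 0)
            return ret;

        /* Per-port back-pressure threshold for RTE_EVENT_OP_NEW enqueues. */
        ret = rte_event_port_default_conf_get(dev_id, 0, &port_conf);
        if (ret < 0)
            return ret;
        port_conf.new_event_threshold = info.max_num_events / 2; /* assumed split */

        return rte_event_port_setup(dev_id, 0, &port_conf);
    }
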
@@ -68,7 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
+queue's reorder buffer size.  DLB has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -76,22 +76,22 @@ When a load-balanced queue is created, the PMD will configure a new sequence
 number group on-demand if num_sequence_numbers does not match a pre-existing
 group with available reorder buffer entries. If all sequence number groups are
 in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
+that when the PMD is used with a virtual DLB device, it cannot change the
 sequence number configuration.)
 
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
+The queue's ``nb_atomic_flows`` parameter is ignored by the DLB PMD, because
+the DLB does not limit the number of flows a queue can track. In the DLB, all
 load-balanced queues can use the full 16-bit flow ID range.
 
 Load-balanced and Directed Ports
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DLB2 ports come in two flavors: load-balanced and directed. The eventdev API
+DLB ports come in two flavors: load-balanced and directed. The eventdev API
 does not have the same concept, but it has a similar one: ports and queues that
 are singly-linked (i.e. linked to a single queue or port, respectively).
 
 The ``rte_event_dev_info_get()`` function reports the number of available
-event ports and queues (among other things). For the DLB2 PMD, max_event_ports
+event ports and queues (among other things). For the DLB PMD, max_event_ports
 and max_event_queues report the number of available load-balanced ports and
 queues, and max_single_link_event_port_queue_pairs reports the number of
 available directed ports and queues.
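
For example, a short sketch of reading those three counts (the device ID is assumed):

    #include <stdio.h>
    #include <rte_eventdev.h>

    static void
    print_port_queue_counts(uint8_t dev_id)
    {
        struct rte_event_dev_info info;

        if (rte_event_dev_info_get(dev_id, &info) < 0)
            return;

        printf("LDB ports: %u, LDB queues: %u, DIR port/queue pairs: %u\n",
               info.max_event_ports, info.max_event_queues,
               info.max_single_link_event_port_queue_pairs);
    }
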
@@ -132,12 +132,12 @@ Flow ID
 ~~~~~~~
 
 The flow ID field is preserved in the event when it is scheduled in the
-DLB2.
+DLB.
 
 Hardware Credits
 ~~~~~~~~~~~~~~~~
 
-DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
+DLB uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
 events are scheduled to ports. Refills come from credit pools.
@@ -156,7 +156,7 @@ num_dir_credits vdev argument, like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,num_dir_credits=<value>
+       --vdev=dlb_event,num_dir_credits=<value>
 
 This can be used if the default allocation is too low or too high for the
 specific application needs. The PMD also supports a vdev arg that limits the
@@ -164,10 +164,10 @@ max_num_events reported by rte_event_dev_info_get():
 
     .. code-block:: console
 
-       --vdev=dlb1_event,max_num_events=<value>
+       --vdev=dlb_event,max_num_events=<value>
 
 By default, max_num_events is reported as the total available load-balanced
-credits. If multiple DLB2-based applications are being used, it may be desirable
+credits. If multiple DLB-based applications are being used, it may be desirable
 to control how many load-balanced credits each application uses, particularly
 when application(s) are written to configure nb_events_limit equal to the
 reported max_num_events.
@@ -193,16 +193,16 @@ order to reach the limit.
 
 If a port attempts to enqueue and has no credits available, the enqueue
 operation will fail and the application must retry the enqueue. Credits are
-replenished asynchronously by the DLB2 hardware.
+replenished asynchronously by the DLB hardware.
 
 Software Credits
 ~~~~~~~~~~~~~~~~
 
-The DLB2 is a "closed system" event dev, and the DLB2 PMD layers a software
+The DLB is a "closed system" event dev, and the DLB PMD layers a software
 credit scheme on top of the hardware credit scheme in order to comply with
 the per-port backpressure described in the eventdev API.
 
-The DLB2's hardware scheme is local to a queue/pipeline stage: a port spends a
+The DLB's hardware scheme is local to a queue/pipeline stage: a port spends a
 credit when it enqueues to a queue, and credits are later replenished after the
 events are dequeued and released.
 
@@ -222,8 +222,8 @@ credits are used to enqueue to a load-balanced queue, and directed credits are
 used to enqueue to a directed queue.
 
 The out-of-credit situations are typically transient, and an eventdev
-application using the DLB2 ought to retry its enqueues if they fail.
-If enqueue fails, DLB2 PMD sets rte_errno as follows:
+application using the DLB ought to retry its enqueues if they fail.
+If enqueue fails, DLB PMD sets rte_errno as follows:
 
 - -ENOSPC: Credit exhaustion (either hardware or software)
 - -EINVAL: Invalid argument, such as port ID, queue ID, or sched_type.
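
A minimal retry loop along those lines (device and port IDs are assumed; rte_errno is only meaningful when fewer events than requested were enqueued):

    #include <rte_errno.h>
    #include <rte_eventdev.h>
    #include <rte_pause.h>

    static void
    enqueue_with_retry(uint8_t dev_id, uint8_t port_id,
                       struct rte_event *ev, uint16_t n)
    {
        uint16_t sent = 0;

        while (sent < n) {
            sent += rte_event_enqueue_burst(dev_id, port_id,
                                            &ev[sent], n - sent);
            if (sent == n)
                break;
            if (rte_errno == -ENOSPC) {
                /* Hardware or software credits exhausted; credits are
                 * replenished asynchronously, so back off and retry.
                 */
                rte_pause();
                continue;
            }
            break; /* e.g. -EINVAL: invalid argument, do not retry */
        }
    }
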
@@ -245,12 +245,12 @@ the port's dequeue_depth).
 Priority
 ~~~~~~~~
 
-The DLB2 supports event priority and per-port queue service priority, as
-described in the eventdev header file. The DLB2 does not support 'global' event
+The DLB supports event priority and per-port queue service priority, as
+described in the eventdev header file. The DLB does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 4 event and queue service priority levels. For both priority types,
-the PMD uses the upper three bits of the priority field to determine the DLB2
+DLB supports 4 event and queue service priority levels. For both priority types,
+the PMD uses the upper three bits of the priority field to determine the DLB
 priority, discarding the 5 least significant bits. However, the least significant
 of the 3 priority bits is effectively ignored when binning into 4 priorities. The
 discarded 5 least significant event priority bits are not preserved when an event
@@ -265,7 +265,7 @@ Reconfiguration
 
 The Eventdev API allows one to reconfigure a device, its ports, and its queues
 by first stopping the device, calling the configuration function(s), then
-restarting the device. The DLB2 does not support configuring an individual queue
+restarting the device. The DLB does not support configuring an individual queue
 or port without first reconfiguring the entire device, however, so there are
 certain reconfiguration sequences that are valid in the eventdev API but not
 supported by the PMD.
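
A sketch of the full-device sequence described here (queue/port counts, IDs, and the use of default configurations are assumed for illustration):

    #include <rte_eventdev.h>

    static int
    reconfigure_device(uint8_t dev_id, const struct rte_event_dev_config *new_conf)
    {
        uint8_t q;
        int ret;

        rte_event_dev_stop(dev_id);

        ret = rte_event_dev_configure(dev_id, new_conf);
        if (ret < 0)
            return ret;

        /* Every queue and port must be set up again after reconfiguring. */
        for (q = 0; q < new_conf->nb_event_queues; q++) {
            ret = rte_event_queue_setup(dev_id, q, NULL); /* default conf */
            if (ret < 0)
                return ret;
        }
        ret = rte_event_port_setup(dev_id, 0, NULL); /* default conf */
        if (ret < 0)
            return ret;

        /* Relink port 0 to all queues (a NULL queue list means "all"). */
        ret = rte_event_port_link(dev_id, 0, NULL, NULL, 0);
        if (ret != new_conf->nb_event_queues)
            return -1;

        return rte_event_dev_start(dev_id);
    }
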
@@ -296,9 +296,9 @@ before its ports or queues can be.
 Deferred Scheduling
 ~~~~~~~~~~~~~~~~~~~
 
-The DLB2 PMD's default behavior for managing a CQ is to "pop" the CQ once per
+The DLB PMD's default behavior for managing a CQ is to "pop" the CQ once per
 dequeued event before returning from rte_event_dequeue_burst(). This frees the
-corresponding entries in the CQ, which enables the DLB2 to schedule more events
+corresponding entries in the CQ, which enables the DLB to schedule more events
 to it.
 
 To support applications seeking finer-grained scheduling control -- for example
@@ -312,12 +312,12 @@ To enable deferred scheduling, use the defer_sched vdev argument like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,defer_sched=on
+       --vdev=dlb_event,defer_sched=on
 
 Atomic Inflights Allocation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
+In the last stage prior to scheduling an atomic event to a CQ, DLB holds the
 inflight event in a temporary buffer that is divided among load-balanced
 queues. If a queue's atomic buffer storage fills up, this can result in
 head-of-line-blocking. For example:
@@ -340,12 +340,12 @@ increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,atm_inflights=64
+       --vdev=dlb_event,atm_inflights=64
 
 QID Depth Threshold
 ~~~~~~~~~~~~~~~~~~~
 
-DLB2 supports setting and tracking queue depth thresholds. Hardware uses
+DLB supports setting and tracking queue depth thresholds. Hardware uses
 the thresholds to track how full a queue is compared to its threshold.
 Four buckets are used
 
@@ -354,7 +354,7 @@ Four buckets are used
 - Greater than 75%, but less than or equal to 100% of depth threshold
 - Greater than 100% of depth thresholds
 
-Per queue threshold metrics are tracked in the DLB2 xstats, and are also
+Per queue threshold metrics are tracked in the DLB xstats, and are also
 returned in the impl_opaque field of each received event.
 
 The per qid threshold can be specified as part of the device args, and
@@ -363,19 +363,19 @@ shown below.
 
     .. code-block:: console
 
-       --vdev=dlb2_event,qid_depth_thresh=all:<threshold_value>
-       --vdev=dlb2_event,qid_depth_thresh=qidA-qidB:<threshold_value>
-       --vdev=dlb2_event,qid_depth_thresh=qid:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=all:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=qidA-qidB:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=qid:<threshold_value>
 
 Class of service
 ~~~~~~~~~~~~~~~~
 
-DLB2 supports provisioning the DLB2 bandwidth into 4 classes of service.
+DLB supports provisioning the DLB bandwidth into 4 classes of service.
 
-- Class 4 corresponds to 40% of the DLB2 hardware bandwidth
-- Class 3 corresponds to 30% of the DLB2 hardware bandwidth
-- Class 2 corresponds to 20% of the DLB2 hardware bandwidth
-- Class 1 corresponds to 10% of the DLB2 hardware bandwidth
+- Class 4 corresponds to 40% of the DLB hardware bandwidth
+- Class 3 corresponds to 30% of the DLB hardware bandwidth
+- Class 2 corresponds to 20% of the DLB hardware bandwidth
+- Class 1 corresponds to 10% of the DLB hardware bandwidth
 - Class 0 corresponds to don't care
 
 The classes are applied globally to the set of ports contained in this
@@ -387,4 +387,4 @@ Class of service can be specified in the devargs, as follows
 
     .. code-block:: console
 
-       --vdev=dlb2_event,cos=<0..4>
+       --vdev=dlb_event,cos=<0..4>
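
Relating to the QID Depth Threshold section above, a sketch of where an application could read the per-event threshold indication after dequeue (device and port IDs are assumed, and the exact encoding of the bucket in impl_opaque is device specific):

    #include <rte_common.h>
    #include <rte_eventdev.h>

    static void
    dequeue_and_inspect(uint8_t dev_id, uint8_t port_id)
    {
        struct rte_event ev[32];
        uint16_t i, n;

        n = rte_event_dequeue_burst(dev_id, port_id, ev, RTE_DIM(ev), 0);
        for (i = 0; i < n; i++) {
            uint8_t qid_depth = ev[i].impl_opaque; /* depth-threshold bucket */

            /* ... application-specific handling of ev[i] and qid_depth ... */
            RTE_SET_USED(qid_depth);
        }
    }
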
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..4b915bf3e 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,7 +11,7 @@ application through the eventdev API.
     :maxdepth: 2
     :numbered:
 
-    dlb2
+    dlb
     dpaa
     dpaa2
     dsw
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8a601e0a7..5b25f1479 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -94,6 +94,11 @@ New Features
 
   * Added support for preferred busy polling.
 
+* **Updated DLB driver.**
+
+  * Added support for v2.5 hardware.
+  * Renamed DLB2 to DLB; the single driver now supports both HW versions, v2.0 and v2.5.
+
 * **Updated testpmd.**
 
   * Added a command line option to configure forced speed for Ethernet port.
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb/dlb2.c
similarity index 99%
rename from drivers/event/dlb2/dlb2.c
rename to drivers/event/dlb/dlb2.c
index cc6495b76..e5def9357 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb/dlb2.c
@@ -667,15 +667,8 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	}
 
 	/* Does this platform support umonitor/umwait? */
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
-		if (RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 0 &&
-		    RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 1) {
-			DLB2_LOG_ERR("invalid value (%d) for RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE, must be 0 or 1.\n",
-				     RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE);
-			return -EINVAL;
-		}
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG))
 		dlb2->umwait_allowed = true;
-	}
 
 	rsrcs->num_dir_ports = config->nb_single_link_event_port_queues;
 	rsrcs->num_ldb_ports  = config->nb_event_ports - rsrcs->num_dir_ports;
@@ -930,8 +923,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1623,7 +1617,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1718,8 +1712,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -2747,7 +2742,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	DLB2_INC_STAT(ev_port->stats.tx_op_cnt[ev->op], 1);
 	DLB2_INC_STAT(ev_port->stats.traffic.tx_ok, 1);
 
-#ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
+#ifndef RTE_LIBRTE_PMD_DLB_QUELL_STATS
 	if (ev->op != RTE_EVENT_OP_RELEASE) {
 		DLB2_INC_STAT(ev_port->stats.queue[ev->queue_id].enq_ok, 1);
 		DLB2_INC_STAT(ev_port->stats.tx_sched_cnt[*sched_type], 1);
@@ -3070,7 +3065,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL;
+		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB_POLL_INTERVAL;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb/dlb2_iface.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.c
rename to drivers/event/dlb/dlb2_iface.c
diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb/dlb2_iface.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.h
rename to drivers/event/dlb/dlb2_iface.h
diff --git a/drivers/event/dlb2/dlb2_inline_fns.h b/drivers/event/dlb/dlb2_inline_fns.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_inline_fns.h
rename to drivers/event/dlb/dlb2_inline_fns.h
diff --git a/drivers/event/dlb2/dlb2_log.h b/drivers/event/dlb/dlb2_log.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_log.h
rename to drivers/event/dlb/dlb2_log.h
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb/dlb2_priv.h
similarity index 99%
rename from drivers/event/dlb2/dlb2_priv.h
rename to drivers/event/dlb/dlb2_priv.h
index f3a9fe0aa..f11e08fca 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb/dlb2_priv.h
@@ -12,7 +12,7 @@
 #include <rte_config.h>
 #include "dlb2_user.h"
 #include "dlb2_log.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
 #define DLB2_INC_STAT(_stat, _incr_val) ((_stat) += _incr_val)
@@ -20,7 +20,8 @@
 #define DLB2_INC_STAT(_stat, _incr_val)
 #endif
 
-#define EVDEV_DLB2_NAME_PMD dlb2_event
+/* common name for all dlb devs (dlb v2.0, dlb v2.5 ...) */
+#define EVDEV_DLB2_NAME_PMD dlb_event
 
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
@@ -320,7 +321,7 @@ struct dlb2_port {
 	bool gen_bit;
 	uint16_t dir_credits;
 	uint32_t dequeue_depth;
-	enum dlb2_token_pop_mode token_pop_mode;
+	enum dlb_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
 	union {
diff --git a/drivers/event/dlb2/dlb2_selftest.c b/drivers/event/dlb/dlb2_selftest.c
similarity index 99%
rename from drivers/event/dlb2/dlb2_selftest.c
rename to drivers/event/dlb/dlb2_selftest.c
index 5cf66c552..019cbecdc 100644
--- a/drivers/event/dlb2/dlb2_selftest.c
+++ b/drivers/event/dlb/dlb2_selftest.c
@@ -22,7 +22,7 @@
 #include <rte_pause.h>
 
 #include "dlb2_priv.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #define MAX_PORTS 32
 #define MAX_QIDS 32
@@ -1105,13 +1105,13 @@ test_deferred_sched(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 1, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 1, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
@@ -1257,7 +1257,7 @@ test_delayed_pop(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DELAYED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DELAYED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb/dlb2_user.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_user.h
rename to drivers/event/dlb/dlb2_user.h
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb/dlb2_xstats.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_xstats.c
rename to drivers/event/dlb/dlb2_xstats.c
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb/meson.build
similarity index 89%
rename from drivers/event/dlb2/meson.build
rename to drivers/event/dlb/meson.build
index f22638b8e..4a4aed931 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb/meson.build
@@ -14,10 +14,10 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
-		'rte_pmd_dlb2.c',
+		'rte_pmd_dlb.c',
 		'dlb2_selftest.c'
 )
 
-headers = files('rte_pmd_dlb2.h')
+headers = files('rte_pmd_dlb.h')
 
 deps += ['mbuf', 'mempool', 'ring', 'pci', 'bus_pci']
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb/pf/base/dlb2_hw_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_hw_types.h
rename to drivers/event/dlb/pf/base/dlb2_hw_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb/pf/base/dlb2_osdep.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep.h
rename to drivers/event/dlb/pf/base/dlb2_osdep.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h b/drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_list.h b/drivers/event/dlb/pf/base/dlb2_osdep_list.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_list.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_list.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_types.h b/drivers/event/dlb/pf/base/dlb2_osdep_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_types.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb/pf/base/dlb2_regs.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_regs.h
rename to drivers/event/dlb/pf/base/dlb2_regs.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb/pf/base/dlb2_resource.c
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.c
rename to drivers/event/dlb/pf/base/dlb2_resource.c
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb/pf/base/dlb2_resource.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.h
rename to drivers/event/dlb/pf/base/dlb2_resource.h
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb/pf/dlb2_main.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.c
rename to drivers/event/dlb/pf/dlb2_main.c
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb/pf/dlb2_main.h
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.h
rename to drivers/event/dlb/pf/dlb2_main.h
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb/pf/dlb2_pf.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_pf.c
rename to drivers/event/dlb/pf/dlb2_pf.c
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.c b/drivers/event/dlb/rte_pmd_dlb.c
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.c
rename to drivers/event/dlb/rte_pmd_dlb.c
index 43990e46a..82d203366 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.c
+++ b/drivers/event/dlb/rte_pmd_dlb.c
@@ -5,14 +5,14 @@
 #include <rte_eventdev.h>
 #include <eventdev_pmd.h>
 
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
 
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode)
+				enum dlb_token_pop_mode mode)
 {
 	struct dlb2_eventdev *dlb2;
 	struct rte_eventdev *dev;
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb/rte_pmd_dlb.h
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.h
rename to drivers/event/dlb/rte_pmd_dlb.h
index 74399db01..d42b1f52a 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.h
+++ b/drivers/event/dlb/rte_pmd_dlb.h
@@ -3,13 +3,13 @@
  */
 
 /*!
- *  @file      rte_pmd_dlb2.h
+ *  @file      rte_pmd_dlb.h
  *
  *  @brief     DLB PMD-specific functions
  */
 
-#ifndef _RTE_PMD_DLB2_H_
-#define _RTE_PMD_DLB2_H_
+#ifndef _RTE_PMD_DLB_H_
+#define _RTE_PMD_DLB_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -23,7 +23,7 @@ extern "C" {
  *
  * Selects the token pop mode for a DLB2 port.
  */
-enum dlb2_token_pop_mode {
+enum dlb_token_pop_mode {
 	/* Pop the CQ tokens immediately after dequeueing. */
 	AUTO_POP,
 	/* Pop CQ tokens after (dequeue_depth - 1) events are released.
@@ -61,9 +61,9 @@ enum dlb2_token_pop_mode {
 
 __rte_experimental
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode);
+				enum dlb_token_pop_mode mode);
 
 #ifdef __cplusplus
 }
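
A short usage sketch for the renamed API, mirroring the selftest hunks earlier in this patch (the device and port IDs are assumed):

    #include <stdio.h>
    #include "rte_pmd_dlb.h"

    static int
    use_deferred_pop(uint8_t dev_id, uint8_t port_id)
    {
        int ret;

        ret = rte_pmd_dlb_set_token_pop_mode(dev_id, port_id, DEFERRED_POP);
        if (ret < 0)
            printf("Failed to set deferred pop on port %u\n", port_id);

        return ret;
    }
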
diff --git a/drivers/event/dlb2/version.map b/drivers/event/dlb/version.map
similarity index 60%
rename from drivers/event/dlb2/version.map
rename to drivers/event/dlb/version.map
index b1e4dff0f..3338a22c1 100644
--- a/drivers/event/dlb2/version.map
+++ b/drivers/event/dlb/version.map
@@ -5,5 +5,5 @@ DPDK_21 {
 EXPERIMENTAL {
 	global:
 
-	rte_pmd_dlb2_set_token_pop_mode;
+	rte_pmd_dlb_set_token_pop_mode;
 };
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index b7f9bf7c6..e9b0433f2 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -5,7 +5,7 @@ if is_windows
 	subdir_done()
 endif
 
-drivers = ['dlb2', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
+drivers = ['dlb', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
 	   'dsw']
 if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
 	dpdk_conf.has('RTE_ARCH_ARM64'))
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
                       ` (24 preceding siblings ...)
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name Timothy McDaniel
@ 2021-04-13 20:14     ` Timothy McDaniel
  2021-04-14 19:11       ` Jerin Jacob
  25 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-13 20:14 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas

The new devarg names and their default values
are listed below. The defaults have not changed, and
none of these parameters are accessed in the fast path.

poll_interval=1000
sw_credit_quanta=32
default_depth_thresh=256
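
For illustration only (the devarg names come from this patch; the values simply restate the defaults), the same settings could be supplied when creating the vdev:

    #include <rte_bus_vdev.h>

    static int
    create_dlb_vdev(void)
    {
        return rte_vdev_init("dlb_event",
                             "poll_interval=1000,sw_credit_quanta=32,"
                             "default_depth_thresh=256");
    }

The equivalent EAL command-line form would be
--vdev=dlb_event,poll_interval=1000,sw_credit_quanta=32,default_depth_thresh=256.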

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 config/rte_config.h            |   3 -
 drivers/event/dlb/dlb2.c       | 109 +++++++++++++++++++++++++++++++--
 drivers/event/dlb/dlb2_priv.h  |  14 +++++
 drivers/event/dlb/pf/dlb2_pf.c |   5 +-
 4 files changed, 121 insertions(+), 10 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 1aa852cd7..836aca3c2 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -140,9 +140,6 @@
 #define RTE_LIBRTE_QEDE_FW ""
 
 /* DLB defines */
-#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
 #undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
-#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/drivers/event/dlb/dlb2.c b/drivers/event/dlb/dlb2.c
index e5def9357..818b1c367 100644
--- a/drivers/event/dlb/dlb2.c
+++ b/drivers/event/dlb/dlb2.c
@@ -315,6 +315,66 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
+static int
+set_poll_interval(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *poll_interval = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(poll_interval, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_sw_credit_quanta(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *sw_credit_quanta = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(sw_credit_quanta, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_default_depth_thresh(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *default_depth_thresh = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(default_depth_thresh, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -923,9 +983,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
 		ev_queue->depth_threshold =
-			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1617,7 +1677,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = dlb2->sw_credit_quanta;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1712,9 +1772,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
 		ev_queue->depth_threshold =
-			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -3065,7 +3125,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB_POLL_INTERVAL;
+		uint64_t poll_interval = dlb2->poll_interval;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
@@ -4020,6 +4080,9 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	dlb2->max_num_events_override = dlb2_args->max_num_events;
 	dlb2->num_dir_credits_override = dlb2_args->num_dir_credits_override;
 	dlb2->qm_instance.cos_id = dlb2_args->cos_id;
+	dlb2->poll_interval = dlb2_args->poll_interval;
+	dlb2->sw_credit_quanta = dlb2_args->sw_credit_quanta;
+	dlb2->default_depth_thresh = dlb2_args->default_depth_thresh;
 
 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
@@ -4120,6 +4183,9 @@ dlb2_parse_params(const char *params,
 					     DEV_ID_ARG,
 					     DLB2_QID_DEPTH_THRESH_ARG,
 					     DLB2_COS_ARG,
+					     DLB2_POLL_INTERVAL_ARG,
+					     DLB2_SW_CREDIT_QUANTA_ARG,
+					     DLB2_DEPTH_THRESH_ARG,
 					     NULL };
 
 	if (params != NULL && params[0] != '\0') {
@@ -4202,6 +4268,37 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
+			ret = rte_kvargs_process(kvlist, DLB2_POLL_INTERVAL_ARG,
+						 set_poll_interval,
+						 &dlb2_args->poll_interval);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing poll interval parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist,
+						 DLB2_SW_CREDIT_QUANTA_ARG,
+						 set_sw_credit_quanta,
+						 &dlb2_args->sw_credit_quanta);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing sw credit quanta parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist, DLB2_DEPTH_THRESH_ARG,
+					set_default_depth_thresh,
+					&dlb2_args->default_depth_thresh);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing set depth thresh parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
 			rte_kvargs_free(kvlist);
 		}
 	}
diff --git a/drivers/event/dlb/dlb2_priv.h b/drivers/event/dlb/dlb2_priv.h
index f11e08fca..3c540a264 100644
--- a/drivers/event/dlb/dlb2_priv.h
+++ b/drivers/event/dlb/dlb2_priv.h
@@ -23,6 +23,11 @@
 /* common name for all dlb devs (dlb v2.0, dlb v2.5 ...) */
 #define EVDEV_DLB2_NAME_PMD dlb_event
 
+/* Default values for command line devargs */
+#define DLB2_POLL_INTERVAL_DEFAULT 1000
+#define DLB2_SW_CREDIT_QUANTA_DEFAULT 32
+#define DLB2_DEPTH_THRESH_DEFAULT 256
+
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
 #define DLB2_MAX_NUM_EVENTS "max_num_events"
@@ -31,6 +36,9 @@
 #define DLB2_DEFER_SCHED_ARG "defer_sched"
 #define DLB2_QID_DEPTH_THRESH_ARG "qid_depth_thresh"
 #define DLB2_COS_ARG "cos"
+#define DLB2_POLL_INTERVAL_ARG "poll_interval"
+#define DLB2_SW_CREDIT_QUANTA_ARG "sw_credit_quanta"
+#define DLB2_DEPTH_THRESH_ARG "default_depth_thresh"
 
 /* Begin HW related defines and structs */
 
@@ -571,6 +579,9 @@ struct dlb2_eventdev {
 	bool global_dequeue_wait; /* Not using per dequeue wait if true */
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 	uint8_t revision;
 	uint8_t version;
 	bool configured;
@@ -604,6 +615,9 @@ struct dlb2_devargs {
 	int defer_sched;
 	struct dlb2_qid_depth_thresholds qid_depth_thresholds;
 	enum dlb2_cos cos_id;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 };
 
 /* End Eventdev related defines and structs */
diff --git a/drivers/event/dlb/pf/dlb2_pf.c b/drivers/event/dlb/pf/dlb2_pf.c
index f57dc1584..e9da89d65 100644
--- a/drivers/event/dlb/pf/dlb2_pf.c
+++ b/drivers/event/dlb/pf/dlb2_pf.c
@@ -615,7 +615,10 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
 		.num_dir_credits_override = -1,
 		.qid_depth_thresholds = { {0} },
-		.cos_id = DLB2_COS_DEFAULT
+		.cos_id = DLB2_COS_DEFAULT,
+		.poll_interval = DLB2_POLL_INTERVAL_DEFAULT,
+		.sw_credit_quanta = DLB2_SW_CREDIT_QUANTA_DEFAULT,
+		.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT
 	};
 	struct dlb2_eventdev *dlb2;
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
@ 2021-04-14 19:11       ` Jerin Jacob
  2021-04-14 19:38         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:11 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The new devarg names and their default values
> are listed below. The defaults have not changed, and
> none of these parameters are accessed in the fast path.
>
> poll_interval=1000
> sw_credit_quanta=32
> default_depth_thresh=256
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

Please check the CI failures. Please make it a practice to check the
fate of the patch in CI after submission, to avoid additional delay
in merging the patch.

http://patches.dpdk.org/project/dpdk/patch/1618344896-2090-27-git-send-email-timothy.mcdaniel@intel.com/

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe Timothy McDaniel
@ 2021-04-14 19:16       ` Jerin Jacob
  2021-04-14 19:41         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:16 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Apr 14, 2021 at 1:46 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This commit adds dlb v2.5 probe support, and updates
> parameter parsing.
>
> The dlb v2.5 device differs from dlb v2, in that the
> number of resources (ports, queues, ...) is different,
> so macros have been added to take the device version
> into account.
>

Please move the original source cleanup (the below items) to separate patch

> This commit also cleans up a few issues in the original
> dlb2 source:
> - eliminate duplicate constant definitions
> - removed unused constant definitions
> - remove #ifdef FPGA
> - remove unused include file, dlb2_mbox.h

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
@ 2021-04-14 19:20       ` Jerin Jacob
  2021-04-14 19:41         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:20 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:07 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Updated low level hardware functions to add DLB 2.5 support

v2.5

> for creating load balanced queues.


The subject also has V2.5; change it to v2.5.

>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain
  2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain Timothy McDaniel
@ 2021-04-14 19:23       ` Jerin Jacob
  2021-04-14 19:42         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:23 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Mar 31, 2021 at 1:08 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Update low level functions to account for new register map
> and hardware access macros.

Patches 7 to 13 have the SAME comment. Please update each based on its
content and subsystem.


>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name Timothy McDaniel
@ 2021-04-14 19:31       ` Jerin Jacob
  2021-04-14 19:42         ` McDaniel, Timothy
  2021-04-14 19:44       ` Jerin Jacob
  1 sibling, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:31 UTC (permalink / raw)
  To: Timothy McDaniel, David Marchand, Ray Kinsella
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Updated eventdev device name to be dlb_event instead of
> dlb2_event.  The new name will be used for all versions
> of the DLB hardware. This change required corresponding changes
> to the directory name that contains the PMD, as well
> as the documentation files, build infrastructure, and PMD
> specific APIs.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

Please change the subject to "event/dlb: rename dlb2 driver", or so.

Also, see the below patch and change the abignore to dlb2 now.

------------------

commit 4113ddd45293d7b26ff4033bfd86cef03d29124f
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Tue Apr 13 10:29:37 2021 +0200

    devtools: skip removed DLB driver in ABI check

    The eventdev driver DLB was removed in DPDK 21.05,
    breaking the ABI check.
    The exception was agreed so we just need to skip this check.

    Note: complete removal of a driver cannot be ignored
    in devtools/libabigail.abignore, so the script must be patched.

    Fixes: 698fa829415d ("event/dlb: remove driver")

    Reported-by: David Marchand <david.marchand@redhat.com>
    Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
    Reviewed-by: David Marchand <david.marchand@redhat.com>

---------------------

> ---
>  MAINTAINERS                                   |  6 +-
>  app/test/test_eventdev.c                      |  6 +-
>  config/rte_config.h                           | 11 ++-
>  doc/api/doxy-api-index.md                     |  2 +-
>  doc/api/doxy-api.conf.in                      |  2 +-
>  doc/guides/eventdevs/{dlb2.rst => dlb.rst}    | 88 +++++++++----------
>  doc/guides/eventdevs/index.rst                |  2 +-
>  doc/guides/rel_notes/release_21_05.rst        |  5 ++
>  drivers/event/{dlb2 => dlb}/dlb2.c            | 25 +++---
>  drivers/event/{dlb2 => dlb}/dlb2_iface.c      |  0
>  drivers/event/{dlb2 => dlb}/dlb2_iface.h      |  0
>  drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |  0
>  drivers/event/{dlb2 => dlb}/dlb2_log.h        |  0
>  drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  7 +-
>  drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |  8 +-
>  drivers/event/{dlb2 => dlb}/dlb2_user.h       |  0
>  drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |  0
>  drivers/event/{dlb2 => dlb}/meson.build       |  4 +-
>  .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  0
>  .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |  0
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |  0
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |  0
>  .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |  0
>  .../event/{dlb2 => dlb}/pf/base/dlb2_regs.h   |  0
>  .../{dlb2 => dlb}/pf/base/dlb2_resource.c     |  0
>  .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |  0
>  drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |  0
>  drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |  0
>  drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |  0
>  .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |  6 +-
>  .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      | 12 +--
>  drivers/event/{dlb2 => dlb}/version.map       |  2 +-
>  drivers/event/meson.build                     |  2 +-
>  33 files changed, 94 insertions(+), 94 deletions(-)
>  rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (84%)
>  rename drivers/event/{dlb2 => dlb}/dlb2.c (99%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (99%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_user.h (100%)
>  rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (100%)
>  rename drivers/event/{dlb2 => dlb}/meson.build (89%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_regs.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
>  rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (100%)
>  rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
>  rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
>  rename drivers/event/{dlb2 => dlb}/version.map (60%)

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs
  2021-04-14 19:11       ` Jerin Jacob
@ 2021-04-14 19:38         ` McDaniel, Timothy
  2021-04-14 19:52           ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-14 19:38 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, April 14, 2021 2:11 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> Eads <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to
> runtime devargs
> 
> On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > The new devarg names and their default values
> > are listed below. The defaults have not changed, and
> > none of these parameters are accessed in the fast path.
> >
> > poll_interval=1000
> > sw_credit_quanta=32
> > default_depth_thresh=256
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> 
> Please check the CI failures. Please make it a practice to check the
> fate of the patch in CI after submission, to avoid additional delay
> in merging the patch.
> 
> http://patches.dpdk.org/project/dpdk/patch/1618344896-2090-27-git-send-
> email-timothy.mcdaniel@intel.com/

The failures seem to be:
1) a reference to dlb2 documentation in the 20.11 release notes
2) an apply failure with the dequeue optimization patch. It requires the DLB v2.5 patches
to have been applied first. I added a depends-on line to its cover sheet, but that
did not seem to help
3) many false positives reported by checkpatch, mostly spelling related. I do
not see any real issues there

As for the code cleanup, this was very minor, such as not including mbox.h, which we do not use.
Not sure it warrants its own patch, and I'm not sure I can even identify the other minor cleanups, if any.

Thanks,
Tim


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
  2021-04-14 19:16       ` Jerin Jacob
@ 2021-04-14 19:41         ` McDaniel, Timothy
  2021-04-14 19:47           ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-14 19:41 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, April 14, 2021 2:16 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> Eads <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
> 
> On Wed, Apr 14, 2021 at 1:46 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > This commit adds dlb v2.5 probe support, and updates
> > parameter parsing.
> >
> > The dlb v2.5 device differs from dlb v2, in that the
> > number of resources (ports, queues, ...) is different,
> > so macros have been added to take the device version
> > into account.
> >
> 
> Please move the original source cleanup (the below items) to separate patch
> 
> > This commit also cleans up a few issues in the original
> > dlb2 source:
> > - eliminate duplicate constant definitions
> > - removed unused constant definitions
> > - remove #ifdef FPGA
> > - remove unused include file, dlb2_mbox.h

All in all it was quite minor, but I'll try to do this if you think it's necessary.

Thanks,
Tim

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue
  2021-04-14 19:20       ` Jerin Jacob
@ 2021-04-14 19:41         ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-14 19:41 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, April 14, 2021 2:20 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> Eads <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue
> 
> On Wed, Mar 31, 2021 at 1:07 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > Updated low level hardware functions to add DLB 2.5 support
> 
> v2.5
> 
> > for creating load balanced queues.
> 
> 
> The subject also has V2.5; change it to v2.5.
> 
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

Okay

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain
  2021-04-14 19:23       ` Jerin Jacob
@ 2021-04-14 19:42         ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-14 19:42 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, April 14, 2021 2:24 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> Eads <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain
> 
> On Wed, Mar 31, 2021 at 1:08 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > Update low level functions to account for new register map
> > and hardware access macros.
> 
> Patches 7 to 13 have the SAME comment. Please update each based on its
> content and subsystem.
> 
> 
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > ---

Will do

Thanks,
Tim

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-14 19:31       ` Jerin Jacob
@ 2021-04-14 19:42         ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-14 19:42 UTC (permalink / raw)
  To: Jerin Jacob, David Marchand, Ray Kinsella
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, April 14, 2021 2:32 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>; David Marchand
> <david.marchand@redhat.com>; Ray Kinsella <mdr@ashroe.eu>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> Eads <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from
> device name
> 
> On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > Updated eventdev device name to be dlb_event instead of
> > dlb2_event.  The new name will be used for all versions
> > of the DLB hardware. This change required corresponding changes
> > to the directory name that contains the PMD, as well
> > as the documentation files, build infrastructure, and PMD
> > specific APIs.
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> 
> Please change the subject to "event/dlb: rename dlb2 driver", or so.
> 
> Also, see the below patch and change the abignore to dlb2 now.
> 
> ------------------
> 
> commit 4113ddd45293d7b26ff4033bfd86cef03d29124f
> Author: Thomas Monjalon <thomas@monjalon.net>
> Date:   Tue Apr 13 10:29:37 2021 +0200
> 
>     devtools: skip removed DLB driver in ABI check
> 
>     The eventdev driver DLB was removed in DPDK 21.05,
>     breaking the ABI check.
>     The exception was agreed so we just need to skip this check.
> 
>     Note: complete removal of a driver cannot be ignored
>     in devtools/libabigail.abignore, so the script must be patched.
> 
>     Fixes: 698fa829415d ("event/dlb: remove driver")
> 
>     Reported-by: David Marchand <david.marchand@redhat.com>
>     Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>     Reviewed-by: David Marchand <david.marchand@redhat.com>
> 
> ---------------------
> 
> > ---
> >  MAINTAINERS                                   |  6 +-
> >  app/test/test_eventdev.c                      |  6 +-
> >  config/rte_config.h                           | 11 ++-
> >  doc/api/doxy-api-index.md                     |  2 +-
> >  doc/api/doxy-api.conf.in                      |  2 +-
> >  doc/guides/eventdevs/{dlb2.rst => dlb.rst}    | 88 +++++++++----------
> >  doc/guides/eventdevs/index.rst                |  2 +-
> >  doc/guides/rel_notes/release_21_05.rst        |  5 ++
> >  drivers/event/{dlb2 => dlb}/dlb2.c            | 25 +++---
> >  drivers/event/{dlb2 => dlb}/dlb2_iface.c      |  0
> >  drivers/event/{dlb2 => dlb}/dlb2_iface.h      |  0
> >  drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |  0
> >  drivers/event/{dlb2 => dlb}/dlb2_log.h        |  0
> >  drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  7 +-
> >  drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |  8 +-
> >  drivers/event/{dlb2 => dlb}/dlb2_user.h       |  0
> >  drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |  0
> >  drivers/event/{dlb2 => dlb}/meson.build       |  4 +-
> >  .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  0
> >  .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |  0
> >  .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |  0
> >  .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |  0
> >  .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |  0
> >  .../event/{dlb2 => dlb}/pf/base/dlb2_regs.h   |  0
> >  .../{dlb2 => dlb}/pf/base/dlb2_resource.c     |  0
> >  .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |  0
> >  drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |  0
> >  drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |  0
> >  drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |  0
> >  .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |  6 +-
> >  .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      | 12 +--
> >  drivers/event/{dlb2 => dlb}/version.map       |  2 +-
> >  drivers/event/meson.build                     |  2 +-
> >  33 files changed, 94 insertions(+), 94 deletions(-)
> >  rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (84%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2.c (99%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (99%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_user.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (100%)
> >  rename drivers/event/{dlb2 => dlb}/meson.build (89%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_regs.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
> >  rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (100%)
> >  rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
> >  rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
> >  rename drivers/event/{dlb2 => dlb}/version.map (60%)

Okay

Thanks,
Tim

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name Timothy McDaniel
  2021-04-14 19:31       ` Jerin Jacob
@ 2021-04-14 19:44       ` Jerin Jacob
  2021-04-14 20:33         ` Thomas Monjalon
  1 sibling, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:44 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Updated eventdev device name to be dlb_event instead of
> dlb2_event.  The new name will be used for all versions
> of the DLB hardware. This change required corresponding changes
> to the directory name that contains the PMD, as well
> as the documentation files, build infrastructure, and PMD
> specific APIs.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index 8a601e0a7..5b25f1479 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -94,6 +94,11 @@ New Features
>
>    * Added support for preferred busy polling.
>
> +* **Updated DLB driver.**
> +
> +  * Added support for v2.5 hardware.
> +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.

@Thomas Monjalon, do we need to update the "Removed Items" section?

@McDaniel, Timothy
The following section in doc/guides/rel_notes/release_20_11.rst will emit an
error now. Please change it as needed.

* **Added a new driver for the Intel Dynamic Load Balancer v2.0 device.**

  Added the new ``dlb2`` eventdev driver for the Intel DLB V2.0 device. See the
  :doc:`../eventdevs/dlb2` eventdev guide for more details on this new driver.


> +
>  * **Updated testpmd.**

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
  2021-04-14 19:41         ` McDaniel, Timothy
@ 2021-04-14 19:47           ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:47 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Thu, Apr 15, 2021 at 1:11 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Wednesday, April 14, 2021 2:16 PM
> > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> > Eads <gage.eads@intel.com>; Van Haaren, Harry
> > <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> > Monjalon <thomas@monjalon.net>
> > Subject: Re: [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe
> >
> > On Wed, Apr 14, 2021 at 1:46 AM Timothy McDaniel
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > > This commit adds dlb v2.5 probe support, and updates
> > > parameter parsing.
> > >
> > > The dlb v2.5 device differs from dlb v2, in that the
> > > number of resources (ports, queues, ...) is different,
> > > so macros have been added to take the device version
> > > into account.
> > >
> >
> > Please move the original source cleanup (the below items) to separate patch
> >
> > > This commit also cleans up a few issues in the original
> > > dlb2 source:
> > > - eliminate duplicate constant definitions
> > > - removed unused constant definitions
> > > - remove #ifdef FPGA
> > > - remove unused include file, dlb2_mbox.h
>
> All in all it was quite minor, but I'll try to do this if you think it's necessary.

Yes, please, as it does not match the subject "event/dlb2: add v2.5 probe".

>
> Thanks,
> Tim
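
The macros that "take the device version into account", mentioned in the
quoted commit message, are resource limits parameterized by a hardware-version
argument. A minimal sketch, with names and values mirroring the v4 diff later
in this thread (the usage comment is illustrative only):

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

#define DLB2_MAX_NUM_DIR_QUEUES_V2   64
#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
#define DLB2_MAX_NUM_DIR_QUEUES(ver) ((ver) == DLB2_HW_V2 ? \
				      DLB2_MAX_NUM_DIR_QUEUES_V2 : \
				      DLB2_MAX_NUM_DIR_QUEUES_V2_5)

/*
 * Arrays are sized for the larger (v2.5) limit at compile time, while
 * runtime loops and bounds checks use the per-version macro, e.g.
 * "for (q = 0; q < DLB2_MAX_NUM_DIR_QUEUES(dlb2->version); q++)".
 */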

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs
  2021-04-14 19:38         ` McDaniel, Timothy
@ 2021-04-14 19:52           ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-14 19:52 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren, Harry,
	Jerin Jacob, Thomas Monjalon

On Thu, Apr 15, 2021 at 1:09 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Wednesday, April 14, 2021 2:11 PM
> > To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Gage
> > Eads <gage.eads@intel.com>; Van Haaren, Harry
> > <harry.van.haaren@intel.com>; Jerin Jacob <jerinj@marvell.com>; Thomas
> > Monjalon <thomas@monjalon.net>
> > Subject: Re: [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to
> > runtime devargs
> >
> > On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > > The new devarg names and their default values
> > > are listed below. The defaults have not changed, and
> > > none of these parameters are accessed in the fast path.
> > >
> > > poll_interval=1000
> > > sw_credit_quanta=32
> > > default_depth_thresh=256
> > >
> > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> >
> > Please check the CI failures. Please make it a practice to check the CI
> > results for each patch after submission, to avoid additional delay in
> > merging the patch.
> >
> > http://patches.dpdk.org/project/dpdk/patch/1618344896-2090-27-git-send-
> > email-timothy.mcdaniel@intel.com/
>
> The failures seem to be:
> 1) a reference to the dlb2 documentation in the 20.11 release notes

Please update the 20.11 release notes to fix the issue.



> 2) an apply failure with the dequeue optimization patch. It requires the DLB v2.5 patches
> to have been applied previously. I added a depends-on line to its cover sheet, but that
> did not seem to help
> 3) there are many false positives reported by checkpatches, mostly spelling related. I do
> not see any real issues there
>
> As for the code cleanup, this was very minor, such as not including mbox.h, which we do not use.
> Not sure it warrants its own patch, and I'm not sure I can even identify the other minor cleanups, if any.
>
> Thanks,
> Tim
>
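
For context on the devargs listed in the quoted commit message: device
arguments are typically appended to the PCI device on the EAL command line
(for example, -a <BDF>,poll_interval=500) and are parsed inside the PMD with
the rte_kvargs API, as dlb2_parse_params does in the diffs elsewhere in this
thread. A minimal stand-alone sketch with a hypothetical handler, using only
the poll_interval key:

#include <stdio.h>
#include <stdlib.h>
#include <rte_kvargs.h>

/* Hypothetical handler: convert the value string and store it. */
static int
set_poll_interval(const char *key, const char *value, void *opaque)
{
	*(unsigned long *)opaque = strtoul(value, NULL, 0);
	printf("%s = %s\n", key, value);
	return 0;
}

int
main(void)
{
	static const char * const valid_keys[] = { "poll_interval", NULL };
	unsigned long poll_interval = 1000;	/* default from the commit message */
	struct rte_kvargs *kvlist;

	/* A devarg string as it would arrive from the EAL command line. */
	kvlist = rte_kvargs_parse("poll_interval=500", valid_keys);
	if (kvlist == NULL)
		return 1;

	rte_kvargs_process(kvlist, "poll_interval", set_poll_interval,
			   &poll_interval);
	rte_kvargs_free(kvlist);
	return 0;
}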

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-14 19:44       ` Jerin Jacob
@ 2021-04-14 20:33         ` Thomas Monjalon
  2021-04-15  3:22           ` McDaniel, Timothy
  2021-04-15  5:47           ` Jerin Jacob
  0 siblings, 2 replies; 174+ messages in thread
From: Thomas Monjalon @ 2021-04-14 20:33 UTC (permalink / raw)
  To: Timothy McDaniel, Jerin Jacob
  Cc: Jerin Jacob, dpdk-dev, Erik Gabriel Carrillo, Gage Eads,
	Van Haaren, Harry, david.marchand

14/04/2021 21:44, Jerin Jacob:
> On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > Updated eventdev device name to be dlb_event instead of
> > dlb2_event.  The new name will be used for all versions
> > of the DLB hardware. This change required corresponding changes
> > to the directory name that contains the PMD, as well
> > as the documentation files, build infrastructure, and PMD
> > specific APIs.
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > --- a/doc/guides/rel_notes/release_21_05.rst
> > +++ b/doc/guides/rel_notes/release_21_05.rst
> > +* **Updated DLB driver.**
> > +
> > +  * Added support for v2.5 hardware.
> > +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
> 
>  @Thomas Monjalon , Do we need to update the "Removed Items" section?

I did not follow the exact change.
Is it changing the driver library name?
If yes, it is one more ABI issue.
If not, I don't see what to update in the release notes.



^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
                     ` (2 preceding siblings ...)
  2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
@ 2021-04-15  1:48   ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 01/27] event/dlb2: minor code cleanup Timothy McDaniel
                       ` (26 more replies)
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
  4 siblings, 27 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

This patch series adds support for DLB v2.5 to
the current DLB V2.0 PMD. The resulting PMD supports
both hardware versions.

The main differences between the DLB v2.5 and v2.0 hardware
are:
- Number of queues/ports
- DLB v2.5 uses a combined credit pool, whereas DLB v2.0
  splits credits into 2 pools, a directed credit pool and a
  load balanced credit pool.
- Different register maps, with different bit names and offsets
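
The combined versus split credit pools noted above are modeled in the patches
below as overlapping union members selected by a per-device version field. A
minimal stand-alone sketch of the idea; the helper function is hypothetical,
while the field and macro names mirror the series:

#include <stdbool.h>
#include <stdint.h>

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

/* Minimal sketch, not part of the series: credit state per HW version. */
struct credit_state {
	uint8_t version;
	union {
		struct {			/* v2.0: two separate pools */
			uint32_t num_ldb_credits;
			uint32_t num_dir_credits;
		};
		struct {			/* v2.5: one combined pool */
			uint32_t num_credits;
		};
	};
};

/* True if a load-balanced enqueue can obtain a credit (hypothetical helper). */
static inline bool
ldb_credit_available(const struct credit_state *cs)
{
	if (cs->version == DLB2_HW_V2)
		return cs->num_ldb_credits > 0;
	return cs->num_credits > 0;
}

The unions added to dlb2_priv.h later in this series follow the same pattern.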

In order to support both hardware versions with the same PMD,
and avoid code duplication, the file dlb2_resource.c required a
complete rewrite. This required some creative staging of the changes
in order to keep the individual patches relatively small, while
also meeting the requirement that all individual patches in the set
compile cleanly.

To accomplish this, a few temporary files are used:

dlb2_hw_types_new.h
dlb2_resources_new.h
dlb2_resources_new.c

As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
low level logic, the corresponding old code is removed from
dlb2_resource.c, thus allowing both the original and new code to
continue to compile and link cleanly. Once all of the code has been
migrated to the new model, the old versions of the files are removed,
and the new versions are renamed, effectively replacing the old original
files.

As you review the code, you can ignore the code deletions from
dlb2_resource.c, as that file continues to shrink as the new
corresponding logic is added to dlb2_resource_new.c.

Changes since V3:
1) Moved minor cleanup to its own patch. This included
	a) remove FPGA references
	b) eliminate duplicate macros/defines in hw_types
	c) don't include dlb2_mbox.h
	d) delete unused defines/macros (SMON, INT, ...)
2) Changed "DLB V2.x" and "V2.x" to simply "v2.x", where v is lower case
3) Updated 20.11 release notes to remove reference to dlb2 doc, since
   it is now named dlb.rst
4) Updated commit message/header text, as requested

Changes since V2:
1) fix commit headers
2) fix commit message repeated words
3) remove FPGA reference
4) split out new v2.5 register definitions into separate patch
5) fixed documentation to use DLB and dlb_event exclusively,
   instead of the old names such as dlb1_event, dlb2_event,
   DLB2, ... Final doc updates are done in the patch that performs
   the device rename from DLB2 to simply DLB
6) use component event/dlb for the commit which changes the device name and
   for all subsequent commits
7) Move all DLB constants out of config/rte_config.h except QUELL_STATS,
   which is used in the fastpath. Exposed these as devarg command line
   parameters
8) Removed "TEMPORARY" comment leftover in dlb2_osdep.h
9) squashed 20-21 and 22-23 since they were logically the same as 19-20,
   which was requested to be squashed
10) delete old dlb2.rst - dlb.rst has been updated for v2.0 and v2.1

Changes since V1:
1) Simplified subject text for all patches
2) correct typos/spelling
3) remove FPGA references
4) remove stale sysconf() references
5) fixed patches that had compilation issues
6) updated release notes
7) renamed dlb device from dlb2_event to dlb_event
8) moved dlb2 directory to dlb, to match the name change
9) fixed other cases where "dlb2" was being used externally

Timothy McDaniel (27):
  event/dlb2: minor code cleanup
  event/dlb2: add v2.5 probe
  event/dlb2: add v2.5 HW register definitions
  event/dlb2: add v2.5 HW init
  event/dlb2: add v2.5 get resources
  event/dlb2: add v2.5 create sched domain
  event/dlb2: add v2.5 domain reset
  event/dlb2: add v2.5 create ldb queue
  event/dlb2: add v2.5 create ldb port
  event/dlb2: add v2.5 create dir port
  event/dlb2: add v2.5 create dir queue
  event/dlb2: add v2.5 map qid
  event/dlb2: add v2.5 unmap queue
  event/dlb2: add v2.5 start domain
  event/dlb2: add v2.5 credit scheme
  event/dlb2: add v2.5 queue depth functions
  event/dlb2: add v2.5 finish map/unmap
  event/dlb2: add v2.5 sparse cq mode
  event/dlb2: add v2.5 sequence number management
  event/dlb2: use new implementation of resource header
  event/dlb2: use new implementation of resource file
  event/dlb2: use new implementation of HW types header
  event/dlb2: use new combined register map
  event/dlb2: update xstats for v2.5
  doc/dlb2: update documentation for v2.5
  event/dlb: rename dlb2 driver
  event/dlb: move rte config defines to runtime devargs

 MAINTAINERS                                   |    6 +-
 app/test/test_eventdev.c                      |    6 +-
 config/rte_config.h                           |    8 +-
 doc/api/doxy-api-index.md                     |    2 +-
 doc/api/doxy-api.conf.in                      |    2 +-
 doc/guides/eventdevs/{dlb2.rst => dlb.rst}    |  155 +-
 doc/guides/eventdevs/index.rst                |    2 +-
 doc/guides/rel_notes/release_20_11.rst        |    2 +-
 doc/guides/rel_notes/release_21_05.rst        |    5 +
 drivers/event/{dlb2 => dlb}/dlb2.c            |  550 ++-
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |    0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |    0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |    0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |    0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  177 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |    8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |   27 +-
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |   70 +-
 drivers/event/{dlb2 => dlb}/meson.build       |    4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  106 +-
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |    2 +
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |    0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |    0
 drivers/event/dlb/pf/base/dlb2_regs.h         | 4304 +++++++++++++++++
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     | 3278 +++++++------
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |   28 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |   37 +-
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |    0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |   67 +-
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |    6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      |   12 +-
 drivers/event/{dlb2 => dlb}/version.map       |    2 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h        |  596 ---
 drivers/event/dlb2/pf/base/dlb2_regs.h        | 2527 ----------
 drivers/event/meson.build                     |    2 +-
 36 files changed, 6922 insertions(+), 5069 deletions(-)
 rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (72%)
 rename drivers/event/{dlb2 => dlb}/dlb2.c (89%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (77%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (97%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (94%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (80%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 create mode 100644 drivers/event/dlb/pf/base/dlb2_regs.h
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (68%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (99%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (95%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (91%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h

-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 01/27] event/dlb2: minor code cleanup
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe Timothy McDaniel
                       ` (25 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

1) Remove references to FPGA.
2) Do not include dlb2_mbox.h; it is not needed.
3) Remove duplicate macros/defines that were
   present in both dlb2_priv.h and dlb2_hw_types.h.
   Update dlb2_resource.c to include dlb2_priv.h
   so that it picks up the macros/defines that
   have now been consolidated.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  46 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h     | 596 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |   1 -
 3 files changed, 2 insertions(+), 641 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 1d99f1e01..c7cd41f8b 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -5,55 +5,25 @@
 #ifndef __DLB2_HW_TYPES_H
 #define __DLB2_HW_TYPES_H
 
+#include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_DOMAINS			32
-#define DLB2_MAX_NUM_LDB_QUEUES			32 /* LDB == load-balanced */
-#define DLB2_MAX_NUM_DIR_QUEUES			64 /* DIR == directed */
-#define DLB2_MAX_NUM_LDB_PORTS			64
-#define DLB2_MAX_NUM_DIR_PORTS			64
-#define DLB2_MAX_NUM_LDB_CREDITS		(8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS		(2 * 1024)
-#define DLB2_MAX_NUM_HIST_LIST_ENTRIES		2048
 #define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ		8
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_QID_PRIORITIES			8
+
 #define DLB2_NUM_ARB_WEIGHTS			8
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-#ifdef FPGA
-#define DLB2_HZ					2000000
-#else
-#define DLB2_HZ					800000000
-#endif
-
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
-/* Interrupt related macros */
-#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
-#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
-#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
-#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
-	DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
-#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
-	DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
-
-/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
-#define DLB2_INT_NON_CQ 0
-
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
 
@@ -65,18 +35,6 @@
 #define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
 #define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
 
-#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
-#define DLB2_VF_BASE_CQ_VECTOR_ID	     0
-#define DLB2_VF_LAST_CQ_VECTOR_ID	     30
-#define DLB2_VF_MBOX_VECTOR_ID		     31
-#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
-
-#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
-					     DLB2_MAX_NUM_DIR_PORTS + 1)
-
 /*
  * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
  * the PF driver.
diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
deleted file mode 100644
index ce462c089..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_mbox.h
+++ /dev/null
@@ -1,596 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_BASE_DLB2_MBOX_H
-#define __DLB2_BASE_DLB2_MBOX_H
-
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-
-#define DLB2_MBOX_INTERFACE_VERSION 1
-
-/*
- * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
- * to send requests of its own (e.g. notifying a VF of an impending FLR).
- * To avoid communication race conditions, e.g. the PF sends a response and then
- * sends a request before the VF reads the response, the PF->VF mailbox is
- * divided into two sections:
- * - Bytes 0-47: PF responses
- * - Bytes 48-63: PF requests
- *
- * Partitioning the PF->VF mailbox allows responses and requests to occupy the
- * mailbox simultaneously.
- */
-#define DLB2_PF2VF_RESP_BYTES	  48
-#define DLB2_PF2VF_RESP_BASE	  0
-#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
-
-#define DLB2_PF2VF_REQ_BYTES	  16
-#define DLB2_PF2VF_REQ_BASE	  (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
-#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
-
-/*
- * Similarly, the VF->PF mailbox is divided into two sections:
- * - Bytes 0-239: VF requests
- * -- (Bytes 0-3 are unused due to a hardware errata)
- * - Bytes 240-255: VF responses
- */
-#define DLB2_VF2PF_REQ_BYTES	 236
-#define DLB2_VF2PF_REQ_BASE	 4
-#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
-
-#define DLB2_VF2PF_RESP_BYTES	  16
-#define DLB2_VF2PF_RESP_BASE	  (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
-#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
-
-/* VF-initiated commands */
-enum dlb2_mbox_cmd_type {
-	DLB2_MBOX_CMD_REGISTER,
-	DLB2_MBOX_CMD_UNREGISTER,
-	DLB2_MBOX_CMD_GET_NUM_RESOURCES,
-	DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
-	DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
-	DLB2_MBOX_CMD_CREATE_LDB_PORT,
-	DLB2_MBOX_CMD_CREATE_DIR_PORT,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT,
-	DLB2_MBOX_CMD_DISABLE_LDB_PORT,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT,
-	DLB2_MBOX_CMD_DISABLE_DIR_PORT,
-	DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_MAP_QID,
-	DLB2_MBOX_CMD_UNMAP_QID,
-	DLB2_MBOX_CMD_START_DOMAIN,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
-	DLB2_MBOX_CMD_ARM_CQ_INTR,
-	DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
-	DLB2_MBOX_CMD_GET_SN_ALLOCATION,
-	DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
-	DLB2_MBOX_CMD_GET_COS_BW,
-	DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
-	DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
-
-	/* NUM_QE_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_CMD_TYPES,
-};
-
-static const char dlb2_mbox_cmd_type_strings[][128] = {
-	"DLB2_MBOX_CMD_REGISTER",
-	"DLB2_MBOX_CMD_UNREGISTER",
-	"DLB2_MBOX_CMD_GET_NUM_RESOURCES",
-	"DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_LDB_PORT",
-	"DLB2_MBOX_CMD_CREATE_DIR_PORT",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_DISABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_DISABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_MAP_QID",
-	"DLB2_MBOX_CMD_UNMAP_QID",
-	"DLB2_MBOX_CMD_START_DOMAIN",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
-	"DLB2_MBOX_CMD_ARM_CQ_INTR",
-	"DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
-	"DLB2_MBOX_CMD_GET_SN_ALLOCATION",
-	"DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
-	"DLB2_MBOX_CMD_GET_COS_BW",
-	"DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
-	"DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
-};
-
-/* PF-initiated commands */
-enum dlb2_mbox_vf_cmd_type {
-	DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
-	DLB2_MBOX_VF_CMD_NOTIFICATION,
-	DLB2_MBOX_VF_CMD_IN_USE,
-
-	/* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_VF_CMD_TYPES,
-};
-
-static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
-	"DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
-	"DLB2_MBOX_VF_CMD_NOTIFICATION",
-	"DLB2_MBOX_VF_CMD_IN_USE",
-};
-
-#define DLB2_MBOX_CMD_TYPE(hdr) \
-	(((struct dlb2_mbox_req_hdr *)hdr)->type)
-#define DLB2_MBOX_CMD_STRING(hdr) \
-	dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
-
-enum dlb2_mbox_status_type {
-	DLB2_MBOX_ST_SUCCESS,
-	DLB2_MBOX_ST_INVALID_CMD_TYPE,
-	DLB2_MBOX_ST_VERSION_MISMATCH,
-	DLB2_MBOX_ST_INVALID_OWNER_VF,
-};
-
-static const char dlb2_mbox_status_type_strings[][128] = {
-	"DLB2_MBOX_ST_SUCCESS",
-	"DLB2_MBOX_ST_INVALID_CMD_TYPE",
-	"DLB2_MBOX_ST_VERSION_MISMATCH",
-	"DLB2_MBOX_ST_INVALID_OWNER_VF",
-};
-
-#define DLB2_MBOX_ST_TYPE(hdr) \
-	(((struct dlb2_mbox_resp_hdr *)hdr)->status)
-#define DLB2_MBOX_ST_STRING(hdr) \
-	dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
-
-/* This structure is always the first field in a request structure */
-struct dlb2_mbox_req_hdr {
-	u32 type;
-};
-
-/* This structure is always the first field in a response structure */
-struct dlb2_mbox_resp_hdr {
-	u32 status;
-};
-
-struct dlb2_mbox_register_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 min_interface_version;
-	u16 max_interface_version;
-};
-
-struct dlb2_mbox_register_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 interface_version;
-	u8 pf_id;
-	u8 vf_id;
-	u8 is_auxiliary_vf;
-	u8 primary_vf_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u16 num_sched_domains;
-	u16 num_ldb_queues;
-	u16 num_ldb_ports;
-	u16 num_cos_ldb_ports[4];
-	u16 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 max_contiguous_hist_list_entries;
-	u16 num_ldb_credits;
-	u16 num_dir_credits;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 num_ldb_queues;
-	u32 num_ldb_ports;
-	u32 num_cos_ldb_ports[4];
-	u32 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u8 cos_strict;
-	u8 padding0[3];
-	u32 padding1;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 num_sequence_numbers;
-	u32 num_qid_inflights;
-	u32 num_atomic_inflights;
-	u32 lock_id_comp_level;
-	u32 depth_threshold;
-	u32 padding;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 depth_threshold;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u16 cq_depth;
-	u16 cq_history_list_size;
-	u8 cos_id;
-	u8 cos_strict;
-	u16 padding1;
-	u64 cq_base_address;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u64 cq_base_address;
-	u16 cq_depth;
-	u16 padding0;
-	s32 queue_id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_map_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-	u32 priority;
-	u32 padding0;
-};
-
-struct dlb2_mbox_map_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_start_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-};
-
-struct dlb2_mbox_start_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 is_ldb;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding0;
-};
-
-/*
- * The alert_id and aux_alert_data follows the format of the alerts defined in
- * dlb2_types.h. The alert id contains an enum dlb2_domain_alert_id value, and
- * the aux_alert_data value varies depending on the alert.
- */
-struct dlb2_mbox_vf_alert_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 alert_id;
-	u32 aux_alert_data;
-};
-
-enum dlb2_mbox_vf_notification_type {
-	DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
-	DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
-
-	/* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
-	NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
-};
-
-struct dlb2_mbox_vf_notification_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 notification;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 in_use;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 num;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 cos_id;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 mode;
-};
-
-#endif /* __DLB2_BASE_DLB2_MBOX_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ae5ef2fc3..b57157fdc 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -5,7 +5,6 @@
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 01/27] event/dlb2: minor code cleanup Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-29  7:09       ` Jerin Jacob
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 03/27] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
                       ` (24 subsequent siblings)
  26 siblings, 1 reply; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

This commit adds dlb v2.5 probe support, and updates
parameter parsing.

The dlb v2.5 device differs from dlb v2.0 in the
number of resources (ports, queues, ...) it provides,
so macros have been added to take the device version
into account.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                  |  99 +++++++++++---
 drivers/event/dlb2/dlb2_priv.h             | 151 +++++++++++++++------
 drivers/event/dlb2/dlb2_xstats.c           |  37 ++---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  28 ++--
 drivers/event/dlb2/pf/base/dlb2_resource.c |  47 ++++---
 drivers/event/dlb2/pf/dlb2_pf.c            |  62 ++++++++-
 6 files changed, 319 insertions(+), 105 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index fb5ff012a..7f5b9141b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -59,7 +59,8 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
-	.max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
+	.max_single_link_event_port_queue_pairs =
+		DLB2_MAX_NUM_DIR_PORTS(DLB2_HW_V2),
 	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
 			  RTE_EVENT_DEV_CAP_EVENT_QOS |
 			  RTE_EVENT_DEV_CAP_BURST_MODE |
@@ -69,7 +70,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 };
 
 struct process_local_port_data
-dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
+dlb2_port[DLB2_MAX_NUM_PORTS_ALL][DLB2_NUM_PORT_TYPES];
 
 static void
 dlb2_free_qe_mem(struct dlb2_port *qm_port)
@@ -97,7 +98,7 @@ dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
 {
 	int q;
 
-	for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
+	for (q = 0; q < DLB2_MAX_NUM_QUEUES(dlb2->version); q++) {
 		if (qid_depth_thresholds[q] != 0)
 			dlb2->ev_queues[q].depth_threshold =
 				qid_depth_thresholds[q];
@@ -247,9 +248,9 @@ set_num_dir_credits(const char *key __rte_unused,
 		return ret;
 
 	if (*num_dir_credits < 0 ||
-	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
+	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
 		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
-			     DLB2_MAX_NUM_DIR_CREDITS);
+			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
 
@@ -306,7 +307,6 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
-
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -327,7 +327,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	 */
 	if (sscanf(value, "all:%d", &thresh) == 1) {
 		first = 0;
-		last = DLB2_MAX_NUM_QUEUES - 1;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2) - 1;
 	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
 		/* we have everything we need */
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
@@ -337,7 +337,56 @@ set_qid_depth_thresh(const char *key __rte_unused,
 		return -EINVAL;
 	}
 
-	if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		return -EINVAL;
+	}
+
+	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
+		return -EINVAL;
+	}
+
+	for (i = first; i <= last; i++)
+		qid_thresh->val[i] = thresh; /* indexed by qid */
+
+	return 0;
+}
+
+static int
+set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
+	int first, last, thresh, i;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	/* command line override may take one of the following 3 forms:
+	 * qid_depth_thresh=all:<threshold_value> ... all queues
+	 * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
+	 * qid_depth_thresh=qid:<threshold_value> ... just one queue
+	 */
+	if (sscanf(value, "all:%d", &thresh) == 1) {
+		first = 0;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) - 1;
+	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
+		/* we have everything we need */
+	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
+		last = first;
+	} else {
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		return -EINVAL;
+	}
+
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
 		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
 		return -EINVAL;
 	}
@@ -521,7 +570,7 @@ dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
 	for (i = 0; i < dlb2->num_queues; i++)
 		dlb2->ev_queues[i].qm_queue.config_state = config_state;
 
-	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
+	for (i = 0; i < DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5); i++)
 		dlb2->ev_queues[i].setup_done = false;
 
 	dlb2->num_ports = 0;
@@ -1453,7 +1502,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 
 	dlb2 = dlb2_pmd_priv(dev);
 
-	if (ev_port_id >= DLB2_MAX_NUM_PORTS)
+	if (ev_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 		return -EINVAL;
 
 	if (port_conf->dequeue_depth >
@@ -3895,7 +3944,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	}
 
 	/* Initialize each port's token pop mode */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++)
 		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
 
 	rte_spinlock_init(&dlb2->qm_instance.resource_lock);
@@ -3945,7 +3994,8 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
 int
 dlb2_parse_params(const char *params,
 		  const char *name,
-		  struct dlb2_devargs *dlb2_args)
+		  struct dlb2_devargs *dlb2_args,
+		  uint8_t version)
 {
 	int ret = 0;
 	static const char * const args[] = { NUMA_NODE_ARG,
@@ -3984,17 +4034,18 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(kvlist,
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(kvlist,
 					DLB2_NUM_DIR_CREDITS,
 					set_num_dir_credits,
 					&dlb2_args->num_dir_credits_override);
-			if (ret != 0) {
-				DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
-					     name);
-				rte_kvargs_free(kvlist);
-				return ret;
+				if (ret != 0) {
+					DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
+						     name);
+					rte_kvargs_free(kvlist);
+					return ret;
+				}
 			}
-
 			ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
 						 set_dev_id,
 						 &dlb2_args->dev_id);
@@ -4005,11 +4056,19 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(
 					kvlist,
 					DLB2_QID_DEPTH_THRESH_ARG,
 					set_qid_depth_thresh,
 					&dlb2_args->qid_depth_thresholds);
+			} else {
+				ret = rte_kvargs_process(
+					kvlist,
+					DLB2_QID_DEPTH_THRESH_ARG,
+					set_qid_depth_thresh_v2_5,
+					&dlb2_args->qid_depth_thresholds);
+			}
 			if (ret != 0) {
 				DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
 					     name);
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index eb1a93239..1cd78ad94 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -33,19 +33,31 @@
 
 /* Begin HW related defines and structs */
 
+#define DLB2_HW_V2 0
+#define DLB2_HW_V2_5 1
 #define DLB2_MAX_NUM_DOMAINS 32
 #define DLB2_MAX_NUM_VFS 16
 #define DLB2_MAX_NUM_LDB_QUEUES 32
 #define DLB2_MAX_NUM_LDB_PORTS 64
-#define DLB2_MAX_NUM_DIR_PORTS 64
-#define DLB2_MAX_NUM_DIR_QUEUES 64
+#define DLB2_MAX_NUM_DIR_PORTS_V2		DLB2_MAX_NUM_DIR_QUEUES_V2
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5		DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_DIR_PORTS(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_PORTS_V2 : \
+						 DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_MAX_NUM_DIR_QUEUES_V2		64 /* DIR == directed */
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5		96
+/* When needed for array sizing, the DLB 2.5 macro is used */
+#define DLB2_MAX_NUM_DIR_QUEUES(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2 : \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2_5)
 #define DLB2_MAX_NUM_FLOWS (64 * 1024)
 #define DLB2_MAX_NUM_LDB_CREDITS (8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS (2 * 1024)
+#define DLB2_MAX_NUM_DIR_CREDITS(ver)		(ver == DLB2_HW_V2 ? 4096 : 0)
+#define DLB2_MAX_NUM_CREDITS(ver)		(ver == DLB2_HW_V2 ? \
+						 0 : DLB2_MAX_NUM_LDB_CREDITS)
 #define DLB2_MAX_NUM_LDB_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_DIR_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_HIST_LIST_ENTRIES 2048
-#define DLB2_MAX_NUM_AQOS_ENTRIES 2048
 #define DLB2_MAX_NUM_QIDS_PER_LDB_CQ 8
 #define DLB2_QID_PRIORITIES 8
 #define DLB2_MAX_DEVICE_PATH 32
@@ -68,6 +80,11 @@
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_MAX_CQ_DEPTH
 
+#define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
+	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) ||        \
+	  (_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_VF))   ?   \
+		DLB2_HW_V2_5 : DLB2_HW_V2)
+
 /*
  * Static per queue/port provisioning values
  */
@@ -109,6 +126,8 @@ enum dlb2_hw_queue_types {
 	DLB2_NUM_QUEUE_TYPES /* Must be last */
 };
 
+#define DLB2_COMBINED_POOL DLB2_LDB_QUEUE
+
 #define PORT_TYPE(p) ((p)->is_directed ? DLB2_DIR_PORT : DLB2_LDB_PORT)
 
 /* Do not change - must match hardware! */
@@ -127,8 +146,15 @@ struct dlb2_hw_rsrcs {
 	uint32_t num_ldb_queues;	/* Number of available ldb queues */
 	uint32_t num_ldb_ports;         /* Number of load balanced ports */
 	uint32_t num_dir_ports;         /* Number of directed ports */
-	uint32_t num_ldb_credits;       /* Number of load balanced credits */
-	uint32_t num_dir_credits;       /* Number of directed credits */
+	union {
+		struct {
+			uint32_t num_ldb_credits; /* Number of ldb credits */
+			uint32_t num_dir_credits; /* Number of dir credits */
+		};
+		struct {
+			uint32_t num_credits; /* Number of combined credits */
+		};
+	};
 	uint32_t reorder_window_size;   /* Size of reorder window */
 };
 
@@ -292,9 +318,17 @@ struct dlb2_port {
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
-	uint16_t cached_ldb_credits;
-	uint16_t ldb_credits;
-	uint16_t cached_dir_credits;
+	union {
+		struct {
+			uint16_t cached_ldb_credits;
+			uint16_t ldb_credits;
+			uint16_t cached_dir_credits;
+		};
+		struct {
+			uint16_t cached_credits;
+			uint16_t credits;
+		};
+	};
 	bool int_armed;
 	uint16_t owed_tokens;
 	int16_t issued_releases;
@@ -325,11 +359,22 @@ struct process_local_port_data {
 
 struct dlb2_eventdev;
 
+struct dlb2_port_low_level_io_functions {
+	void (*pp_enqueue_four)(void *qe4, void *pp_addr);
+};
+
 struct dlb2_config {
 	int configured;
 	int reserved;
-	uint32_t num_ldb_credits;
-	uint32_t num_dir_credits;
+	union {
+		struct {
+			uint32_t num_ldb_credits;
+			uint32_t num_dir_credits;
+		};
+		struct {
+			uint32_t num_credits;
+		};
+	};
 	struct dlb2_create_sched_domain_args resources;
 };
 
@@ -354,10 +399,18 @@ struct dlb2_hw_dev {
 
 /* Begin DLB2 PMD Eventdev related defines and structs */
 
-#define DLB2_MAX_NUM_QUEUES \
-	(DLB2_MAX_NUM_DIR_QUEUES + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_QUEUES(ver)                                \
+	(DLB2_MAX_NUM_DIR_QUEUES(ver) + DLB2_MAX_NUM_LDB_QUEUES)
 
-#define DLB2_MAX_NUM_PORTS (DLB2_MAX_NUM_DIR_PORTS + DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_MAX_NUM_PORTS(ver) \
+	(DLB2_MAX_NUM_DIR_PORTS(ver) + DLB2_MAX_NUM_LDB_PORTS)
+
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5 DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_QUEUES_ALL \
+	(DLB2_MAX_NUM_DIR_QUEUES_V2_5 + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_PORTS_ALL \
+	(DLB2_MAX_NUM_DIR_PORTS_V2_5 + DLB2_MAX_NUM_LDB_PORTS)
 #define DLB2_MAX_INPUT_QUEUE_DEPTH 256
 
 /** Structure to hold the queue to port link establishment attributes */
@@ -377,8 +430,15 @@ struct dlb2_traffic_stats {
 	uint64_t tx_ok;
 	uint64_t total_polls;
 	uint64_t zero_polls;
-	uint64_t tx_nospc_ldb_hw_credits;
-	uint64_t tx_nospc_dir_hw_credits;
+	union {
+		struct {
+			uint64_t tx_nospc_ldb_hw_credits;
+			uint64_t tx_nospc_dir_hw_credits;
+		};
+		struct {
+			uint64_t tx_nospc_hw_credits;
+		};
+	};
 	uint64_t tx_nospc_inflight_max;
 	uint64_t tx_nospc_new_event_limit;
 	uint64_t tx_nospc_inflight_credits;
@@ -411,7 +471,7 @@ struct dlb2_port_stats {
 	uint64_t tx_invalid;
 	uint64_t rx_sched_cnt[DLB2_NUM_HW_SCHED_TYPES];
 	uint64_t rx_sched_invalid;
-	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_eventdev_port {
@@ -462,16 +522,16 @@ enum dlb2_run_state {
 };
 
 struct dlb2_eventdev {
-	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS];
-	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS_ALL];
+	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each queue */
-	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES];
-	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES];
+	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES_ALL];
+	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each port */
-	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS];
-	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS];
+	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS_ALL];
+	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS_ALL];
 	struct dlb2_get_num_resources_args hw_rsrc_query_results;
 	uint32_t xstats_count_mode_queue;
 	struct dlb2_hw_dev qm_instance; /* strictly hw related */
@@ -487,8 +547,15 @@ struct dlb2_eventdev {
 	int num_dir_credits_override;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
-	uint16_t num_dir_credits;
-	uint16_t num_ldb_credits;
+	union {
+		struct {
+			uint16_t num_dir_credits;
+			uint16_t num_ldb_credits;
+		};
+		struct {
+			uint16_t num_credits;
+		};
+	};
 	uint16_t num_queues; /* total queues */
 	uint16_t num_ldb_queues; /* total num of evdev ldb queues requested */
 	uint16_t num_ports; /* total num of evdev ports requested */
@@ -499,21 +566,28 @@ struct dlb2_eventdev {
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
 	uint8_t revision;
+	uint8_t version;
 	bool configured;
-	uint16_t max_ldb_credits;
-	uint16_t max_dir_credits;
-
-	/* force hw credit pool counters into exclusive cache lines */
-
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t ldb_credit_pool __rte_cache_aligned;
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t dir_credit_pool __rte_cache_aligned;
+	union {
+		struct {
+			uint16_t max_ldb_credits;
+			uint16_t max_dir_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t ldb_credit_pool __rte_cache_aligned;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t dir_credit_pool __rte_cache_aligned;
+		};
+		struct {
+			uint16_t max_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t credit_pool __rte_cache_aligned;
+		};
+	};
 };
 
 /* used for collecting and passing around the dev args */
 struct dlb2_qid_depth_thresholds {
-	int val[DLB2_MAX_NUM_QUEUES];
+	int val[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_devargs {
@@ -568,7 +642,8 @@ uint32_t dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
 
 int dlb2_parse_params(const char *params,
 		      const char *name,
-		      struct dlb2_devargs *dlb2_args);
+		      struct dlb2_devargs *dlb2_args,
+		      uint8_t version);
 
 /* Extern globals */
 extern struct process_local_port_data dlb2_port[][DLB2_NUM_PORT_TYPES];
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda9..b62e62060 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -95,7 +95,7 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 	int i;
 	uint64_t val = 0;
 
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
 
 		if (!port->setup_done)
@@ -269,7 +269,7 @@ dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
 	int port = 0;
 	uint64_t tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
 
 	return tally;
@@ -281,7 +281,7 @@ dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
 	int port = 0;
 	uint64_t enq_ok_tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
 
 	return enq_ok_tally;
@@ -561,8 +561,8 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	/* other vars */
 	const unsigned int count = RTE_DIM(dev_stats) +
-			DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
-			DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
+		DLB2_MAX_NUM_PORTS(dlb2->version) * RTE_DIM(port_stats) +
+		DLB2_MAX_NUM_QUEUES(dlb2->version) * RTE_DIM(qid_stats);
 	unsigned int i, port, qid, stat_id = 0;
 
 	dlb2->xstats = rte_zmalloc_socket(NULL,
@@ -583,7 +583,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	}
 	dlb2->xstats_count_mode_dev = stat_id;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++) {
 		dlb2->xstats_offset_for_port[port] = stat_id;
 
 		uint32_t count_offset = stat_id;
@@ -605,7 +605,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
 
-	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
+	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES(dlb2->version); qid++) {
 		uint32_t count_offset = stat_id;
 
 		dlb2->xstats_offset_for_qid[qid] = stat_id;
@@ -658,16 +658,15 @@ dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			break;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version) &&
+		    (DLB2_MAX_NUM_QUEUES(dlb2->version) <= 255))
 			break;
-#endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
 		break;
@@ -709,13 +708,13 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			goto invalid_value;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+#if (DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) <= 255) /* max 8 bit value */
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version))
 			goto invalid_value;
 #endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
@@ -936,12 +935,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_PORTS) {
+		} else if (queue_port_id < DLB2_MAX_NUM_PORTS(dlb2->version)) {
 			if (dlb2_xstats_reset_port(dlb2, queue_port_id,
 						   ids, nb_ids))
 				return -EINVAL;
@@ -949,12 +949,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES) {
+		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES(dlb2->version)) {
 			if (dlb2_xstats_reset_queue(dlb2, queue_port_id,
 						    ids, nb_ids))
 				return -EINVAL;
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index c7cd41f8b..b007e1674 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -12,18 +12,25 @@
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-
 #define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
 
@@ -55,7 +62,8 @@
 #define DLB2_DIR_PP_BASE       0x2000000
 #define DLB2_DIR_PP_STRIDE     0x1000
 #define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
 #define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
 
 struct dlb2_resource_id {
@@ -183,7 +191,7 @@ struct dlb2_sn_group {
 
 static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 {
-	u32 mask[] = {
+	const u32 mask[] = {
 		0x0000ffff,  /* 64 SNs per queue */
 		0x000000ff,  /* 128 SNs per queue */
 		0x0000000f,  /* 256 SNs per queue */
@@ -195,7 +203,7 @@ static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 
 static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
 {
-	u32 bound[6] = {16, 8, 4, 2, 1};
+	const u32 bound[] = {16, 8, 4, 2, 1};
 	u32 i;
 
 	for (i = 0; i < bound[group->mode]; i++) {
@@ -285,7 +293,7 @@ struct dlb2_function_resources {
 struct dlb2_hw_resources {
 	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
 	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
 	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
 };
 
@@ -302,11 +310,13 @@ struct dlb2_sw_mbox {
 };
 
 struct dlb2_hw {
+	uint8_t ver;
+
 	/* BAR 0 address */
-	void  *csr_kva;
+	void *csr_kva;
 	unsigned long csr_phys_addr;
 	/* BAR 2 address */
-	void  *func_kva;
+	void *func_kva;
 	unsigned long func_phys_addr;
 
 	/* Resource tracking */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index b57157fdc..1cb0b9f50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -211,7 +211,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 			      &port->func_list);
 	}
 
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
 		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
 
@@ -219,7 +219,9 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 	}
 
 	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
+	hw->pf.num_avail_dqed_entries =
+		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+
 	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
 
 	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
@@ -258,7 +260,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
 	}
 
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
 		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
 		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
 	}
@@ -2372,7 +2374,7 @@ static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
 	}
@@ -2505,7 +2507,8 @@ static void
 dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS;
+	int domain_offset = domain->id.phys_id *
+		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	struct dlb2_list_entry *iter;
 	struct dlb2_dir_pq_pair *queue;
 	RTE_SET_USED(iter);
@@ -2521,7 +2524,8 @@ dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
 
 		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS +
+			idx = queue->id.vdev_id *
+				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 				queue->id.virt_id;
 
 			DLB2_CSR_WR(hw,
@@ -2960,7 +2964,8 @@ __dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
+			+ virt_id;
 
 		DLB2_CSR_WR(hw,
 			    DLB2_SYS_VF_DIR_VPP2PP(offs),
@@ -4483,7 +4488,8 @@ dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 }
 
 static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(u32 id,
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
 			    bool vdev_req,
 			    struct dlb2_hw_domain *domain)
 {
@@ -4491,7 +4497,7 @@ dlb2_get_domain_used_dir_pq(u32 id,
 	struct dlb2_dir_pq_pair *port;
 	RTE_SET_USED(iter);
 
-	if (id >= DLB2_MAX_NUM_DIR_PORTS)
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
 		return NULL;
 
 	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
@@ -4537,7 +4543,8 @@ dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
 	if (args->queue_id != -1) {
 		struct dlb2_dir_pq_pair *queue;
 
-		queue = dlb2_get_domain_used_dir_pq(args->queue_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->queue_id,
 						    vdev_req,
 						    domain);
 
@@ -4617,7 +4624,7 @@ static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
 
 		r1.field.pp = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
 
@@ -4856,7 +4863,8 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
 
 	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(args->queue_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->queue_id,
 						   vdev_req,
 						   domain);
 	else
@@ -4912,7 +4920,7 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 	/* QID write permissions are turned on when the domain is started */
 	r0.field.vasqid_v = 0;
 
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES +
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
 		queue->id.phys_id;
 
 	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -4934,7 +4942,8 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
 		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES + queue->id.virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
+			+ queue->id.virt_id;
 
 		r3.field.vqid_v = 1;
 
@@ -5000,7 +5009,8 @@ dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
 	if (args->port_id != -1) {
 		struct dlb2_dir_pq_pair *port;
 
-		port = dlb2_get_domain_used_dir_pq(args->port_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->port_id,
 						   vdev_req,
 						   domain);
 
@@ -5071,7 +5081,8 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	}
 
 	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(args->port_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->port_id,
 						    vdev_req,
 						    domain);
 	else
@@ -5919,7 +5930,7 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 		r0.field.vasqid_v = 1;
 
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS +
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 			dir_queue->id.phys_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -5971,7 +5982,7 @@ int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
 
 	id = args->queue_id;
 
-	queue = dlb2_get_domain_used_dir_pq(id, vdev_req, domain);
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
 	if (queue == NULL) {
 		resp->status = DLB2_ST_INVALID_QID;
 		return -EINVAL;
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index cfb22efe8..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -47,7 +47,7 @@ dlb2_pf_low_level_io_init(void)
 {
 	int i;
 	/* Addresses will be initialized at port create */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(DLB2_HW_V2_5); i++) {
 		/* First directed ports */
 		dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
 		dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
@@ -628,6 +628,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 
 		/* Probe the DLB2 PF layer */
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
@@ -643,7 +644,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		if (pci_dev->device.devargs) {
 			ret = dlb2_parse_params(pci_dev->device.devargs->args,
 						pci_dev->device.devargs->name,
-						&dlb2_args);
+						&dlb2_args,
+						dlb2->version);
 			if (ret) {
 				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
 					     ret, rte_errno);
@@ -655,6 +657,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						  event_dlb2_pf_name,
 						  &dlb2_args);
 	} else {
+		dlb2 = dlb2_pmd_priv(eventdev);
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 		ret = dlb2_secondary_eventdev_probe(eventdev,
 						    event_dlb2_pf_name);
 	}
@@ -684,6 +688,16 @@ static const struct rte_pci_id pci_id_dlb2_map[] = {
 	},
 };
 
+static const struct rte_pci_id pci_id_dlb2_5_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
+			       PCI_DEVICE_ID_INTEL_DLB2_5_PF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
 static int
 event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 		     struct rte_pci_device *pci_dev)
@@ -718,6 +732,40 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
 
 }
 
+static int
+event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
+					    sizeof(struct dlb2_eventdev),
+					    dlb2_eventdev_pci_init,
+					    event_dlb2_pf_name);
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+}
+
+static int
+event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_remove(pci_dev, NULL);
+
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+
+}
+
 static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.id_table = pci_id_dlb2_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
@@ -725,5 +773,15 @@ static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.remove = event_dlb2_pci_remove,
 };
 
+static struct rte_pci_driver pci_eventdev_dlb2_5_pmd = {
+	.id_table = pci_id_dlb2_5_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = event_dlb2_5_pci_probe,
+	.remove = event_dlb2_5_pci_remove,
+};
+
 RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
+
+RTE_PMD_REGISTER_PCI(event_dlb2_5_pf, pci_eventdev_dlb2_5_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_5_pf, pci_id_dlb2_5_map);
-- 
2.23.0



* [dpdk-dev] [PATCH v4 03/27] event/dlb2: add v2.5 HW register definitions
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 01/27] event/dlb2: minor code cleanup Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 04/27] event/dlb2: add v2.5 HW init Timothy McDaniel
                       ` (23 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Add auto-generated register definitions, updated to
support both DLB v2.0 and v2.5 devices.
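
To give a feel for the dual-version scheme (this is an illustrative sketch, not
part of the patch): registers that moved between the two devices get a pair of
per-version offsets plus a macro that selects between them at run time. The
snippet below reuses the DLB2_SYS_TOTAL_VAS offsets from the new header; the
DLB2_HW_V2/DLB2_HW_V2_5 values and the read helper are assumed here only to
make the example self-contained.

  #include <stdint.h>

  /* Assumed version identifiers; the driver defines its own equivalents. */
  #define DLB2_HW_V2   0
  #define DLB2_HW_V2_5 1

  /* Offsets as in dlb2_regs_new.h: same register, different CSR address. */
  #define DLB2_V2SYS_TOTAL_VAS   0x1000011c
  #define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
  #define DLB2_SYS_TOTAL_VAS(ver) \
          ((ver) == DLB2_HW_V2 ? DLB2_V2SYS_TOTAL_VAS : DLB2_V2_5SYS_TOTAL_VAS)

  /* Illustrative MMIO read: the offset is picked at run time from hw->ver. */
  static inline uint32_t
  example_read_total_vas(const volatile void *csr_kva, uint8_t ver)
  {
          return *(const volatile uint32_t *)
                  ((const volatile char *)csr_kva + DLB2_SYS_TOTAL_VAS(ver));
  }

Registers whose address is unchanged across v2.0 and v2.5 keep a single
unversioned define, so only the relocated registers pay for the run-time
selection.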

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 ++++++++++++++++++++
 1 file changed, 4304 insertions(+)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
new file mode 100644
index 000000000..26c3e7f4a
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
@@ -0,0 +1,4304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_REGS_NEW_H
+#define __DLB2_REGS_NEW_H
+
+#include "dlb2_osdep_types.h"
+
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
+	(0x20 + (x) * 0x4)
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
+#define DLB2_SYS_TOTAL_VAS_RST 0x20
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
+
+#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
+#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
+
+#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
+#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
+
+#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
+#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
+
+#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
+#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
+#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
+
+#define DLB2_SYS_VF_LDB_VPP_V(x) \
+	(0x10000f00 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VPP2PP(x) \
+	(0x10000f04 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_DIR_VPP_V(x) \
+	(0x10000f08 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VPP2PP(x) \
+	(0x10000f0c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_LDB_VQID_V(x) \
+	(0x10000f10 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VQID2QID(x) \
+	(0x10000f14 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_QID2VQID(x) \
+	(0x10000f18 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID2VQID_RST 0x0
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
+
+#define DLB2_SYS_VF_DIR_VQID_V(x) \
+	(0x10000f1c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VQID2QID(x) \
+	(0x10000f20 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_VASQID_V(x) \
+	(0x10000f24 + (x) * 0x1000)
+#define DLB2_SYS_LDB_VASQID_V_RST 0x0
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_VASQID_V(x) \
+	(0x10000f28 + (x) * 0x1000)
+#define DLB2_SYS_DIR_VASQID_V_RST 0x0
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_ALARM_VF_SYND2(x) \
+	(0x10000f48 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
+
+#define DLB2_SYS_ALARM_VF_SYND1(x) \
+	(0x10000f44 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_VF_SYND0(x) \
+	(0x10000f40 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
+
+#define DLB2_SYS_LDB_QID_CFG_V(x) \
+	(0x10000f58 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_QID_ITS(x) \
+	(0x10000f54 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_ITS_RST 0x0
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_QID_V(x) \
+	(0x10000f50 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_ITS(x) \
+	(0x10000f64 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_ITS_RST 0x0
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_V(x) \
+	(0x10000f60 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_V_RST 0x0
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
+	(0x10000fa8 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
+#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_LDB_CQ_AT(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AT_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_CQ_ISR(x) \
+	(0x10000f98 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB2_CQ_ISR_MODE_DIS  0
+#define DLB2_CQ_ISR_MODE_MSI  1
+#define DLB2_CQ_ISR_MODE_MSIX 2
+#define DLB2_CQ_ISR_MODE_ADI  3
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
+	(0x10000f94 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_PP_V(x) \
+	(0x10000f90 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP_V_RST 0x0
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_PP2VDEV(x) \
+	(0x10000f8c + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_LDB_PP2VAS(x) \
+	(0x10000f88 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VAS_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
+	(0x10000f84 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
+	(0x10000f80 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_DIR_CQ_FMT(x) \
+	(0x10000fec + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
+	(0x10000fe8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
+#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_DIR_CQ_AT(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_DIR_CQ_ISR(x) \
+	(0x10000fd8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
+	(0x10000fd4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_DIR_PP_V(x) \
+	(0x10000fd0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP_V_RST 0x0
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_PP2VDEV(x) \
+	(0x10000fcc + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_DIR_PP2VAS(x) \
+	(0x10000fc8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VAS_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
+	(0x10000fc4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
+	(0x10000fc0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
+
+#define DLB2_SYS_MSIX_ACK 0x10000400
+#define DLB2_SYS_MSIX_ACK_RST 0x0
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
+#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_MODE 0x10000408
+#define DLB2_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB2_MSIX_MODE_PACKED     0
+#define DLB2_MSIX_MODE_COMPRESSED 1
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
+
+#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
+#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
+	(0x20080000 + (x) * 0x1000)
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_ATM_QID2CQIDIX_00(x) \
+	(0x30080000 + (x) * 0x1000)
+#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
+#define DLB2_ATM_QID2CQIDIX(x, y) \
+	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_ATM_QID2CQIDIX_NUM 16
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
+#define DLB2_CHP_ORD_QID_SN_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
+#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
+	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
+#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
+	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
+#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
+#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
+#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
+#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
+#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
+#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
+#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
+#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
+#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
+	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
+#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
+	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
+#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
+#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
+#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
+#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
+#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
+#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
+#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
+#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
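+
+/*
+ * Illustrative note (not part of the hardware definition): each field above
+ * is described by a mask macro plus a matching *_LOC macro giving the bit
+ * offset of the field's least significant bit. A raw register value is
+ * decoded by masking and shifting, e.g. for the v2.5 free list size:
+ *
+ *	fl_size = (diag02 & DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5) >>
+ *		  DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC;
+ *
+ * where 'diag02' is an example variable assumed to hold the value read from
+ * DLB2_CHP_CTRL_DIAG_02.
+ */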
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_DP_DIR_CSR_CTRL 0x54000010
+#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
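+
+/*
+ * Illustrative sketch (not part of the hardware definition): registers whose
+ * offset moved between DLB2 v2.0 and v2.5 are wrapped in a ver-parameterized
+ * macro that selects the per-version constant, so callers pass the detected
+ * hardware version instead of hard-coding an offset. Assuming a CSR write
+ * helper such as the driver's DLB2_CSR_WR(hw, reg, val):
+ *
+ *	DLB2_CSR_WR(hw, DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(hw->ver),
+ *		    DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST);
+ *
+ * writes the documented reset value to the correct offset on either
+ * hardware version.
+ */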
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
+	(0x96000000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
+	(0x96010000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
+#define DLB2_LSP_CQ2PRIOV_RST 0x0
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
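+
+/*
+ * Illustrative sketch (not part of the hardware definition): per-CQ registers
+ * take both the hardware version and the CQ index. A read-modify-write of the
+ * priority field for CQ 'cq' could look like the following, assuming the
+ * driver's DLB2_CSR_RD/DLB2_CSR_WR helpers ('prio' is a caller-supplied
+ * example value):
+ *
+ *	reg = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, cq));
+ *	reg &= ~DLB2_LSP_CQ2PRIOV_PRIO;
+ *	reg |= (prio << DLB2_LSP_CQ2PRIOV_PRIO_LOC) & DLB2_LSP_CQ2PRIOV_PRIO;
+ *	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, cq), reg);
+ */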
+
+#define DLB2_V2LSP_CQ2QID0(x) \
+	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
+#define DLB2_LSP_CQ2QID0_RST 0x0
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
+	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
+#define DLB2_LSP_CQ2QID1_RST 0x0
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
+	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
+#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
+	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
+	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
+#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
+	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
+	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
+	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
+	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
+	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
+#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
+	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
+#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
+	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
+#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
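+
+/*
+ * Illustrative note (not part of the hardware definition): the QID2CQIDIX
+ * table has DLB2_LSP_QID2CQIDIX_NUM (16) copies per QID, spaced 0x80000
+ * apart, so slot 'y' of queue 'x' is addressed as
+ * DLB2_LSP_QID2CQIDIX(ver, x, y). For example, assuming a CSR read helper
+ * such as the driver's DLB2_CSR_RD(hw, reg):
+ *
+ *	for (y = 0; y < DLB2_LSP_QID2CQIDIX_NUM; y++)
+ *		cq_map[y] = DLB2_CSR_RD(hw,
+ *					DLB2_LSP_QID2CQIDIX(hw->ver, qid, y));
+ *
+ * where 'cq_map' and 'qid' are example variables.
+ */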
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
+	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
+#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
+	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
+#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
+#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
+#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
+#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
+#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
+#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
+#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
+	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
+
+#endif /* __DLB2_REGS_NEW_H */
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 04/27] event/dlb2: add v2.5 HW init
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (2 preceding siblings ...)
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 03/27] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 05/27] event/dlb2: add v2.5 get resources Timothy McDaniel
                       ` (22 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

This commit adds support for DLB v2.5 probe-time hardware init,
and sets up a framework for incorporating the remaining
changes required to support DLB v2.5.

DLB v2.0 and DLB v2.5 are similar in many respects, but their
register offsets and definitions are different. As a result of these
differences, the low level hardware functions must take the device
version into consideration. This requires that the hardware version be
passed to many of the low level functions, so that the PMD can
take the appropriate action based on the device version.
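
For example, a low level routine only needs the device version, since
the register macros added in dlb2_regs_new.h resolve to the proper
v2.0 or v2.5 offset. The following is an illustrative sketch only (the
function name is made up); it mirrors the dlb2_clr_pmcsr_disable()
pattern used in this patch:

	/*
	 * Sketch: version-aware register access. The DLB2_*(ver) macros
	 * expand to the v2.0 or v2.5 offset; the field masks are shared
	 * by both device versions.
	 */
	static void example_clr_pmcsr(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
	{
		u32 pmcsr_dis;

		pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));

		/* Clear the disable bit via the shared field mask */
		DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);

		DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
	}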

To ease the transition and keep the individual patches small, three
temporary files are added in this commit. These files have "new"
in their names.  The files with "new" contain changes specific to a
consolidated PMD that supports both DLB v2.0 and DLB 2.5. Their sister
files of the same name (minus "new") contain the old DLB v2.0 specific
code. The intent is to remove code from the original files as that code
is ported to the combined DLB 2.0/2.5 PMD model and added to the "new"
files in a series of commits. At the end of the patch series, the old files
will be empty and the "new" files will have the logic needed
to implement a single PMD that supports both DLB v2.0 and DLB v2.5.
At that time, the original DLB v2.0 specific files will be deleted,
and the "new" files will be renamed and replace them.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_priv.h                |   5 +
 drivers/event/dlb2/meson.build                |   1 +
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 356 ++++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |   4 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 180 +--------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |  36 --
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 259 +++++++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.h    |  73 ++++
 drivers/event/dlb2/pf/dlb2_main.c             |  41 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 +
 drivers/event/dlb2/pf/dlb2_pf.c               |   6 +-
 11 files changed, 735 insertions(+), 230 deletions(-)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 1cd78ad94..f3a9fe0aa 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -114,6 +114,11 @@
 #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
 #define DLB2_TO_EV_PRIO(x) ((x) << 5)
 
+enum dlb2_hw_ver {
+	DLB2_HW_VER_2,
+	DLB2_HW_VER_2_5,
+};
+
 enum dlb2_hw_port_types {
 	DLB2_LDB_PORT,
 	DLB2_DIR_PORT,
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index f22638b8e..bded07e06 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,6 +14,7 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
+		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
new file mode 100644
index 000000000..4a4185acd
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
+
+#include "../../dlb2_priv.h"
+#include "dlb2_user.h"
+
+#include "dlb2_osdep_list.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
+
+#define DLB2_MAX_NUM_VDEVS			16
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
+#define DLB2_MAX_WEIGHT				255
+#define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
+#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
+#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
+#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
+#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
+
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
+#define DLB2_ALARM_HW_SOURCE_SYS 0
+#define DLB2_ALARM_HW_SOURCE_DLB 1
+
+#define DLB2_ALARM_HW_UNIT_CHP 4
+
+#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
+#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
+#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
+#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
+#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
+
+/*
+ * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
+ * the PF driver.
+ */
+#define DLB2_DRV_LDB_PP_BASE   0x2300000
+#define DLB2_DRV_LDB_PP_STRIDE 0x1000
+#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
+				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_DRV_DIR_PP_BASE   0x2200000
+#define DLB2_DRV_DIR_PP_STRIDE 0x1000
+#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
+				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+#define DLB2_LDB_PP_BASE       0x2100000
+#define DLB2_LDB_PP_STRIDE     0x1000
+#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
+				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
+#define DLB2_DIR_PP_BASE       0x2000000
+#define DLB2_DIR_PP_STRIDE     0x1000
+#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
+
+struct dlb2_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vdev_owned;
+	u8 vdev_id;
+};
+
+struct dlb2_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
+{
+	return list->bound - list->base - list->offset;
+}
+
+struct dlb2_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 ts_flag:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb2_ldb_queue {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 num_qid_inflights;
+	u32 aqed_limit;
+	u32 sn_group; /* sn == sequence number */
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 sn_cfg_valid;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/*
+ * Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb2_dir_pq_pair {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+};
+
+enum dlb2_qid_map_state {
+	/* The slot does not contain a valid queue mapping */
+	DLB2_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB2_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB2_QUEUE_MAP_IN_PROG,
+	/* The driver is unmapping a queue from this slot */
+	DLB2_QUEUE_UNMAP_IN_PROG,
+	/*
+	 * The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
+};
+
+struct dlb2_ldb_port_qid_map {
+	enum dlb2_qid_map_state state;
+	u16 qid;
+	u16 pending_qid;
+	u8 priority;
+	u8 pending_priority;
+};
+
+struct dlb2_ldb_port {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb2_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
+{
+	const u32 mask[] = {
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
+{
+	const u32 bound[] = {16, 8, 4, 2, 1};
+	u32 i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void
+dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb2_hw_domain {
+	struct dlb2_function_resources *parent_func;
+	struct dlb2_list_entry func_list;
+	struct dlb2_list_head used_ldb_queues;
+	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head used_dir_pq_pairs;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u32 num_used_aqed_entries;
+	struct dlb2_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb2_bitmap;
+
+struct dlb2_function_resources {
+	struct dlb2_list_head avail_domains;
+	struct dlb2_list_head used_domains;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	struct dlb2_bitmap *avail_hist_list_entries;
+	u32 num_avail_domains;
+	u32 num_avail_ldb_queues;
+	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	u32 num_avail_dir_pq_pairs;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u8 locked; /* (VDEV only) */
+};
+
+/*
+ * After initialization, each resource in dlb2_hw_resources is located in one
+ * of the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a dlb2 scheduling domain.
+ * -- A VDEV's available resources list. These are VDEV-owned unconfigured
+ *	resources not allocated to a dlb2 scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VDEV or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb2_hw_resources {
+	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
+	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
+	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
+
+struct dlb2_mbox {
+	u32 *mbox;
+	u32 *isr_in_progress;
+};
+
+struct dlb2_sw_mbox {
+	struct dlb2_mbox vdev_to_pf;
+	struct dlb2_mbox pf_to_vdev;
+	void (*pf_to_vdev_inject)(void *arg);
+	void *pf_to_vdev_inject_arg;
+};
+
+struct dlb2_hw {
+	uint8_t ver;
+
+	/* BAR 0 address */
+	void *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb2_hw_resources rsrcs;
+	struct dlb2_function_resources pf;
+	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
+	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
+	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
+
+	/* Virtualization */
+	int virt_mode;
+	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
+	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
+};
+
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index aa101a49a..3b0ca84ba 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -16,7 +16,11 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
+
+/* TEMPORARY inclusion of both headers for merge */
+#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
+
 #include "../../dlb2_log.h"
 #include "../../dlb2_user.h"
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1cb0b9f50..7ba6521ef 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -47,19 +47,6 @@ static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
 }
 
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -130,171 +117,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-int dlb2_resource_init(struct dlb2_hw *hw)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. This is application
-	 * dependent, but the driver interleaves port IDs as much as possible
-	 * to reduce the likelihood of this. This initial allocation maximizes
-	 * the average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	/* Zero-out resource tracking data structures */
-	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
-	memset(&hw->pf, 0, sizeof(hw->pf));
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries =
-		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
-{
-	union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
-
-	r0.field.disable = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
-}
-
 static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
@@ -5876,7 +5698,7 @@ static void dlb2_log_start_domain(struct dlb2_hw *hw,
 int
 dlb2_hw_start_domain(struct dlb2_hw *hw,
 		     u32 domain_id,
-		     __attribute((unused)) struct dlb2_start_domain_args *arg,
+		     struct dlb2_start_domain_args *arg,
 		     struct dlb2_cmd_response *resp,
 		     bool vdev_req,
 		     unsigned int vdev_id)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 503fdf317..2e13193bb 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -6,35 +6,8 @@
 #define __DLB2_RESOURCE_H
 
 #include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
 #include "dlb2_osdep_types.h"
 
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
@@ -1485,15 +1458,6 @@ int dlb2_notify_vf(struct dlb2_hw *hw,
  */
 int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
 
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
-
 /**
  * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
new file mode 100644
index 000000000..175b0799e
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "dlb2_user.h"
+
+#include "dlb2_hw_types_new.h"
+#include "dlb2_osdep.h"
+#include "dlb2_osdep_bitmap.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+
+#include "../../dlb2_priv.h"
+#include "../../dlb2_inline_fns.h"
+
+#define DLB2_DOM_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, domain_list)
+
+#define DLB2_FUNC_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, func_list)
+
+#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
new file mode 100644
index 000000000..51f31543c
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_RESOURCE_NEW_H
+#define __DLB2_RESOURCE_NEW_H
+
+#include "dlb2_user.h"
+#include "dlb2_osdep_types.h"
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
+#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a9d407f2f..5c0640b3c 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,9 +13,12 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_resource.h"
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "base/dlb2_regs_new.h"
+#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_resource_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_regs.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
 #include "../dlb2_priv.h"
@@ -103,25 +106,34 @@ dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
 
 static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
 {
-	dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
+	int version;
+	version = DLB2_HW_DEVICE_FROM_PCI_ID(dlb2_dev->pdev);
+
+	dlb2_clr_pmcsr_disable(&dlb2_dev->hw, version);
 }
 
 #define DLB2_READY_RETRY_LIMIT 1000
-static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
+static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
+					 int dlb_version)
 {
 	u32 retries = 0;
 
 	/* Allow at least 1s for the device to become active after power-on */
 	for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
-		union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
-		union dlb2_cfg_mstr_cfg_pm_status pm_st;
+		u32 idle_val;
+		u32 idle_dlb_func_idle;
+		u32 pm_st_val;
+		u32 pm_st_pmsm;
 		u32 addr;
 
-		addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
-		pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
-		idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
+		addr = DLB2_CM_CFG_PM_STATUS(dlb_version);
+		pm_st_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		addr = DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(dlb_version);
+		idle_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		idle_dlb_func_idle = idle_val &
+			DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE;
+		pm_st_pmsm = pm_st_val & DLB2_CM_CFG_PM_STATUS_PMSM;
+		if (pm_st_pmsm && idle_dlb_func_idle)
 			break;
 
 		rte_delay_ms(1);
@@ -141,6 +153,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 {
 	struct dlb2_dev *dlb2_dev;
 	int ret = 0;
+	int dlb_version = 0;
 
 	DLB2_INFO(dlb2_dev, "probe\n");
 
@@ -152,6 +165,8 @@ dlb2_probe(struct rte_pci_device *pdev)
 		goto dlb2_dev_malloc_fail;
 	}
 
+	dlb_version = DLB2_HW_DEVICE_FROM_PCI_ID(pdev);
+
 	/* PCI Bus driver has already mapped bar space into process.
 	 * Save off our IO register and FUNC addresses.
 	 */
@@ -191,7 +206,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	 */
 	dlb2_pf_enable_pm(dlb2_dev);
 
-	ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
+	ret = dlb2_pf_wait_for_device_ready(dlb2_dev, dlb_version);
 	if (ret)
 		goto wait_for_device_ready_fail;
 
@@ -203,7 +218,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
-	ret = dlb2_resource_init(&dlb2_dev->hw);
+	ret = dlb2_resource_init(&dlb2_dev->hw, dlb_version);
 	if (ret)
 		goto resource_init_fail;
 
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a..892298d7a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,7 +12,11 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
+#ifdef DLB2_USE_NEW_HEADERS
+#include "base/dlb2_hw_types_new.h"
+#else
 #include "base/dlb2_hw_types.h"
+#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index f57dc1584..1e815f20d 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,15 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types.h"
+#include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource.h"
+#include "base/dlb2_resource_new.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0
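
As an aside on the patch above: the init_ldb_port_allocation table in
dlb2_resource_init() implements the port-ID interleaving described in
the comment. One way to see the effect is to measure how far apart, in
ID space, two consecutively allocated ports end up. The self-contained
sketch below (not driver code; it only copies the table) shows that
adjacent entries always differ by at least 7, so ports handed out back
to back are never numerically adjacent.

#include <stdio.h>
#include <stdlib.h>

#define NUM_PORTS 64

/* Copy of init_ldb_port_allocation from dlb2_resource_init() above. */
static const unsigned char alloc_order[NUM_PORTS] = {
	0,  7, 14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
	16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
	32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
	48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
};

int main(void)
{
	int i, gap, min_gap = NUM_PORTS;

	/* Smallest ID distance between two consecutively allocated ports. */
	for (i = 1; i < NUM_PORTS; i++) {
		gap = abs((int)alloc_order[i] - (int)alloc_order[i - 1]);
		if (gap < min_gap)
			min_gap = gap;
	}

	/* Prints 7 for the table above. */
	printf("min ID gap between consecutive allocations: %d\n", min_gap);

	return 0;
}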


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 05/27] event/dlb2: add v2.5 get resources
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (3 preceding siblings ...)
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 04/27] event/dlb2: add v2.5 HW init Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 06/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
                       ` (21 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a new credit scheme in which directed and load-balanced
credits are drawn from a single, combined pool, rather than from the
separate directed and load-balanced credit pools used by DLB v2.0.
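
To make the reporting change concrete, here is a minimal,
self-contained C sketch (not driver code) of how a caller might read
the credit counts out of the query results: v2.0 reports the directed
and load-balanced pools separately, while v2.5 reports one combined
count. The struct below is a simplified stand-in for the anonymous
union this patch adds to dlb2_get_num_resources_args in dlb2_user.h.

#include <stdint.h>
#include <stdio.h>

enum hw_ver { HW_V2, HW_V2_5 };		/* stand-in for enum dlb2_hw_ver */

/* Simplified stand-in for the credit fields of
 * struct dlb2_get_num_resources_args.
 */
struct num_resources {
	union {
		struct {		/* DLB v2.0: two pools */
			uint32_t num_ldb_credits;
			uint32_t num_dir_credits;
		};
		struct {		/* DLB v2.5: one combined pool */
			uint32_t num_credits;
		};
	};
};

/* Mirrors the max_num_events selection in dlb2_hw_query_resources(). */
static uint32_t max_num_events(enum hw_ver ver, const struct num_resources *r)
{
	if (ver == HW_V2_5)
		return r->num_credits;

	return r->num_ldb_credits;	/* v2.0 uses the LDB credit pool */
}

int main(void)
{
	struct num_resources r = {
		.num_ldb_credits = 8192,
		.num_dir_credits = 2048,
	};

	printf("v2.0 max events: %u\n", max_num_events(HW_V2, &r));
	return 0;
}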

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                     | 20 ++++--
 drivers/event/dlb2/dlb2_user.h                | 14 +++-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 48 --------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 66 +++++++++++++++++++
 4 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 7f5b9141b..0048f6a1b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -132,17 +132,25 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	evdev_dlb2_default_info.max_event_ports =
 		dlb2->hw_rsrc_query_results.num_ldb_ports;
 
-	evdev_dlb2_default_info.max_num_events =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	/* Save off values used when creating the scheduling domain. */
 
 	handle->info.num_sched_domains =
 		dlb2->hw_rsrc_query_results.num_sched_domains;
 
-	handle->info.hw_rsrc_max.nb_events_limit =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	handle->info.hw_rsrc_max.num_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues +
 		dlb2->hw_rsrc_query_results.num_dir_ports;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index f4bda7822..b7d125dec 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -195,9 +195,12 @@ struct dlb2_create_sched_domain_args {
  *	contiguous range of history list entries.
  * - num_ldb_credits: Amount of available load-balanced QE storage.
  * - num_dir_credits: Amount of available directed QE storage.
+ * - response.status: Detailed error code. In certain cases, such as if the
+ *	ioctl request arg is invalid, the driver won't set status.
  */
 struct dlb2_get_num_resources_args {
 	/* Output parameters */
+	struct dlb2_cmd_response response;
 	__u32 num_sched_domains;
 	__u32 num_ldb_queues;
 	__u32 num_ldb_ports;
@@ -206,8 +209,15 @@ struct dlb2_get_num_resources_args {
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
 	__u32 max_contiguous_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 };
 
 /*
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 7ba6521ef..eda983d85 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -58,54 +58,6 @@ void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-
-	arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-
-	return 0;
-}
-
 void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 175b0799e..14b97dbf9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -257,3 +257,69 @@ void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
 	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
 }
 
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 06/27] event/dlb2: add v2.5 create sched domain
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (4 preceding siblings ...)
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 05/27] event/dlb2: add v2.5 get resources Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 07/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
                       ` (20 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the domain creation logic to account for the
DLB v2.5 credit scheme, the new combined register map,
and the new register access macros.
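
The register access macros referred to here (DLB2_BITS_SET,
DLB2_CSR_WR, and so on) are used throughout the rewritten code below.
As a rough sketch of the pattern only -- not the driver's actual macro
definitions -- a "set field under mask" helper plus the per-version
credit programming could look like the following. The masks, register
addresses, and helper names are placeholders; the real ones come from
the new combined register header.

#include <stdint.h>

enum hw_ver { HW_V2, HW_V2_5 };

/* Placeholder field masks -- not the real DLB2_CHP_CFG_*_VAS_CRD masks. */
#define LDB_VAS_CRD_COUNT_MASK	0x00007fffu
#define DIR_VAS_CRD_COUNT_MASK	0x00003fffu

/* Sketch of a DLB2_BITS_SET-style helper: place val into the field
 * selected by a contiguous mask. The real macro may differ.
 */
static inline void set_bits(uint32_t *reg, uint32_t val, uint32_t mask)
{
	*reg = (*reg & ~mask) | ((val << __builtin_ctz(mask)) & mask);
}

/* Placeholder CSR write; the driver uses DLB2_CSR_WR(hw, addr, val). */
static void csr_write(uint32_t addr, uint32_t val)
{
	(void)addr;
	(void)val;
}

/* Shape of dlb2_configure_domain_credits() in this patch: v2.0 programs
 * separate LDB and DIR per-domain credit registers, v2.5 programs a
 * single combined one. The addresses are placeholders.
 */
static void configure_domain_credits(enum hw_ver ver, uint32_t dom_id,
				     uint32_t ldb, uint32_t dir,
				     uint32_t combined)
{
	uint32_t reg = 0;

	if (ver == HW_V2) {
		set_bits(&reg, ldb, LDB_VAS_CRD_COUNT_MASK);
		csr_write(0x1000 + dom_id, reg);

		reg = 0;
		set_bits(&reg, dir, DIR_VAS_CRD_COUNT_MASK);
		csr_write(0x2000 + dom_id, reg);
	} else {
		set_bits(&reg, combined, LDB_VAS_CRD_COUNT_MASK);
		csr_write(0x3000 + dom_id, reg);
	}
}

int main(void)
{
	configure_domain_credits(HW_V2, 0, 8192, 2048, 0);
	configure_domain_credits(HW_V2_5, 0, 0, 0, 16384);
	return 0;
}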

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_user.h                |  13 +-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++
 3 files changed, 707 insertions(+), 647 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index b7d125dec..9760e9bda 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -18,6 +18,7 @@ enum dlb2_error {
 	DLB2_ST_LDB_QUEUES_UNAVAILABLE,
 	DLB2_ST_LDB_CREDITS_UNAVAILABLE,
 	DLB2_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB2_ST_CREDITS_UNAVAILABLE,
 	DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
 	DLB2_ST_INVALID_DOMAIN_ID,
 	DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
@@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
 	"DLB2_ST_LDB_QUEUES_UNAVAILABLE",
 	"DLB2_ST_LDB_CREDITS_UNAVAILABLE",
 	"DLB2_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB2_ST_CREDITS_UNAVAILABLE",
 	"DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
 	"DLB2_ST_INVALID_DOMAIN_ID",
 	"DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
@@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
 	__u32 num_dir_ports;
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 	__u8 cos_strict;
 	__u8 padding1[3];
 };
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index eda983d85..99c3d031d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,21 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -69,636 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
-	union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
-
-	r0.field.count = domain->num_ldb_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
-
-	r1.field.count = domain->num_dir_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret < 0)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_credits(rsrcs,
-				      domain,
-				      args->num_ldb_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_credits(rsrcs,
-				      domain,
-				      args->num_dir_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret < 0)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-		    args->num_ldb_credits);
-	DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-		    args->num_dir_credits);
-}
-
-/**
- * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
- *	domain and its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
-	if (ret)
-		return ret;
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available domains\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (domain->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_domains contains configured domains.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 14b97dbf9..8f97dd865 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -323,3 +323,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
 	}
 	return 0;
 }
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from specific class-of-service, but
+		 * fallback to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to verify args.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
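
For orientation, a minimal caller sketch of the interface documented above. This is not part of the patch: the exact dlb2_create_sched_domain_args layout lives in dlb2_user.h (not shown here), the resource counts are purely illustrative, and the example assumes a PF-originated request.

	/* Hypothetical caller sketch -- illustrative values only. */
	static int example_create_domain(struct dlb2_hw *hw)
	{
		struct dlb2_create_sched_domain_args args = {0};
		struct dlb2_cmd_response resp = {0};
		int ret;

		/* Illustrative resource counts; real values come from the PMD. */
		args.num_ldb_queues = 2;
		args.num_ldb_ports = 4;		/* any CoS */
		args.num_dir_ports = 2;
		args.num_atomic_inflights = 64;
		args.num_hist_list_entries = 256;
		args.num_ldb_credits = 1024;	/* DLB v2.0: split credit pools */
		args.num_dir_credits = 512;
		/* A DLB v2.5 caller would set args.num_credits (combined pool). */

		/* PF request: vdev_req = false, vdev_id is unused. */
		ret = dlb2_hw_create_sched_domain(hw, &args, &resp, false, 0);
		if (ret) {
			/* resp.status holds a dlb2_error code, except for EFAULT. */
			return ret;
		}

		/* resp.id is the physical domain ID for PF requests. */
		return (int)resp.id;
	}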
-- 
2.23.0



* [dpdk-dev] [PATCH v4 07/27] event/dlb2: add v2.5 domain reset
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (5 preceding siblings ...)
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 06/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
@ 2021-04-15  1:48     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 08/27] event/dlb2: add v2.5 create ldb queue Timothy McDaniel
                       ` (19 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:48 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Reset hardware registers, consumer queues, ports,
interrupts, and software state. Queues must also be
drained as part of the reset process.

The logic is very similar to the v2.0 implementation,
but the new combined register map for v2.0 and v2.5
uses new register and bit names. Additionally, new
register access macros are used so that the code
performs the correct action for the hardware version
in use, v2.0 or v2.5.
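
The order of operations is unchanged from the v2.0
dlb2_reset_domain() that this patch removes from
dlb2_resource.c; a condensed sketch of that sequence
(helper names as in the driver, error handling omitted)
for orientation:

	/* Condensed sketch of the domain reset sequence. */
	if (vdev_req) {
		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
	}
	dlb2_domain_disable_dir_port_interrupts(hw, domain);
	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
	dlb2_domain_disable_ldb_seq_checks(hw, domain);

	/* Drain LDB CQs so pending map/unmap procedures can finish. */
	dlb2_domain_disable_ldb_cqs(hw, domain);
	dlb2_domain_drain_ldb_cqs(hw, domain, false);
	dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
	dlb2_domain_finish_unmap_qid_procedures(hw, domain);
	dlb2_domain_finish_map_qid_procedures(hw, domain);

	/* Re-enable CQs to drain the queues, then quiesce everything. */
	dlb2_domain_enable_ldb_cqs(hw, domain);
	dlb2_domain_drain_mapped_queues(hw, domain);
	dlb2_domain_drain_unmapped_queues(hw, domain);
	dlb2_domain_disable_ldb_cqs(hw, domain);
	dlb2_domain_drain_dir_queues(hw, domain);
	dlb2_domain_disable_dir_cqs(hw, domain);
	dlb2_domain_disable_dir_producer_ports(hw, domain);
	dlb2_domain_disable_ldb_producer_ports(hw, domain);

	dlb2_domain_verify_reset_success(hw, domain);
	dlb2_domain_reset_registers(hw, domain);      /* hardware state */
	dlb2_domain_reset_software_state(hw, domain); /* software state */

The v2.5 version in dlb2_resource_new.c follows the same
order; only the register names and access macros differ.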

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |    1 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1494 ----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 2562 +++++++++++++++++
 3 files changed, 2563 insertions(+), 1494 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
index 4a4185acd..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -181,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 99c3d031d..041aeaeee 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,69 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			     struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
 static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_dir_pq_pair *port)
 {
@@ -140,37 +77,6 @@ static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	int ret;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		ret = dlb2_drain_dir_cq(hw, port);
-		if (ret < 0)
-			return ret;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -182,63 +88,6 @@ static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count;
 }
 
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -271,105 +120,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-
-	return r0.field.count;
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.token_count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
-static int dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			ret = dlb2_drain_ldb_cq(hw, port);
-			if (ret < 0)
-				return ret;
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-
-	return 0;
-}
-
 static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_ldb_queue *queue)
 {
@@ -388,90 +138,6 @@ static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count + r1.field.count + r2.field.count;
 }
 
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1455,1166 +1121,6 @@ dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
 	return domain->num_pending_removals;
 }
 
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_dir_vpp_v r1;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_ldb_vpp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_ldb_cq_int_enb r0 = { {0} };
-	union dlb2_chp_ldb_cq_wd_enb r1 = { {0} };
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-				    r0.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
-				    r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_dir_cq_int_enb r0 = { {0} };
-	union dlb2_chp_dir_cq_wd_enb r1 = { {0} };
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-			    r0.val);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
-			    r1.val);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		union dlb2_sys_ldb_qid2vqid r1 = { {0} };
-		union dlb2_sys_vf_ldb_vqid_v r2 = { {0} };
-		union dlb2_sys_vf_ldb_vqid2qid r3 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    r1.val);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID_V(idx),
-				    r2.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID2QID(idx),
-				    r3.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id *
-		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		union dlb2_sys_vf_dir_vqid_v r1 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r2 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id *
-				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID_V(idx),
-				    r1.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID2QID(idx),
-				    r2.val);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_sn_chk_enbl r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.en = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int i;
-
-			for (i = 0; i < DLB2_MAX_CQ_COMP_CHECK_LOOPS; i++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (i == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	union dlb2_sys_dir_pp_v r1;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    r1.val);
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_ldb_pp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queue's inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
-			+ virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_PIPE_GRP_0_SLT_SHFT(queue->sn_slot);
-			offs[1] = DLB2_RO_PIPE_GRP_1_SLT_SHFT(queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-	domain->num_ldb_credits = 0;
-
-	rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-	domain->num_dir_credits = 0;
-
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (!dlb2_list_empty(&domain->used_ldb_ports[i]))
-			break;
-	}
-
-	if (i == DLB2_NUM_COS_DOMAINS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i], typeof(*port));
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - Reset a DLB scheduling domain and its associated
- *	hardware resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Note: User software *must* stop sending to this domain's producer ports
- * before invoking this function, otherwise undefined behavior will result.
- *
- * Return: returns < 0 on error, 0 otherwise.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain  == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, false);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	ret = dlb2_domain_reset_software_state(hw, domain);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
 unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
 {
 	int i, num = 0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8f97dd865..641812412 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -34,6 +34,17 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function only need be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
 static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 {
 	int i;
@@ -1019,3 +1030,2554 @@ int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to go empty, they would
+	 * have scheduled one or more QEs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to go empty, they would
+	 * have scheduled one or more QEs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply: they only touch CQs that have a MAPPED slot
+ * for the queue, and only if the port is currently enabled. They should
+ * only be called by the dynamic CQ mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
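+/*
+ * Each load-balanced CQ has DLB2_MAX_NUM_QIDS_PER_LDB_CQ (8) QID mapping
+ * slots. Slots 0-3 are programmed through LSP_CQ2QID0 and slots 4-7 through
+ * LSP_CQ2QID1, while the per-slot valid and priority bits live in
+ * LSP_CQ2PRIOV.
+ */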
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Static map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules to cause the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
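+ *
+ * A return of 1 means the map was deferred because the queue still has
+ * inflight events; it is completed later (see
+ * dlb2_domain_finish_map_qid_procedures()) once the inflight count is zero.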
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
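+/*
+ * Complete any of this port's MAP_IN_PROG slots whose queue's inflight
+ * count has dropped to zero.
+ */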
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0 */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+}
+
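+/*
+ * Return the domain's queues, ports, history list entries, and credits to
+ * its parent function and mark the domain as unconfigured.
+ */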
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
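+/*
+ * A queue with no CQ mappings cannot be drained by the scheduler.
+ * Temporarily map it to one of the domain's load-balanced ports (unmapping
+ * an existing QID first if that CQ's slots are all in use), then drain the
+ * mapped queues.
+ */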
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0/2.5 scheduling domain and its
+ * associated resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
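+ * The reset sequence drains the domain's load-balanced and directed CQs and
+ * queues, completes any in-progress QID map/unmap operations, and then
+ * restores the domain's registers and software state to their reset values.
+ *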
+ * Return:
+ * Returns 0 upon success, a negative error code otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the pre-condition above is
+ *	    not met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 08/27] event/dlb2: add v2.5 create ldb queue
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (6 preceding siblings ...)
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 07/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 09/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
                       ` (18 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Updated the low-level hardware functions related to configuring
load balanced queues. These functions create the queues and
attach the related resources that load balanced queues require,
such as sequence numbers.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
can perform the correct action based on the hardware
version, v2.0 or v2.5.
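
For illustration only, the version-aware access pattern is conceptually
similar to the sketch below; the EXAMPLE_* macro and offset names are
placeholders rather than the actual definitions from the new register map:

    /* Hypothetical sketch -- EXAMPLE_* names are illustrative only. */
    #define EXAMPLE_QID_LDB_INFL_LIM(ver, qid) \
            (((ver) == DLB2_HW_V2) ? EXAMPLE_V2_OFFSET(qid) : \
                                     EXAMPLE_V2_5_OFFSET(qid))

    /* Callers pass hw->ver, so one code path serves both devices: */
    DLB2_CSR_WR(hw, EXAMPLE_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id), 0);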

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 397 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 391 +++++++++++++++++
 2 files changed, 391 insertions(+), 397 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 041aeaeee..f8b85bc57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1149,403 +1149,6 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 	return num;
 }
 
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_vf_ldb_vqid_v r0 = { {0} };
-	union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} };
-	union dlb2_sys_ldb_qid2vqid r2 = { {0} };
-	union dlb2_sys_ldb_vasqid_v r3 = { {0} };
-	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };
-	union dlb2_lsp_qid_aqed_active_lim r5 = { {0} };
-	union dlb2_aqed_pipe_qid_hid_width r6 = { {0} };
-	union dlb2_sys_ldb_qid_its r7 = { {0} };
-	union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} };
-	union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} };
-	union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} };
-	union dlb2_chp_ord_qid_sn_map r11 = { {0} };
-	union dlb2_sys_ldb_qid_cfg_v r12 = { {0} };
-	union dlb2_sys_ldb_qid_v r13 = { {0} };
-
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r3.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	r4.field.limit = args->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
-
-	r5.field.limit = queue->aqed_limit;
-
-	if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES)
-		r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
-		    r5.val);
-
-	switch (args->lock_id_comp_level) {
-	case 64:
-		r6.field.compress_code = 1;
-		break;
-	case 128:
-		r6.field.compress_code = 2;
-		break;
-	case 256:
-		r6.field.compress_code = 3;
-		break;
-	case 512:
-		r6.field.compress_code = 4;
-		break;
-	case 1024:
-		r6.field.compress_code = 5;
-		break;
-	case 2048:
-		r6.field.compress_code = 6;
-		break;
-	case 4096:
-		r6.field.compress_code = 7;
-		break;
-	case 0:
-	case 65536:
-		r6.field.compress_code = 0;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id),
-		    r6.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r7.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_QID_ITS(queue->id.phys_id),
-		    r7.val);
-
-	r8.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id),
-		    r8.val);
-
-	r9.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id),
-		    r9.val);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue doesn't use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	r10.field.qid_fid_limit = 512;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
-		    r10.val);
-
-	/* Configure SNs */
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	r11.field.mode = sn_group->mode;
-	r11.field.slot = queue->sn_slot;
-	r11.field.grp  = sn_group->id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
-
-	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
-	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		r0.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val);
-
-		r1.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val);
-
-		r2.field.vqid = queue->id.virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-			    r2.val);
-	}
-
-	r13.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (dlb2_list_empty(&domain->avail_ldb_queues)) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-/**
- * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 641812412..b52d2becd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3581,3 +3581,394 @@ int dlb2_reset_domain(struct dlb2_hw *hw,
 	/* Hardware reset complete. Reset the domain's software state */
 	return dlb2_domain_reset_software_state(hw, domain);
 }
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or the requested queue name is
+ *	    already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 09/27] event/dlb2: add v2.5 create ldb port
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (7 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 08/27] event/dlb2: add v2.5 create ldb queue Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 10/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
                       ` (17 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
creating load-balanced ports. These functions create the
producer port (PP), configure the consumer queue (CQ), and
validate the port creation arguments.

The logic is very similar to the v2.0 code, but the new
combined register map for v2.0 and v2.5 uses new register
names and bit names. Additionally, new register access
macros are used so that the code performs the correct
action for the hardware version in use, v2.0 or v2.5.
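
One detail worth noting in the CQ configuration hunks below
is the depth encoding: the (power-of-two) CQ depth is mapped
onto a token_depth_select code, with every depth up to 8
sharing code 1 and each doubling above that adding one;
depths below 8 additionally pre-load the CQ token count
(init_tkn_cnt = 8 - depth). The helper below is an
illustrative sketch of that mapping, not part of the driver:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative helper reproducing the depth -> token_depth_select table
 * used by the LDB CQ configuration code; not part of the driver. */
static int cq_token_depth_select(uint32_t cq_depth)
{
	int sel = 1;	/* depths 1, 2, 4 and 8 all map to select code 1 */
	uint32_t d;

	/* valid depths are powers of two from 1 to 1024 */
	assert(cq_depth >= 1 && cq_depth <= 1024 &&
	       (cq_depth & (cq_depth - 1)) == 0);

	for (d = 16; d <= cq_depth; d <<= 1)
		sel++;	/* 16 -> 2, 32 -> 3, ..., 1024 -> 8 */

	return sel;
}

int main(void)
{
	uint32_t depth = 4;

	/* Depths below 8 also pre-load the token count so the CQ behaves
	 * as if it were 8 deep: init_tkn_cnt = 8 - depth. */
	printf("depth %u: token_depth_select=%d, init_tkn_cnt=%u\n",
	       depth, cq_token_depth_select(depth), 8 - depth);
	return 0;
}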

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 490 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 471 +++++++++++++++++
 2 files changed, 471 insertions(+), 490 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f8b85bc57..45d096eec 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1216,496 +1216,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_pp2vas r0 = { {0} };
-	union dlb2_sys_ldb_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_ldb_vpp2pp r1 = { {0} };
-		union dlb2_sys_ldb_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_ldb_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_cq_addr_l r0 = { {0} };
-	union dlb2_sys_ldb_cq_addr_u r1 = { {0} };
-	union dlb2_sys_ldb_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_ldb_tkn_depth_sel r4 = { {0} };
-	union dlb2_chp_hist_list_lim r5 = { {0} };
-	union dlb2_chp_hist_list_base r6 = { {0} };
-	union dlb2_lsp_cq_ldb_infl_lim r7 = { {0} };
-	union dlb2_chp_hist_list_push_ptr r8 = { {0} };
-	union dlb2_chp_hist_list_pop_ptr r9 = { {0} };
-	union dlb2_sys_ldb_cq_at r10 = { {0} };
-	union dlb2_sys_ldb_cq_pasid r11 = { {0} };
-	union dlb2_chp_ldb_cq2vas r12 = { {0} };
-	union dlb2_lsp_cq2priov r13 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_ldb_tkn_cnt r14 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r14.field.token_count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    r14.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	r5.field.limit = port->hist_list_entry_limit - 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(port->id.phys_id), r5.val);
-
-	r6.field.base = port->hist_list_entry_base;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_BASE(port->id.phys_id), r6.val);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	r7.field.limit = args->cq_history_list_size;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r7.val);
-
-	r8.field.push_ptr = r6.field.base;
-	r8.field.generation = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    r8.val);
-
-	r9.field.pop_ptr = r6.field.base;
-	r9.field.generation = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(port->id.phys_id), r12.val);
-
-	/* Disable the port's QID mappings */
-	r13.field.v = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r13.val);
-
-	return 0;
-}
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret < 0)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		if (dlb2_list_empty(&domain->avail_ldb_ports[args->cos_id])) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			if (!dlb2_list_empty(&domain->avail_ldb_ports[i]))
-				break;
-		}
-
-		if (i == DLB2_NUM_COS_DOMAINS) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-/**
- * dlb2_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
- *	its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id, i;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->cos_strict) {
-		cos_id = args->cos_id;
-
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[cos_id],
-					  typeof(*port));
-	} else {
-		int idx;
-
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			idx = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[idx],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-
-		cos_id = idx;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (port->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void
 dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 			      u32 domain_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index b52d2becd..2eb39e23d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3972,3 +3972,474 @@ int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 10/27] event/dlb2: add v2.5 create dir port
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (8 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 09/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 11/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
                       ` (16 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
creating directed ports. These functions create the
producer port (PP), configure the consumer queue (CQ),
configure the queue depth, and validate the port creation
arguments.

The logic is very similar to the v2.0 code, but the new
combined register map for v2.0 and v2.5 uses new register
names and bit names. Additionally, new register access
macros are used so that the code performs the correct
action for the hardware version in use, v2.0 or v2.5.
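
The directed producer port setup removed in the hunk below
carries a comment explaining that the device derives the
producer port ID from bits 17:12 of the producer port
address, which is why, in Scalable IOV mode, the virtual and
physical port IDs are treated as equal for translation: PP
accesses come through the PF MMIO window for the physical
port. A tiny sketch of that address-to-ID relationship
follows; the helper, the 4 KiB spacing and the example
offset are assumptions for illustration, not driver code.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: extract a producer port ID from bits 17:12 of an
 * offset into the producer port MMIO window. */
static unsigned int pp_id_from_offset(uint64_t pp_mmio_offset)
{
	return (unsigned int)((pp_mmio_offset >> 12) & 0x3F);	/* bits 17:12 */
}

int main(void)
{
	/* Hypothetical offset: if each PP occupies a 4 KiB page, PP 5 sits
	 * at 5 * 0x1000 within the window. */
	uint64_t offs = 5 * 0x1000;

	printf("producer port ID = %u\n", pp_id_from_offset(offs));
	return 0;
}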

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 426 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 414 +++++++++++++++++
 2 files changed, 414 insertions(+), 426 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 45d096eec..70c52e908 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,18 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -1216,25 +1204,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
 static struct dlb2_dir_pq_pair *
 dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 			    u32 id,
@@ -1256,401 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the queue is already configured, validate
-	 * the queue ID, its domain, and whether the queue is configured.
-	 */
-	if (args->queue_id != -1) {
-		struct dlb2_dir_pq_pair *queue;
-
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->queue_id,
-						    vdev_req,
-						    domain);
-
-		if (queue == NULL || queue->domain_id.phys_id !=
-				domain->id.phys_id ||
-				!queue->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the port's queue is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->queue_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_dir_pp2vas r0 = { {0} };
-	union dlb2_sys_dir_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vpp2pp r1 = { {0} };
-		union dlb2_sys_dir_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_dir_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_dir_cq_addr_l r0 = { {0} };
-	union dlb2_sys_dir_cq_addr_u r1 = { {0} };
-	union dlb2_sys_dir_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_dir_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
-	union dlb2_sys_dir_cq_fmt r9 = { {0} };
-	union dlb2_sys_dir_cq_at r10 = { {0} };
-	union dlb2_sys_dir_cq_pasid r11 = { {0} };
-	union dlb2_chp_dir_cq2vas r12 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_dir_tkn_cnt r13 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r13.field.count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    r13.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.disable_wb_opt = 0;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	r9.field.keep_pf_ppid = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(port->id.phys_id), r12.val);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret < 0)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - Allocate and initialize a DLB directed port
- *	and queue. The port/queue pair have the same ID and name.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->queue_id,
-						   vdev_req,
-						   domain);
-	else
-		port = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					  typeof(*port));
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 				     struct dlb2_hw_domain *domain,
 				     struct dlb2_dir_pq_pair *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 2eb39e23d..4e4b390dd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4443,3 +4443,417 @@ int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 11/27] event/dlb2: add v2.5 create dir queue
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (9 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 10/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 12/27] event/dlb2: add v2.5 map qid Timothy McDaniel
                       ` (15 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
creating directed queues. These functions configure
the depth threshold and queue depth, and validate
the queue creation arguments.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.
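
As a purely illustrative aside (not part of this patch), the
combined register map appears to pair each field FOO with a FOO
mask and a FOO_LOC bit offset, and the register-offset macros now
take hw->ver so one code path can drive both hardware versions.
A minimal sketch of that pattern, using hypothetical EXAMPLE_*
names (the real definitions live in the new register header):

	/* Hypothetical field following the FOO / FOO_LOC pattern. */
	#define EXAMPLE_THRESH		0x00003FFF	/* assumed mask */
	#define EXAMPLE_THRESH_LOC	0		/* assumed bit offset */

	/* Assumed shape of a BITS_SET-style macro: shift, then mask. */
	#define EXAMPLE_BITS_SET(reg, val, mask) \
		((reg) |= (((val) << (mask##_LOC)) & (mask)))

	static void example_write_depth_thresh(struct dlb2_hw *hw,
					       u32 qid, u32 thresh)
	{
		u32 reg = 0;

		EXAMPLE_BITS_SET(reg, thresh, EXAMPLE_THRESH);

		/* Offset macros take hw->ver to pick the v2.0/v2.5 layout. */
		DLB2_CSR_WR(hw,
			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, qid),
			    reg);
	}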

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 70c52e908..362deadfe 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,219 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-				domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 4e4b390dd..d4b401250 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4857,3 +4857,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 12/27] event/dlb2: add v2.5 map qid
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (10 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 11/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 13/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
                       ` (14 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
mapping queues to ports. These functions also validate
the map arguments and verify that the number of queues
linked to a load-balanced port does not exceed the
hardware's per-port limit.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.
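
Purely as a usage illustration (not code from this patch), a
hypothetical PF-level caller of the new dlb2_hw_map_qid() could
look like the sketch below; the helper name is made up, and error
propagation and locking follow whatever the surrounding PMD code
already does:

	/* Hypothetical caller: link one load-balanced queue to a port. */
	static int example_link_ldb_qid(struct dlb2_hw *hw, u32 domain_id,
					u32 port_id, u32 qid, u8 prio)
	{
		struct dlb2_map_qid_args args = {0};
		struct dlb2_cmd_response resp = {0};
		int ret;

		args.port_id = port_id;
		args.qid = qid;
		args.priority = prio;	/* must be < DLB2_QID_PRIORITIES */

		/* PF-originated request: vdev_req = false, vdev_id unused. */
		ret = dlb2_hw_map_qid(hw, domain_id, &args, &resp, false, 0);
		if (ret)
			return ret;	/* resp.status holds a dlb2_error code */

		/*
		 * A zero return means the request was accepted; if the
		 * port's slots are busy, the map may still complete
		 * asynchronously (see the dlb2_hw_map_qid() comment below).
		 */
		return 0;
	}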

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 355 ---------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 418 ++++++++++++++++++
 2 files changed, 418 insertions(+), 355 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 362deadfe..d59df5e39 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,68 +1245,6 @@ dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
 }
 
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	union dlb2_lsp_cq2priov r0;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id));
-
-	r0.field.v |= 1 << slot;
-	r0.field.prio |= (args->priority & 0x7) << slot * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1355,299 +1293,6 @@ dlb2_get_domain_used_ldb_port(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	struct dlb2_ldb_queue *queue;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i, id;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state st;
-
-			if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-				DLB2_HW_ERR(hw,
-					    "[%s():%d] Internal error: port slot tracking failed\n",
-					    __func__, __LINE__);
-				return -EFAULT;
-			}
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
 			       u32 domain_id,
 			       struct dlb2_unmap_qid_args *args,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index d4b401250..5277a2643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5058,3 +5058,421 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	return 0;
 }
 
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 13/27] event/dlb2: add v2.5 unmap queue
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (11 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 12/27] event/dlb2: add v2.5 map qid Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 14/27] event/dlb2: add v2.5 start domain Timothy McDaniel
                       ` (13 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
removing the linkage between a queue and a load-balanced
port. Runtime checks are performed on the port and queue
to make sure their state is appropriate for the unmap
operation, and the unmap arguments are also validated.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
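
One structural change worth noting in this patch: the rewritten
dlb2_verify_unmap_qid_args() returns the resolved domain, port and
queue through out-parameters, so dlb2_hw_unmap_qid() no longer repeats
the lookups (and the NULL checks) after validation. A minimal sketch of
that pattern, using hypothetical toy types rather than the driver's
structures:

#include <errno.h>
#include <stddef.h>

/* Toy objects standing in for the driver's domain and port structures. */
struct toy_domain { int configured; };
struct toy_port { int configured; };

static struct toy_domain toy_domain0 = { 1 };
static struct toy_port toy_port0 = { 1 };

static struct toy_domain *toy_lookup_domain(int id)
{
	return (id == 0) ? &toy_domain0 : NULL;
}

static struct toy_port *toy_lookup_port(int id)
{
	return (id == 0) ? &toy_port0 : NULL;
}

/* Validate the request and hand the resolved objects back to the caller. */
static int toy_verify_unmap_args(int domain_id, int port_id,
				 struct toy_domain **out_domain,
				 struct toy_port **out_port)
{
	struct toy_domain *domain = toy_lookup_domain(domain_id);
	struct toy_port *port;

	if (domain == NULL || !domain->configured)
		return -EINVAL;

	port = toy_lookup_port(port_id);
	if (port == NULL || !port->configured)
		return -EINVAL;

	*out_domain = domain;
	*out_port = port;

	return 0;
}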

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 331 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 298 ++++++++++++++++
 2 files changed, 298 insertions(+), 331 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d59df5e39..ab5b080c1 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,26 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1265,317 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-	}
-
-	return NULL;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		return 0;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-}
-
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret, id;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
 static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 struct dlb2_cmd_response *resp,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 5277a2643..181922fe3 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5476,3 +5476,301 @@ int dlb2_hw_map_qid(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 14/27] event/dlb2: add v2.5 start domain
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (12 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 13/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 15/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
                       ` (12 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
starting the scheduling domain. Once a domain is
started, its resources can no longer be configured,
except for QID remapping and port enable/disable.
The start-domain arguments are validated, and an error
is returned if validation fails, if the domain is not
configured, or if it has already been started.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
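
The structure of the start path is unchanged: for every queue the
domain owns, set the VAS queue-ID valid bit, write it to that queue's
CSR offset, flush with a CSR read, and only then mark the domain
started. A compressed sketch of that flow with toy register helpers
(the names below are illustrative, not the driver's macros):

#include <stdint.h>

#define TOY_QUEUES_PER_DOMAIN 4

/* Toy CSR space standing in for DLB2_CSR_WR()/DLB2_CSR_RD(). */
static uint32_t toy_csr[256];

static void toy_csr_wr(uint32_t offs, uint32_t val) { toy_csr[offs] = val; }
static uint32_t toy_csr_rd(uint32_t offs) { return toy_csr[offs]; }

/* Enable queue write permission for each owned queue, then start. */
static void toy_start_domain(uint32_t domain_id, int *started)
{
	uint32_t q;

	for (q = 0; q < TOY_QUEUES_PER_DOMAIN; q++) {
		uint32_t vasqid_v = 0;
		/* toy offset; assumes domain_id keeps it within toy_csr[] */
		uint32_t offs = domain_id * TOY_QUEUES_PER_DOMAIN + q;

		vasqid_v |= 1u; /* the "VASQID valid" bit */

		toy_csr_wr(offs, vasqid_v);
	}

	(void)toy_csr_rd(0); /* read-back stands in for dlb2_flush_csr() */

	*started = 1;
}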

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 123 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 130 ++++++++++++++++++
 2 files changed, 130 insertions(+), 123 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ab5b080c1..1e66ebf50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,129 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - Lock the domain configuration
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @arg: User-provided arguments (unused, here for ioctl callback template).
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *arg,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(arg);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 u32 queue_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 181922fe3..e806a60ac 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5774,3 +5774,133 @@ int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 15/27] event/dlb2: add v2.5 credit scheme
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (13 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 14/27] event/dlb2: add v2.5 start domain Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 16/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
                       ` (11 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

DLB v2.5 uses a different credit scheme than DLB v2.0.
Specifically, there is a single credit pool for both load-balanced
and directed traffic, instead of the separate pool for each that
DLB v2.0 uses.
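
In the fast path this reduces to one branch on the device version:
v2.0 ports draw from the load-balanced or directed pool, while v2.5
ports always draw from the single combined pool. A simplified
standalone sketch of that selection (toy fields, not the driver's
structures):

#include <stdint.h>

enum toy_hw_ver { TOY_HW_V2, TOY_HW_V2_5 };

struct toy_qm_port {
	enum toy_hw_ver version;
	int is_directed;
	uint16_t cached_ldb_credits;	/* v2.0: load-balanced pool */
	uint16_t cached_dir_credits;	/* v2.0: directed pool */
	uint16_t cached_credits;	/* v2.5: combined pool */
};

/* Pick the credit cache an enqueue on this port should draw from. */
static uint16_t *toy_pick_credit_cache(struct toy_qm_port *p)
{
	if (p->version == TOY_HW_V2_5)
		return &p->cached_credits;

	return p->is_directed ? &p->cached_dir_credits
			      : &p->cached_ldb_credits;
}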

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c | 311 ++++++++++++++++++++++++++------------
 1 file changed, 212 insertions(+), 99 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 0048f6a1b..cc6495b76 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -436,8 +436,13 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	 */
 	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
 	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
-	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_ldb_credits;
+	}
 	evdev_dlb2_default_info.max_event_queues =
 		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
 			RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -451,7 +456,8 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 
 static int
 dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
-			    const struct dlb2_hw_rsrcs *resources_asked)
+			    const struct dlb2_hw_rsrcs *resources_asked,
+			    uint8_t device_version)
 {
 	int ret = 0;
 	struct dlb2_create_sched_domain_args *cfg;
@@ -468,8 +474,10 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	/* DIR ports and queues */
 
 	cfg->num_dir_ports = resources_asked->num_dir_ports;
-
-	cfg->num_dir_credits = resources_asked->num_dir_credits;
+	if (device_version == DLB2_HW_V2_5)
+		cfg->num_credits = resources_asked->num_credits;
+	else
+		cfg->num_dir_credits = resources_asked->num_dir_credits;
 
 	/* LDB queues */
 
@@ -509,8 +517,8 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 		break;
 	}
 
-	cfg->num_ldb_credits =
-		resources_asked->num_ldb_credits;
+	if (device_version == DLB2_HW_V2)
+		cfg->num_ldb_credits = resources_asked->num_ldb_credits;
 
 	cfg->num_atomic_inflights =
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
@@ -519,14 +527,24 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
 		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
 
-	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
-		     cfg->num_ldb_queues,
-		     resources_asked->num_ldb_ports,
-		     cfg->num_dir_ports,
-		     cfg->num_atomic_inflights,
-		     cfg->num_hist_list_entries,
-		     cfg->num_ldb_credits,
-		     cfg->num_dir_credits);
+	if (device_version == DLB2_HW_V2_5) {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_credits);
+	} else {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_ldb_credits,
+			     cfg->num_dir_credits);
+	}
 
 	/* Configure the QM */
 
@@ -606,7 +624,6 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	 */
 	if (dlb2->configured) {
 		dlb2_hw_reset_sched_domain(dev, true);
-
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
 			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
@@ -665,20 +682,26 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	/* 1 dir queue per dir port */
 	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;
 
-	/* Scale down nb_events_limit by 4 for directed credits, since there
-	 * are 4x as many load-balanced credits.
-	 */
-	rsrcs->num_ldb_credits = 0;
-	rsrcs->num_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		rsrcs->num_credits = 0;
+		if (rsrcs->num_ldb_queues || rsrcs->num_dir_ports)
+			rsrcs->num_credits = config->nb_events_limit;
+	} else {
+		/* Scale down nb_events_limit by 4 for directed credits,
+		 * since there are 4x as many load-balanced credits.
+		 */
+		rsrcs->num_ldb_credits = 0;
+		rsrcs->num_dir_credits = 0;
 
-	if (rsrcs->num_ldb_queues)
-		rsrcs->num_ldb_credits = config->nb_events_limit;
-	if (rsrcs->num_dir_ports)
-		rsrcs->num_dir_credits = config->nb_events_limit / 4;
-	if (dlb2->num_dir_credits_override != -1)
-		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+		if (rsrcs->num_ldb_queues)
+			rsrcs->num_ldb_credits = config->nb_events_limit;
+		if (rsrcs->num_dir_ports)
+			rsrcs->num_dir_credits = config->nb_events_limit / 4;
+		if (dlb2->num_dir_credits_override != -1)
+			rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+	}
 
-	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
+	if (dlb2_hw_create_sched_domain(handle, rsrcs, dlb2->version) < 0) {
 		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
 		return -ENODEV;
 	}
@@ -693,10 +716,15 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
 	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
 	dlb2->num_dir_queues = dlb2->num_dir_ports;
-	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
-	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
-	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
-	dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		dlb2->credit_pool = rsrcs->num_credits;
+		dlb2->max_credits = rsrcs->num_credits;
+	} else {
+		dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
+		dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
+		dlb2->dir_credit_pool = rsrcs->num_dir_credits;
+		dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	}
 
 	dlb2->configured = true;
 
@@ -1170,8 +1198,9 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (handle == NULL)
 		return -EINVAL;
@@ -1206,15 +1235,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* If there are no directed ports, the kernel driver will ignore this
-	 * port's directed credit settings. Don't use enqueue_depth if it would
-	 * require more directed credits than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* If there are no directed ports, the kernel driver will
+		 * ignore this port's directed credit settings. Don't use
+		 * enqueue_depth if it would require more directed credits
+		 * than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1249,8 +1281,12 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1298,17 +1334,26 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     qm_port->ldb_credits,
-		     qm_port->dir_credits);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->ldb_credits,
+			     qm_port->dir_credits);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->credits);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -1356,8 +1401,9 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (dlb2 == NULL || handle == NULL)
 		return -EINVAL;
@@ -1386,14 +1432,16 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* Don't use enqueue_depth if it would require more directed credits
-	 * than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* Don't use enqueue_depth if it would require more directed
+		 * credits than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1430,8 +1478,12 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1467,17 +1519,26 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     dir_credit_high_watermark,
-		     ldb_credit_high_watermark);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     dir_credit_high_watermark,
+			     ldb_credit_high_watermark);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     credit_high_watermark);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -2297,6 +2358,24 @@ dlb2_check_enqueue_hw_dir_credits(struct dlb2_port *qm_port)
 	return 0;
 }
 
+static inline int
+dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port)
+{
+	if (unlikely(qm_port->cached_credits == 0)) {
+		qm_port->cached_credits =
+			dlb2_port_credits_get(qm_port,
+					      DLB2_COMBINED_POOL);
+		if (unlikely(qm_port->cached_credits == 0)) {
+			DLB2_INC_STAT(
+			qm_port->ev_port->stats.traffic.tx_nospc_hw_credits, 1);
+			DLB2_LOG_DBG("credits exhausted\n");
+			return 1; /* credits exhausted */
+		}
+	}
+
+	return 0;
+}
+
 static __rte_always_inline void
 dlb2_pp_write(struct dlb2_enqueue_qe *qe4,
 	      struct process_local_port_data *port_data)
@@ -2565,12 +2644,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	if (!qm_queue->is_directed) {
 		/* Load balanced destination queue */
 
-		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_ldb_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_ldb_credits;
-
 		switch (ev->sched_type) {
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -2602,12 +2688,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	} else {
 		/* Directed destination queue */
 
-		if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_dir_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_dir_credits;
-
 		DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_DIRECTED\n");
 
 		*sched_type = DLB2_SCHED_DIRECTED;
@@ -2891,20 +2984,40 @@ dlb2_port_credits_inc(struct dlb2_port *qm_port, int num)
 
 	/* increment port credits, and return to pool if exceeds threshold */
 	if (!qm_port->is_directed) {
-		qm_port->cached_ldb_credits += num;
-		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_ldb_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_ldb_credits += num;
+			if (qm_port->cached_ldb_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_LDB_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_ldb_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	} else {
-		qm_port->cached_dir_credits += num;
-		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_dir_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_dir_credits += num;
+			if (qm_port->cached_dir_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_DIR_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_dir_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	}
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 16/27] event/dlb2: add v2.5 queue depth functions
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (14 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 15/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 17/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
                       ` (10 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level hardware functions responsible for
getting the queue depth. The command arguments are also
validated.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
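
For reference, the load-balanced queue depth is not a single counter:
as the removed v2.0 helper shows, it is the sum of the AQED active
count, the atomic active count, and the enqueue count. A toy sketch of
that summation, reading from a stub structure instead of the real
LSP_QID_* CSRs:

#include <stdint.h>

/* Stub counters standing in for the three per-queue CSR reads. */
struct toy_ldb_queue_cnts {
	uint32_t aqed_active_cnt;
	uint32_t atm_active_cnt;
	uint32_t enqueue_cnt;
};

/* LDB queue depth = sum of the three per-queue hardware counters. */
static uint32_t toy_ldb_queue_depth(const struct toy_ldb_queue_cnts *c)
{
	return c->aqed_active_cnt + c->atm_active_cnt + c->enqueue_cnt;
}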

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 160 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 +++++++++++++++
 2 files changed, 135 insertions(+), 160 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1e66ebf50..8c1d8c782 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,17 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -108,24 +97,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1204,134 +1175,3 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index e806a60ac..6a5af0c1e 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5904,3 +5904,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 17/27] event/dlb2: add v2.5 finish map/unmap
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (15 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 16/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 18/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
                       ` (9 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
finishing the queue map/unmap operations, which are
asynchronous.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
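
To illustrate the register-access change, a short sketch (not part of the
patch; it reuses macros that appear later in this series, and the wrapper
function is hypothetical):

        static void example_register_access(struct dlb2_hw *hw)
        {
                u32 ctrl;

                /* Old, v2.0-only map: fixed offset, union bitfields: */
                /*     DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);           */

                /* Combined v2.0/v2.5 map: the register macro takes the
                 * device version so the access targets the correct offset
                 * for the hardware in use.
                 */
                DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));

                /* Bits are set via mask macros rather than union fields */
                ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
                DLB2_BIT_SET(ctrl,
                             DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
                DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
        }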

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1054 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    |   50 +
 2 files changed, 50 insertions(+), 1054 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 8c1d8c782..f05f750f5 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -54,1060 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
-			if (queue->id.virt_id == id)
-				return queue;
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
-		if (queue->id.virt_id == id)
-			return queue;
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration)
-		if (domain->id.virt_id == id)
-			return domain;
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 0;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 1;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_lsp_cq2qid0 r1;
-	union dlb2_atm_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix_00 r3;
-	union dlb2_lsp_qid2cqidix2_00 r4;
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id));
-
-	r0.field.v |= 1 << i;
-	r0.field.prio |= (priority & 0x7) << i * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(p->id.phys_id));
-	else
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		r1.field.qid_p0 = q->id.phys_id;
-	if (i == 1 || i == 5)
-		r1.field.qid_p1 = q->id.phys_id;
-	if (i == 2 || i == 6)
-		r1.field.qid_p2 = q->id.phys_id;
-	if (i == 3 || i == 7)
-		r1.field.qid_p3 = q->id.phys_id;
-
-	if (i < 4)
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID0(p->id.phys_id), r1.val);
-	else
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID1(p->id.phys_id), r1.val);
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r4.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		r2.field.cq_p0 |= 1 << i;
-		r3.field.cq_p0 |= 1 << i;
-		r4.field.cq_p0 |= 1 << i;
-		break;
-
-	case 1:
-		r2.field.cq_p1 |= 1 << i;
-		r3.field.cq_p1 |= 1 << i;
-		r4.field.cq_p1 |= 1 << i;
-		break;
-
-	case 2:
-		r2.field.cq_p2 |= 1 << i;
-		r3.field.cq_p2 |= 1 << i;
-		r4.field.cq_p2 |= 1 << i;
-		break;
-
-	case 3:
-		r2.field.cq_p3 |= 1 << i;
-		r3.field.cq_p3 |= 1 << i;
-		r4.field.cq_p3 |= 1 << i;
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r3.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(q->id.phys_id, p->id.phys_id / 4),
-		    r4.val);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r1;
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	/* Set the atomic scheduling haswork bit */
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.rlist_haswork_v = r0.field.count > 0;
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.nalb_haswork_v = (r1.field.count > 0);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.rlist_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.nalb_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_ldb_infl_lim r0 = { {0} };
-
-	r0.field.limit = queue->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_lsp_qid_ldb_infl_cnt r0;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules to cause the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	union dlb2_lsp_qid_ldb_infl_cnt r0 = { {0} };
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		union dlb2_lsp_qid_ldb_infl_cnt r0;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count)
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_atm_qid2cqidix_00 r1;
-	union dlb2_lsp_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix2_00 r3;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port_id));
-
-	r0.field.v &= ~(1 << i);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port_id), r0.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		r1.field.cq_p0 &= ~(1 << i);
-		r2.field.cq_p0 &= ~(1 << i);
-		r3.field.cq_p0 &= ~(1 << i);
-		break;
-
-	case 1:
-		r1.field.cq_p1 &= ~(1 << i);
-		r2.field.cq_p1 &= ~(1 << i);
-		r3.field.cq_p1 &= ~(1 << i);
-		break;
-
-	case 2:
-		r1.field.cq_p2 &= ~(1 << i);
-		r2.field.cq_p2 &= ~(1 << i);
-		r3.field.cq_p2 &= ~(1 << i);
-		break;
-
-	case 3:
-		r1.field.cq_p3 &= ~(1 << i);
-		r2.field.cq_p3 &= ~(1 << i);
-		r3.field.cq_p3 &= ~(1 << i);
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4),
-		    r1.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4),
-		    r3.val);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it wasn't manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-	if (r0.field.count > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 6a5af0c1e..8cd1762cf 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6039,3 +6039,53 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
-- 
2.23.0



* [dpdk-dev] [PATCH v4 18/27] event/dlb2: add v2.5 sparse cq mode
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (16 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 17/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 19/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
                       ` (8 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions responsible for
configuring sparse CQ mode, in which each cache line
of a CQ contains just one QE instead of four.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
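
A minimal caller-side sketch (the wrapper name is hypothetical, not part of
the patch): both hooks below must run during PF initialization, before any
scheduling domain is configured:

        static void example_enable_sparse_cq_mode(struct dlb2_hw *hw)
        {
                /* one QE per 64B cache line for load-balanced CQs */
                dlb2_hw_enable_sparse_ldb_cq_mode(hw);

                /* one QE per 64B cache line for directed CQs */
                dlb2_hw_enable_sparse_dir_cq_mode(hw);
        }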

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0



* [dpdk-dev] [PATCH v4 19/27] event/dlb2: add v2.5 sequence number management
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (17 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 18/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 20/27] event/dlb2: use new implementation of resource header Timothy McDaniel
                       ` (7 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the low-level HW functions that perform sequence number
management. These include getting a group's number of sequence
numbers per queue, managing in-use slots, getting the current
occupancy, and setting the number of sequence numbers for a group.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware version.
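
A caller-side sketch (the helper name is hypothetical, not part of the patch)
of the reconfiguration flow these entry points support; a group can only be
changed while no ordered queue is using it, and only the allocations accepted
by dlb2_set_group_sequence_numbers() are valid:

        static int example_config_sn_group(struct dlb2_hw *hw, u32 group_id,
                                           u32 sns_per_queue)
        {
                int ret;

                /* the group is locked once an ordered LDB queue uses it */
                if (dlb2_get_group_sequence_number_occupancy(hw, group_id) > 0)
                        return -EPERM;

                /* sns_per_queue must be one of 64, 128, 256, 512 or 1024 */
                ret = dlb2_set_group_sequence_numbers(hw, group_id,
                                                      sns_per_queue);
                if (ret)
                        return ret;

                /* confirm the new per-queue sequence number allocation */
                return dlb2_get_group_sequence_numbers(hw, group_id);
        }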

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  67 -----------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   4 +-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 105 ++++++++++++++++++
 3 files changed, 107 insertions(+), 69 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d53cce643..e8a9d52f6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,70 +32,3 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 2e13193bb..00a0b6b57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -792,8 +792,8 @@ int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
  * ordered queue is configured.
  */
 int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val);
+				    u32 group_id,
+				    u32 val);
 
 /**
  * dlb2_reset_domain() - reset a scheduling domain
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 0f18bfeff..927b65568 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6128,3 +6128,108 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
 }
 
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's occupancy.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
-- 
2.23.0



* [dpdk-dev] [PATCH v4 20/27] event/dlb2: use new implementation of resource header
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (18 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 19/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 21/27] event/dlb2: use new implementation of resource file Timothy McDaniel
                       ` (6 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

A temporary version of dlb2_resource.h (dlb2_resource_new.h) was used
by the previous commits in this patch series. Merge the two files
now that DLB v2.5 support has been fully added to dlb2_resource.c.
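
A sketch (hypothetical probe/remove wrappers, not part of the patch) of the
lifecycle of the entry points whose kernel-doc moves into dlb2_resource.h
below; the relative ordering shown is illustrative:

        static int example_probe(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
        {
                /* power on the bulk of the DLB logic at initialization */
                dlb2_clr_pmcsr_disable(hw, ver);

                /* set up software state and global scheduling QoS registers */
                return dlb2_resource_init(hw, ver);
        }

        static void example_remove(struct dlb2_hw *hw)
        {
                /* free software state on device reset or driver unload */
                dlb2_resource_free(hw);
        }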

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |  2 -
 drivers/event/dlb2/pf/base/dlb2_resource.h    | 36 +++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  2 +-
 .../event/dlb2/pf/base/dlb2_resource_new.h    | 73 -------------------
 drivers/event/dlb2/pf/dlb2_main.c             |  2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |  2 +-
 6 files changed, 39 insertions(+), 78 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index 3b0ca84ba..cffe22f3c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -17,8 +17,6 @@
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
 
-/* TEMPORARY inclusion of both headers for merge */
-#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_log.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 00a0b6b57..684049cd6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -8,6 +8,42 @@
 #include "dlb2_user.h"
 #include "dlb2_osdep_types.h"
 
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 927b65568..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -11,7 +11,7 @@
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
 #include "dlb2_regs_new.h"
-#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+#include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
 #include "../../dlb2_inline_fns.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
deleted file mode 100644
index 51f31543c..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_RESOURCE_NEW_H
-#define __DLB2_RESOURCE_NEW_H
-
-#include "dlb2_user.h"
-#include "dlb2_osdep_types.h"
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
-#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 5c0640b3c..bac07f097 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -17,7 +17,7 @@
 
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 1e815f20d..880964a29 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -40,7 +40,7 @@
 #include "dlb2_main.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0



* [dpdk-dev] [PATCH v4 21/27] event/dlb2: use new implementation of resource file
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (19 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 20/27] event/dlb2: use new implementation of resource header Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 22/27] event/dlb2: use new implementation of HW types header Timothy McDaniel
                       ` (5 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, and
the original implementation was removed from dlb2_resource.c by
the previous commits, so rename dlb2_resource_new.c to
dlb2_resource.c and update the meson build file so that the
renamed file is built.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build                |    1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 6205 +++++++++++++++-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 6235 -----------------
 3 files changed, 6203 insertions(+), 6238 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index bded07e06..f22638b8e 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -14,7 +14,6 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
-		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index e8a9d52f6..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,13 +2,15 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types.h"
+#include "dlb2_hw_types_new.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
+#include "dlb2_regs_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
@@ -32,3 +34,6202 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function only need be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
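+		/*
+		 * Note: the shift below is equivalent to i / 16, assuming
+		 * DLB2_MAX_NUM_LDB_PORTS is 64 and DLB2_NUM_COS_DOMAINS is 4
+		 * (as the allocation table above implies), since ports per
+		 * CoS (16) happens to equal 1 << DLB2_NUM_COS_DOMAINS.
+		 */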
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
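+
+/*
+ * Illustrative lifecycle sketch (the real call sites live in the PF/vdev
+ * probe and teardown paths, not in this file). The dlb2_hw structure must
+ * be zero-initialized first, e.g.:
+ *
+ *	memset(hw, 0, sizeof(*hw));
+ *	if (dlb2_resource_init(hw, DLB2_HW_V2_5) < 0)
+ *		goto fail;
+ *	...
+ *	dlb2_resource_free(hw);
+ */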
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for
+ * a vdev.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
+
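+/*
+ * Domain credit programming differs by hardware version: DLB 2.5 writes a
+ * single combined credit count per domain, while DLB 2.0 writes separate
+ * load-balanced and directed credit counts (see the _v2 variant below).
+ */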
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
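+/*
+ * DLB 2.5 only: carve a domain's credits out of the function's single
+ * combined pool. The DLB 2.0 equivalents, dlb2_attach_ldb_credits() and
+ * dlb2_attach_dir_credits(), appear further below.
+ */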
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
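+/*
+ * Allocate a domain's load-balanced ports. Per-CoS requests are satisfied
+ * first (strictly when cos_strict is set, otherwise falling back to the
+ * other classes), then the class-agnostic num_ldb_ports are taken from the
+ * lowest-numbered class of service that still has ports available.
+ */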
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from specific class-of-service, but
+		 * fallback to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to attach domain resources.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
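+
+/*
+ * Illustrative call sketch (hypothetical values; the real callers populate
+ * args from application/ioctl requests):
+ *
+ *	struct dlb2_create_sched_domain_args args = {0};
+ *	struct dlb2_cmd_response resp = {0};
+ *
+ *	args.num_ldb_queues = 2;
+ *	args.num_ldb_ports = 2;
+ *	args.num_dir_ports = 1;
+ *	args.num_atomic_inflights = 2048;
+ *	args.num_hist_list_entries = 2048;
+ *	args.num_credits = 8192;
+ *	if (dlb2_hw_create_sched_domain(hw, &args, &resp, false, 0) == 0)
+ *		... use resp.id ...
+ *
+ * num_credits applies to DLB 2.5; a DLB 2.0 request would set
+ * num_ldb_credits and num_dir_credits instead.
+ */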
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used to provide a CQ
+	 * with a depth of less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
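+		/*
+		 * For a batch token return, lock_id appears to carry the
+		 * number of tokens to return minus one.
+		 */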
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs into the CQs, which now
+	 * need to be drained.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used to provide a CQ
+	 * with a depth of less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs into the CQs, which now
+	 * need to be drained.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
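+/*
+ * qid_map slot state machine (bookkeeping only; the registers are updated
+ * elsewhere). With the DLB2_QUEUE_ prefixes dropped:
+ *
+ *	UNMAPPED -> MAPPED or MAP_IN_PROG
+ *	MAP_IN_PROG -> MAPPED or back to UNMAPPED
+ *	MAPPED -> UNMAPPED, UNMAP_IN_PROG, or MAPPED (priority change)
+ *	UNMAP_IN_PROG -> UNMAPPED, MAPPED, or UNMAP_IN_PROG_PENDING_MAP
+ *	UNMAP_IN_PROG_PENDING_MAP -> UNMAP_IN_PROG or UNMAPPED
+ *
+ * Any other transition is rejected as an internal error.
+ */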
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply: they only toggle CQs whose port both has this
+ * queue mapped and is administratively enabled, and they never change the
+ * ports' enabled flags. They should only be called by the dynamic CQ
+ * mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+				    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	/* Set the non-atomic scheduling haswork bit */
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules from causing the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
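+
+/*
+ * A return value of 1 above leaves the slot in the MAP_IN_PROG state; the
+ * map is completed later (see dlb2_domain_finish_map_qid_procedures() below)
+ * once the queue's inflight count has drained to zero.
+ */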
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
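+/*
+ * Walk every load-balanced port in the domain and complete any queue maps
+ * deferred by dlb2_ldb_port_map_qid_dynamic(). Returns the number of map
+ * operations still pending.
+ */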
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
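+			/* Bounded poll for the CQ's inflights to drain */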
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
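+	/*
+	 * DLB v2.0 tracks load-balanced and directed credits in separate
+	 * pools; v2.5 uses a single combined credit pool.
+	 */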
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
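+	/*
+	 * Return the domain's credits to the function: one combined pool on
+	 * v2.5, separate LDB and DIR pools on v2.0.
+	 */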
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
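+	/*
+	 * Map the queue to this CQ so its pending QEs can be scheduled and
+	 * drained.
+	 */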
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB 2.0/2.5 scheduling domain and its
+ * associated resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if software has not met the
+ *	    pre-condition above.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
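+	/*
+	 * Map the requested lock ID compression level to the hardware
+	 * compress code (1-7); the other accepted levels (0 and 65536) leave
+	 * compression disabled.
+	 */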
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
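+	/*
+	 * The token depth select is a power-of-two encoding of the CQ depth
+	 * (8 -> 1, 16 -> 2, ..., 1024 -> 8). Depths below 8 reuse the depth-8
+	 * encoding and are compensated by pre-loading tokens below.
+	 */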
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
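+	/* For example, a depth-4 CQ is created with init_tkn_cnt = 8 - 4 = 4. */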
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
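+	/*
+	 * Carve this port's history list entries out of the domain's
+	 * contiguous history list allocation.
+	 */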
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
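+
+/*
+ * Illustrative sketch only: one way a PF-level caller could use
+ * dlb2_hw_create_ldb_port(). The helper name and argument values below are
+ * assumptions for illustration, not taken from the driver.
+ */
+static int dlb2_example_create_ldb_port(struct dlb2_hw *hw,
+					u32 domain_id,
+					uintptr_t cq_dma_base)
+{
+	struct dlb2_create_ldb_port_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.cq_depth = 64;		/* must be a valid CQ depth */
+	args.cq_history_list_size = 64;	/* must be >= 1 */
+	args.cos_id = 0;
+	args.cos_strict = 0;		/* allow fallback to another CoS */
+
+	/* cq_dma_base must be 64B (cache line) aligned */
+	ret = dlb2_hw_create_ldb_port(hw, domain_id, &args, cq_dma_base,
+				      &resp, false, 0);
+	if (ret)
+		return ret;	/* resp.status holds the dlb2_error code */
+
+	return (int)resp.id;	/* physical port ID for a PF request */
+}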
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
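+
+/*
+ * Illustrative note: the if/else ladder above encodes the token depth select
+ * as ds = log2(max(cq_depth, 8)) - 2, e.g. depth 8 -> 1, 16 -> 2, ...,
+ * 1024 -> 8. A hypothetical helper (not used by the driver) computing the
+ * same value for a power-of-two depth:
+ */
+static inline u32 dlb2_example_dir_cq_token_depth_select(u32 cq_depth)
+{
+	u32 depth = (cq_depth < 8) ? 8 : cq_depth;
+	u32 ds = 0;
+
+	/* Integer log2 of a power-of-two depth */
+	while (depth > 1) {
+		depth >>= 1;
+		ds++;
+	}
+
+	return ds - 2;
+}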
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
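+
+/*
+ * Illustrative sketch only (assumed helper name and values): creating a
+ * directed port. Passing queue_id == -1 claims a free port/queue pair;
+ * passing the ID of an already-created directed queue links the port to it.
+ */
+static int dlb2_example_create_dir_port(struct dlb2_hw *hw,
+					u32 domain_id,
+					uintptr_t cq_dma_base,
+					int queue_id)
+{
+	struct dlb2_create_dir_port_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.cq_depth = 8;	/* power-of-two depth (or < 8, see above) */
+	args.queue_id = queue_id;
+
+	ret = dlb2_hw_create_dir_port(hw, domain_id, &args, cq_dma_base,
+				      &resp, false, 0);
+
+	return ret ? ret : (int)resp.id;
+}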
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
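+
+/*
+ * Illustrative sketch only (assumed helper name and values): creating the
+ * directed queue that backs an already-created directed port. Setting
+ * port_id reuses that port's port/queue pair instead of claiming a new one.
+ */
+static int dlb2_example_create_dir_queue(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 int port_id)
+{
+	struct dlb2_create_dir_queue_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+	int ret;
+
+	args.port_id = port_id;		/* or -1 to claim a free pair */
+	args.depth_threshold = 256;	/* assumed example threshold */
+
+	ret = dlb2_hw_create_dir_queue(hw, domain_id, &args, &resp, false, 0);
+
+	return ret ? ret : (int)resp.id;
+}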
+
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
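+
+/*
+ * Illustrative sketch only: requesting a QID->CQ mapping. A zero return
+ * means the request was accepted, not necessarily completed; pending maps
+ * are finished later via dlb2_finish_map_qid_procedures(). The helper name
+ * and values are assumptions.
+ */
+static int dlb2_example_map_qid(struct dlb2_hw *hw,
+				u32 domain_id,
+				int port_id,
+				int qid,
+				u8 priority)
+{
+	struct dlb2_map_qid_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	args.port_id = port_id;
+	args.qid = qid;
+	args.priority = priority;	/* must be < DLB2_QID_PRIORITIES */
+
+	return dlb2_hw_map_qid(hw, domain_id, &args, &resp, false, 0);
+}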
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
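+
+/*
+ * Illustrative sketch only (assumed helper name): requesting an unmap. The
+ * removal is asynchronous; dlb2_hw_pending_port_unmaps() below reports how
+ * many of the port's unmaps are still outstanding.
+ */
+static int dlb2_example_unmap_qid(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  int port_id,
+				  int qid)
+{
+	struct dlb2_unmap_qid_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	args.port_id = port_id;
+	args.qid = qid;
+
+	return dlb2_hw_unmap_qid(hw, domain_id, &args, &resp, false, 0);
+}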
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: pending port unmaps arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
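+
+/*
+ * Illustrative sketch only: one way a caller could test whether a port's
+ * outstanding unmaps have drained. Only the two driver calls are taken from
+ * this file; the helper name and polling policy are assumptions.
+ */
+static bool dlb2_example_port_unmaps_done(struct dlb2_hw *hw,
+					  u32 domain_id,
+					  int port_id)
+{
+	struct dlb2_pending_port_unmaps_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	/* Try to finish any unmap procedures that are ready to complete */
+	dlb2_finish_unmap_qid_procedures(hw);
+
+	args.port_id = port_id;
+	if (dlb2_hw_pending_port_unmaps(hw, domain_id, &args, &resp,
+					false, 0))
+		return false;	/* invalid domain or port ID */
+
+	return resp.id == 0;	/* zero pending removals == fully unmapped */
+}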
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
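+
+/*
+ * Illustrative sketch only (assumed helper name): starting a fully
+ * configured domain. After this point only QID remapping and port
+ * enable/disable remain configurable.
+ */
+static int dlb2_example_start_domain(struct dlb2_hw *hw, u32 domain_id)
+{
+	struct dlb2_start_domain_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	return dlb2_hw_start_domain(hw, domain_id, &args, &resp, false, 0);
+}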
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
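+
+/*
+ * Illustrative sketch only (assumed helper name): reading a load-balanced
+ * queue's depth, e.g. to check that a queue has drained before reset. The
+ * directed-queue variant above is used the same way.
+ */
+static u32 dlb2_example_ldb_queue_depth(struct dlb2_hw *hw,
+					u32 domain_id,
+					u32 queue_id)
+{
+	struct dlb2_get_ldb_queue_depth_args args = {0};
+	struct dlb2_cmd_response resp = {0};
+
+	args.queue_id = queue_id;
+
+	if (dlb2_hw_get_ldb_queue_depth(hw, domain_id, &args, &resp,
+					false, 0))
+		return 0;	/* invalid domain or queue ID */
+
+	return resp.id;		/* current queue depth */
+}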
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
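+
+/*
+ * Illustrative sketch only: a worker (the "kernel thread" referred to above,
+ * or a service core in a PMD) that drives pending map/unmap procedures until
+ * none remain. The loop structure is an assumption.
+ */
+static void dlb2_example_map_unmap_worker(struct dlb2_hw *hw)
+{
+	unsigned int remaining;
+
+	do {
+		remaining = dlb2_finish_unmap_qid_procedures(hw);
+		remaining += dlb2_finish_map_qid_procedures(hw);
+	} while (remaining != 0);
+}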
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
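+
+/*
+ * Illustrative sketch only (assumed helper name): a probe-time hook that
+ * opts all CQs into sparse (one QE per cache line) mode. Both calls must be
+ * made before any scheduling domain is configured.
+ */
+static void dlb2_example_enable_sparse_cq_modes(struct dlb2_hw *hw)
+{
+	dlb2_hw_enable_sparse_ldb_cq_mode(hw);
+	dlb2_hw_enable_sparse_dir_cq_mode(hw);
+}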
+
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's occupancy.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
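+
+/*
+ * Illustrative sketch only: configuring a sequence number group before any
+ * ordered queue uses it, then reading the settings back. Group 0 and 512
+ * SNs per queue are assumed example values.
+ */
+static int dlb2_example_configure_sn_group(struct dlb2_hw *hw)
+{
+	int ret;
+
+	/* Fails with -EPERM once an ordered queue occupies a group slot */
+	ret = dlb2_set_group_sequence_numbers(hw, 0, 512);
+	if (ret)
+		return ret;
+
+	if (dlb2_get_group_sequence_number_occupancy(hw, 0) != 0)
+		return -EPERM;	/* group unexpectedly already in use */
+
+	return dlb2_get_group_sequence_numbers(hw, 0);	/* 512 on success */
+}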
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
deleted file mode 100644
index 2f66b2c71..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ /dev/null
@@ -1,6235 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types_new.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
-}
-
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization, and the dlb2_hw structure should
- * be zero-initialized before calling the function.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. The port->QID mapping is
-	 * application dependent, but the driver interleaves port IDs as much
-	 * as possible to reduce the likelihood of sequential ports mapping to
-	 * the same QID(s). This initial allocation of port IDs maximizes the
-	 * average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	hw->ver = ver;
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	if (hw->ver == DLB2_HW_V2) {
-		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-		hw->pf.num_avail_dqed_entries =
-			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-	} else {
-		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
-	}
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
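
The init_ldb_port_allocation table above has a closed form: entry i is
(i/16)*16 + (7*(i%16)) % 16, i.e. a stride-7 walk within each block of 16
IDs, so consecutive entries always differ by 7 or 9. A small standalone
program (illustrative only; the driver simply hard-codes the table) that
reproduces it:

#include <stdio.h>

/*
 * Reproduce the interleaved LDB port ID table: within each block of 16 IDs,
 * the IDs are visited with stride 7. Since gcd(7, 16) == 1, each block is a
 * full permutation, and adjacent entries stay well separated.
 */
int main(void)
{
	int i;

	for (i = 0; i < 64; i++) {
		int id = (i / 16) * 16 + (7 * (i % 16)) % 16;

		printf("%2d%s", id, ((i % 16) == 15) ? ",\n" : ", ");
	}

	return 0;
}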
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	u32 pmcsr_dis;
-
-	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
-
-	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
-
-	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
-}
-
-/**
- * dlb2_hw_get_num_resources() - query the PCI function's available resources
- * @hw: dlb2_hw handle for a particular device.
- * @arg: pointer to resource counts.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the number of available resources for the PF or for a
- * VF.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
- * invalid.
- */
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	if (hw->ver == DLB2_HW_V2) {
-		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-	} else {
-		arg->num_credits = rsrcs->num_avail_entries;
-	}
-	return 0;
-}
-
-static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_ldb_credits,
-		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->num_dir_credits,
-		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	if (hw->ver == DLB2_HW_V2)
-		dlb2_configure_domain_credits_v2(hw, domain);
-	else
-		dlb2_configure_domain_credits_v2_5(hw, domain);
-}
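
The split above mirrors the credit model difference between the two devices:
the v2.0 helper programs separate load-balanced and directed credit counts
for the domain, while the v2.5 helper programs a single combined count. A
simplified model of what that means for software-side credit accounting
(illustrative only, not the PMD's datapath code; every name below is invented
for the sketch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct credit_pool_v2 {
	uint32_t ldb_credits;	/* load-balanced enqueue credits */
	uint32_t dir_credits;	/* directed enqueue credits */
};

struct credit_pool_v2_5 {
	uint32_t credits;	/* single combined pool */
};

/* DLB 2.0 model: an enqueue draws from the pool matching the queue type. */
static bool acquire_v2(struct credit_pool_v2 *p, bool is_dir)
{
	uint32_t *cnt = is_dir ? &p->dir_credits : &p->ldb_credits;

	if (*cnt == 0)
		return false;
	(*cnt)--;
	return true;
}

/* DLB 2.5 model: any enqueue draws from the same combined pool. */
static bool acquire_v2_5(struct credit_pool_v2_5 *p)
{
	if (p->credits == 0)
		return false;
	p->credits--;
	return true;
}

int main(void)
{
	struct credit_pool_v2 v2 = { .ldb_credits = 2, .dir_credits = 1 };
	struct credit_pool_v2_5 v25 = { .credits = 3 };

	/* v2.0 model: directed traffic stalls once its own pool is empty. */
	acquire_v2(&v2, true);
	if (!acquire_v2(&v2, true) && v2.ldb_credits == 2)
		puts("v2.0: DIR starved while LDB credits remain");

	/* v2.5 model: the same traffic runs until the shared pool is empty. */
	while (acquire_v2_5(&v25))
		;
	puts("v2.5: combined pool drained");

	return 0;
}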
-
-static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
-			       struct dlb2_hw_domain *domain,
-			       u32 num_credits,
-			       struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_entries < num_credits) {
-		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_entries -= num_credits;
-	domain->num_credits += num_credits;
-	return 0;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
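
The three preference passes above can be hard to follow through the list
iterators. A toy model of the same search order over a flat array (not driver
code; toy_port and pick_port are invented for the sketch): prefer a free port
whose two neighbors belong to another domain, then one foreign neighbor with
the other neighbor free, then two free neighbors, and finally any free port.

#include <stdbool.h>

#define NUM_PORTS 64

struct toy_port {
	bool owned;
	int domain;
};

int pick_port(const struct toy_port ports[NUM_PORTS], int domain)
{
	int pass, i;

	for (pass = 0; pass < 3; pass++) {
		for (i = 0; i < NUM_PORTS; i++) {
			const struct toy_port *prev, *next;
			bool prev_foreign, next_foreign;

			if (ports[i].owned)
				continue;

			/* Port IDs wrap around, as in the driver. */
			prev = &ports[(i + NUM_PORTS - 1) % NUM_PORTS];
			next = &ports[(i + 1) % NUM_PORTS];
			prev_foreign = prev->owned && prev->domain != domain;
			next_foreign = next->owned && next->domain != domain;

			/* Pass 0: both neighbors owned by another domain. */
			if (pass == 0 && prev_foreign && next_foreign)
				return i;
			/* Pass 1: one foreign neighbor, the other free. */
			if (pass == 1 &&
			    ((prev_foreign && !next->owned) ||
			     (next_foreign && !prev->owned)))
				return i;
			/* Pass 2: both neighbors free. */
			if (pass == 2 && !prev->owned && !next->owned)
				return i;
		}
	}

	/* Last resort: the first free port, if any. */
	for (i = 0; i < NUM_PORTS; i++)
		if (!ports[i].owned)
			return i;
	return -1;
}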
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from a specific class-of-service, but
-		 * fall back to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
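
The non-strict path above reduces to a rotate-and-retry loop over the classes
of service. A condensed sketch of that loop, with try_alloc() standing in for
__dlb2_attach_ldb_ports() (the helper and its context pointer are invented
for the sketch):

#define NUM_COS 4

/*
 * Try the requested class of service first, then probe the remaining classes
 * in a fixed rotation, like the (i + k) % DLB2_NUM_COS_DOMAINS loop above.
 */
int alloc_one_port(int requested_cos,
		   int (*try_alloc)(int cos_id, void *ctx), void *ctx)
{
	int k;

	for (k = 0; k < NUM_COS; k++) {
		int cos_id = (requested_cos + k) % NUM_COS;

		if (try_alloc(cos_id, ctx) == 0)
			return 0;	/* allocated from this class */
	}

	return -1;			/* all classes exhausted */
}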
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
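
The history list is the one resource above that is handed out as a contiguous
range rather than a simple count, which is why it goes through the bitmap. A
standalone sketch of the same find-and-clear idea on a single 64-bit word
(the real dlb2_bitmap spans many words; alloc_contiguous() is invented for
the sketch):

#include <stdint.h>

/*
 * Find a run of 'len' set (free) bits, return its base, and clear the run to
 * mark it allocated; return -1 if no contiguous run is large enough.
 */
int alloc_contiguous(uint64_t *bitmap, unsigned int len)
{
	unsigned int base;

	if (len == 0 || len > 64)
		return -1;

	for (base = 0; base + len <= 64; base++) {
		uint64_t mask = (len == 64) ?
				~0ULL : (((1ULL << len) - 1) << base);

		if ((*bitmap & mask) == mask) {
			*bitmap &= ~mask;	/* claim the range */
			return (int)base;
		}
	}

	return -1;
}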
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret)
-		return ret;
-
-	if (hw->ver == DLB2_HW_V2) {
-		ret = dlb2_attach_ldb_credits(rsrcs,
-					      domain,
-					      args->num_ldb_credits,
-					      resp);
-		if (ret)
-			return ret;
-
-		ret = dlb2_attach_dir_credits(rsrcs,
-					      domain,
-					      args->num_dir_credits,
-					      resp);
-		if (ret)
-			return ret;
-	} else {  /* DLB 2.5 */
-		ret = dlb2_attach_credits(rsrcs,
-					  domain,
-					  args->num_credits,
-					  resp);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp,
-				  struct dlb2_hw *hw,
-				  struct dlb2_hw_domain **out_domain)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EFAULT;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-	if (hw->ver == DLB2_HW_V2_5) {
-		if (rsrcs->num_avail_entries < args->num_credits) {
-			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[2]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[3]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-			    args->num_ldb_credits);
-		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-			    args->num_dir_credits);
-	} else {
-		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
-			    args->num_credits);
-	}
-}
-
-/**
- * dlb2_hw_create_sched_domain() - create a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @args: scheduling domain creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a scheduling domain containing the resources specified
- * in args. The individual resources (queues, ports, credits) can be configured
- * after creating a scheduling domain.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the domain ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
-	if (ret)
-		return ret;
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to attach domain resources.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used to support CQs
-	 * with depth less than 8.
-	 */
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
-	       port->init_tkn_cnt;
-}
-
-static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			      struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
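
The hcw_mem[8] / &hcw_mem[4] arithmetic above is an alignment trick: four
16-byte HCWs form the one 64-byte block that dlb2_movdir64b() writes, and
rounding the address of element 4 down to a 64-byte boundary always lands on
an in-bounds, cache-line-aligned window of four entries, wherever the stack
happens to place the array. A standalone sketch of just that computation
(toy_hcw stands in for struct dlb2_hcw):

#include <stdint.h>

struct toy_hcw {
	uint8_t bytes[16];	/* one 16B HCW/QE */
};

/*
 * With an 8-entry array (128 bytes), &mem[4] sits 64 bytes past the start,
 * so rounding it down to a 64-byte boundary yields a pointer that is at
 * least mem and at most &mem[4]; the four-entry (64-byte) window starting
 * there is therefore entirely inside the array.
 */
struct toy_hcw *aligned_window(struct toy_hcw mem[8])
{
	uintptr_t p = (uintptr_t)&mem[4];

	return (struct toy_hcw *)(p & ~(uintptr_t)0x3F);
}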
-
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		dlb2_drain_dir_cq(hw, port);
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
-}
-
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to have gone empty, they
-	 * must have scheduled one or more QEs into the CQs.
-	 */
-	dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used to support CQs
-	 * with depth less than 8.
-	 */
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
-		port->init_tkn_cnt;
-}
-
-static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
-
-static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			dlb2_drain_ldb_cq(hw, port);
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	u32 aqed, ldb, atm;
-
-	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-						       queue->id.phys_id));
-	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-	atm = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
-
-	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
-	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
-	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
-}
-
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to have gone empty, they
-	 * must have scheduled one or more QEs into the CQs.
-	 */
-	dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
-			if (queue->id.virt_id == id)
-				return queue;
-		}
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
-		if (queue->id.virt_id == id)
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
-		if (domain->id.virt_id == id)
-			return domain;
-	}
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
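
For reference, the transitions the switch above allows, written as a lookup
table (the enum and table are local to this sketch and mirror
enum dlb2_qid_map_state):

#include <stdbool.h>

enum toy_map_state {
	ST_UNMAPPED,
	ST_MAPPED,
	ST_MAP_IN_PROG,
	ST_UNMAP_IN_PROG,
	ST_UNMAP_IN_PROG_PENDING_MAP,
	ST_NUM_STATES
};

/* valid_transition[from][to]; everything else is rejected with -EFAULT. */
const bool valid_transition[ST_NUM_STATES][ST_NUM_STATES] = {
	[ST_UNMAPPED] = {
		[ST_MAPPED] = true, [ST_MAP_IN_PROG] = true,
	},
	/* MAPPED -> MAPPED is a priority change only. */
	[ST_MAPPED] = {
		[ST_UNMAPPED] = true, [ST_UNMAP_IN_PROG] = true,
		[ST_MAPPED] = true,
	},
	[ST_MAP_IN_PROG] = {
		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
	},
	[ST_UNMAP_IN_PROG] = {
		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
		[ST_UNMAP_IN_PROG_PENDING_MAP] = true,
	},
	/* Briefly becomes UNMAPPED before re-entering MAP_IN_PROG. */
	[ST_UNMAP_IN_PROG_PENDING_MAP] = {
		[ST_UNMAP_IN_PROG] = true, [ST_UNMAPPED] = true,
	},
};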
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply: they only toggle CQs that the user has enabled
- * and whose slot for this queue is in the MAPPED state. They should only be
- * called by the dynamic CQ mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 cq2qid;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
-
-	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
-		    & DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
-							  p->id.phys_id));
-	else
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
-							  p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
-	if (i == 1 || i == 5)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
-	if (i == 2 || i == 6)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
-	if (i == 3 || i == 7)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
-
-	if (i < 4)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
-
-	atm_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    atm_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(hw->ver,
-					q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(hw->ver,
-					 q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
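
A side note on the register layout used above: the eight QID slots of a
load-balanced CQ are split across two registers (CQ2QID0 carries slots 0-3,
CQ2QID1 carries slots 4-7), and within a register the QID lands in one of
four fields selected by slot % 4. A small helper capturing just that
selection (cq2qid_location and slot_to_cq2qid are invented for the sketch):

struct cq2qid_location {
	int reg_index;	/* 0 -> CQ2QID0, 1 -> CQ2QID1 */
	int field;	/* 0..3 -> QID_P0..QID_P3 */
};

struct cq2qid_location slot_to_cq2qid(int slot)
{
	struct cq2qid_location loc = {
		.reg_index = slot / 4,	/* which register */
		.field = slot % 4,	/* which QID_Px field within it */
	};

	return loc;
}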
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	u32 ctrl = 0;
-	u32 active;
-	u32 enq;
-
-	/* Set the atomic scheduling haswork bit */
-	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-							 queue->id.phys_id));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(active,
-				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
-				    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	enq = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						       queue->id.phys_id));
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(enq,
-				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
-		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	u32 infl_lim = 0;
-
-	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
-		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    infl_lim);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u32 infl_cnt;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules from causing the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-	u32 infl_cnt;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
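
Stripped of the register accesses, the dynamic-map flow above is: freeze the
queue, record the request, and only commit once the inflight count reads zero
twice with the involved CQs quiesced in between. A control-flow sketch with
hypothetical helpers standing in for the driver calls (none of these names
exist in the driver):

struct map_ctx;	/* opaque stand-in for (hw, domain, port, queue, prio) */

/* Hypothetical helpers; they do not exist in the driver. */
unsigned int read_inflight_cnt(struct map_ctx *c);
void set_inflight_limit_zero(struct map_ctx *c);
void record_map_in_prog(struct map_ctx *c);
void quiesce_cqs(struct map_ctx *c);
void resume_cqs(struct map_ctx *c);
int finish_map(struct map_ctx *c);
int defer_to_worker(struct map_ctx *c);	/* returns 1: mapping in progress */

int map_qid_dynamic_outline(struct map_ctx *c)
{
	/* 1. Stop the queue from scheduling anything new. */
	set_inflight_limit_zero(c);

	/* 2. Record the request in an unused slot (MAP_IN_PROG). */
	record_map_in_prog(c);

	/* 3. If the queue still has inflights, finish the map later. */
	if (read_inflight_cnt(c) != 0)
		return defer_to_worker(c);

	/*
	 * 4. Close the race: quiesce the target CQ and the CQs already
	 *    mapped to the QID, then re-read the inflight count before
	 *    committing.
	 */
	quiesce_cqs(c);

	if (read_inflight_cnt(c) != 0) {
		resume_cqs(c);
		return defer_to_worker(c);
	}

	/* 5. Safe to map; the finish step re-enables the mapped CQs. */
	return finish_map(c);
}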
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		u32 infl_cnt;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
-
-	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
-
-	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
-							 port_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver,
-						queue_id, port_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver,
-						  queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it was not manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
-						       port->id.phys_id));
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 vpp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 vpp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
-						       port->id.phys_id),
-				    int_en);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
-						      port->id.phys_id),
-				    wd_en);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-			    int_en);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
-			    wd_en);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    0);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	unsigned long max_ports;
-	int domain_offset;
-	RTE_SET_USED(iter);
-
-	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-
-	domain_offset = domain->id.phys_id * max_ports;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 chk_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
-							 port->id.phys_id),
-				    chk_en);
-		}
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int j;
-
-			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 pp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    pp_v);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 pp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    pp_v);
-		}
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queues' inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	if (hw->ver != DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
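The comment above notes that the device derives the producer port ID from MMIO address bits 17:12, which is why SR-IOV and Scalable IOV need different virt_id handling. A small illustrative helper (a sketch only, not part of the PMD) that extracts the PP ID from a producer-port MMIO offset under that convention:

/* Each producer port occupies its own 4 KB page; bits 17:12 select the port. */
static inline unsigned int example_pp_id_from_offset(unsigned long pp_mmio_offset)
{
	return (unsigned int)((pp_mmio_offset >> 12) & 0x3f);
}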
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
-						       queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
-							 queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-	} else
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			ldb_port->cq_depth = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	if (hw->ver == DLB2_HW_V2_5) {
-		rsrcs->num_avail_entries += domain->num_credits;
-		domain->num_credits = 0;
-	} else {
-		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-		domain->num_ldb_credits = 0;
-
-		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-		domain->num_dir_credits = 0;
-	}
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port = NULL;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
-					  typeof(*port));
-		if (port)
-			break;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - reset a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function resets and frees a DLB 2.0 scheduling domain and its associated
- * resources.
- *
- * Pre-condition: the driver must ensure software has stopped sending QEs
- * through this domain's producer ports before invoking this function, or
- * undefined behavior will result.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise.
- *
- * EINVAL - Invalid domain ID, or the domain is not configured.
- * EFAULT - Internal error. (Possibly caused if the pre-condition
- *	    is not met.)
- * ETIMEDOUT - Hardware component didn't reset in the expected time.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_ldb_cqs(hw, domain, false);
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	return dlb2_domain_reset_software_state(hw, domain);
-}
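A minimal caller sketch for dlb2_reset_domain() as used from the PF (vdev_req = false). example_quiesce_producers() is a hypothetical placeholder for however the driver satisfies the documented pre-condition of stopping enqueues first:

static int example_teardown_domain(struct dlb2_hw *hw, u32 domain_id)
{
	/* Pre-condition: software must stop sending QEs through the
	 * domain's producer ports before the reset (driver-specific).
	 */
	example_quiesce_producers(domain_id);

	/* PF-originated request: vdev_req is false, vdev_id is ignored. */
	return dlb2_reset_domain(hw, domain_id, false, 0);
}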
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (!queue) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-	u32 reg = 0;
-	u32 alimit;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	DLB2_BITS_SET(reg, args->num_qid_inflights,
-		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), reg);
-
-	alimit = queue->aqed_limit;
-
-	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
-		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	reg = 0;
-	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	switch (args->lock_id_comp_level) {
-	case 64:
-		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 128:
-		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 256:
-		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 512:
-		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 1024:
-		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 2048:
-		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 4096:
-		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	default:
-		/* No compression by default */
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
-
-	reg = 0;
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
-
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue does not use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
-
-	/* Configure SNs */
-	reg = 0;
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
-	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
-	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
-	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_LDB_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.virt_id,
-			      DLB2_SYS_LDB_QID2VQID_VQID);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
-}
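For reference, the lock_id_comp_level switch in dlb2_configure_ldb_queue() above amounts to code = log2(level) - 5 for the power-of-two levels 64..4096, and 0 (no compression) for 0 and 65536. A compact sketch of that equivalence, assuming the input has already passed the verify step's whitelist:

static u32 example_hid_width_code(u32 lock_id_comp_level)
{
	/* 64 -> 1, 128 -> 2, ..., 4096 -> 7; anything else -> 0 */
	if (lock_id_comp_level >= 64 && lock_id_comp_level <= 4096)
		return (u32)__builtin_ctz(lock_id_comp_level) - 5;

	return 0;
}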
-
-/**
- * dlb2_hw_create_ldb_queue() - create a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    the domain has already been started, or the requested queue name is
- *	    already in use.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
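A minimal PF-side caller sketch for dlb2_hw_create_ldb_queue(); the field values are illustrative only. On failure resp.status carries the detailed dlb2_error code, and on success resp.id holds the new queue ID:

static int example_create_ordered_queue(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_create_ldb_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.num_sequence_numbers = 64;	/* ordered queue */
	args.num_qid_inflights = 64;	/* must be <= num_sequence_numbers */
	args.num_atomic_inflights = 0;	/* no atomic scheduling */

	ret = dlb2_hw_create_ldb_queue(hw, domain_id, &args, &resp,
				       false /* vdev_req */, 0);
	if (ret)
		return ret;

	return (int)resp.id;
}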
-
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 hl_base = 0;
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg,
-		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	port->cq_depth = args->cq_depth;
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg,
-			      port->init_tkn_cnt,
-			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	reg = 0;
-	DLB2_BITS_SET(reg,
-		      port->hist_list_entry_limit - 1,
-		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
-
-	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
-		      DLB2_CHP_HIST_LIST_BASE_BASE);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    hl_base);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, args->cq_history_list_size,
-		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		reg = 0;
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_LDB_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	/* Disable the port's QID mappings */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
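The token-depth-select ladder above is equivalent to ds = log2(cq_depth) - 2 for depths of 8 through 1024, with depths below 8 using the depth-8 encoding plus the non-zero initial token count described in the comment. A compact sketch of that encoding, assuming cq_depth has already passed dlb2_cq_depth_is_valid():

static void example_cq_depth_encode(u32 cq_depth, u32 *ds, u32 *init_tkn_cnt)
{
	u32 eff_depth = (cq_depth < 8) ? 8 : cq_depth;

	*ds = (u32)__builtin_ctz(eff_depth) - 2;	/* 8 -> 1, ..., 1024 -> 8 */
	*init_tkn_cnt = (cq_depth < 8) ? 8 - cq_depth : 0;
}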
-
-static bool
-dlb2_cq_depth_is_valid(u32 depth)
-{
-	if (depth != 1 && depth != 2 &&
-	    depth != 4 && depth != 8 &&
-	    depth != 16 && depth != 32 &&
-	    depth != 64 && depth != 128 &&
-	    depth != 256 && depth != 512 &&
-	    depth != 1024)
-		return false;
-
-	return true;
-}
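dlb2_cq_depth_is_valid() accepts exactly the powers of two from 1 to 1024; an equivalent (sketch-only) formulation using the usual power-of-two test:

static bool example_cq_depth_is_valid(u32 depth)
{
	return depth != 0 && depth <= 1024 && (depth & (depth - 1)) == 0;
}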
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_ldb_port **out_port,
-				 int *out_cos_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int i, id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		id = args->cos_id;
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-					  typeof(*port));
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-	}
-
-	if (!port) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = port;
-	*out_cos_id = id;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_ldb_port() - create a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port,
-					       &cos_id);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
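
A minimal caller-side sketch of this interface may help; it uses only the
argument fields logged above, assumes the dlb2 base headers are in scope,
and leaves every other args field zero (the example_* name is illustrative,
not part of the driver):

/*
 * Illustrative sketch only, not taken from the patch. `hw` is a probed
 * device, `dom_id` names a configured (not yet started) domain and
 * `cq_base` is the 64B-aligned IOVA of the CQ memory.
 */
static int example_create_ldb_port(struct dlb2_hw *hw, u32 dom_id,
				   uintptr_t cq_base)
{
	struct dlb2_create_ldb_port_args args = {
		.cq_depth = 64,		    /* must pass dlb2_cq_depth_is_valid() */
		.cq_history_list_size = 64, /* >= 1 and within the domain's budget */
		.cos_id = 0,
		.cos_strict = 0,	    /* allow fallback to the other CoS groups */
	};
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_create_ldb_port(hw, dom_id, &args, cq_base, &resp,
				      false /* PF request */, 0);
	/* On success resp.id holds the new port ID; on error see resp.status. */
	return ret ? ret : (int)resp.id;
}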
-
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-	}
-
-	return NULL;
-}
-
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_dir_pq_pair **out_port)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->queue_id != -1) {
-		/*
-		 * If the user claims the queue is already configured, validate
-		 * the queue ID, its domain, and whether the queue is
-		 * configured.
-		 */
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->queue_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the port's queue is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = pq;
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg, port->init_tkn_cnt,
-			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_DIR_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
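
The if/else ladder above maps the CQ depth onto the token-depth-select
encoding: depths of 8 or less share the value 1, and each further doubling
adds one. A compact equivalent, assuming the depth has already passed
dlb2_cq_depth_is_valid() (the helper name is illustrative only):

/* Illustrative sketch: same mapping as the ladder above. */
static inline u32 example_cq_tkn_depth_sel(u32 depth)
{
	if (depth <= 8)
		return 1;

	/* Valid depths above 8 are powers of two, so ctz(depth) == log2(depth) */
	return (u32)__builtin_ctz(depth) - 2;	/* 16 -> 2, ..., 1024 -> 8 */
}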
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - create a directed port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
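
A similar caller-side sketch for the directed flavor: passing queue_id = -1
asks the driver to pick a free port/queue pair, as the verification code
above shows (names and values here are illustrative assumptions):

/* Illustrative sketch only; same assumptions as the LDB example above. */
static int example_create_dir_port(struct dlb2_hw *hw, u32 dom_id,
				   uintptr_t cq_base)
{
	struct dlb2_create_dir_port_args args = {
		.cq_depth = 64,
		.queue_id = -1,	/* no pre-configured queue: take a free pair */
	};
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_create_dir_port(hw, dom_id, &args, cq_base, &resp,
				      false, 0);
	return ret ? ret : (int)resp.id;	/* resp.id: the chosen pair */
}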
-
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	unsigned int offs;
-	u32 reg = 0;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
-
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-			queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_DIR_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_dir_pq_pair **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->port_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the queue's port is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	*out_domain = domain;
-	*out_queue = pq;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - create a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
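
Continuing the directed sketch above: once the port exists, the paired queue
can be created by passing the port ID returned there; the depth_threshold
value below is an arbitrary illustrative choice:

/* Illustrative sketch only, not taken from the patch. */
static int example_create_dir_queue(struct dlb2_hw *hw, u32 dom_id,
				    int port_id)
{
	struct dlb2_create_dir_queue_args args = {
		.port_id = port_id,	/* pair with the already-created port */
		.depth_threshold = 256,	/* assumed depth-threshold policy */
	};
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_create_dir_queue(hw, dom_id, &args, &resp, false, 0);
	return ret ? ret : (int)resp.id;
}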
-
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-	}
-
-	return NULL;
-}
-
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	u32 cq2priov;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw,
-			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
-
-	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
-		    DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((args->priority & 0x7) << slot * 3) &
-		    DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id,
-				    struct dlb2_hw_domain **out_domain,
-				    struct dlb2_ldb_port **out_port,
-				    struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-	*out_port = port;
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-/**
- * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: map QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to schedule QEs from the specified queue
- * to the specified port. Each load-balanced port can be mapped to up to 8
- * queues; each load-balanced queue can potentially map to all the
- * load-balanced ports.
- *
- * A successful return does not necessarily mean the mapping was configured. If
- * this function is unable to immediately map the queue to the port, it will
- * add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. In a sense, this is
- * an asynchronous function.
- *
- * This asynchronicity creates two views of the state of hardware: the actual
- * hardware state and the requested state (as if every request completed
- * immediately). If there are any pending map/unmap operations, the requested
- * state will differ from the actual state. All validation is performed with
- * respect to the pending state; for instance, if there are 8 pending map
- * operations for port X, a request for a 9th will fail because a load-balanced
- * port can only map up to 8 queues.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id,
-				       &domain,
-				       &port,
-				       &queue);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state new_st;
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, new_st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
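
A short usage sketch: because the mapping may complete asynchronously, a
zero return only means the request was accepted or queued (the helper name
and the argument values are illustrative):

/* Illustrative sketch only; priority must be < DLB2_QID_PRIORITIES. */
static int example_map_qid(struct dlb2_hw *hw, u32 dom_id, u32 port_id,
			   u32 qid, u8 prio)
{
	struct dlb2_map_qid_args args = {
		.port_id = port_id,
		.qid = qid,
		.priority = prio,
	};
	struct dlb2_cmd_response resp = {0};

	return dlb2_hw_map_qid(hw, dom_id, &args, &resp, false, 0);
}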
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id,
-				      struct dlb2_hw_domain **out_domain,
-				      struct dlb2_ldb_port **out_port,
-				      struct dlb2_ldb_queue **out_queue)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		goto done;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-
-done:
-	*out_domain = domain;
-	*out_port = port;
-	*out_queue = queue;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: unmap QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to stop scheduling QEs from the specified
- * queue to the specified port.
- *
- * A successful return does not necessarily mean the mapping was removed. If
- * this function is unable to immediately unmap the queue from the port, it
- * will add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. See
- * dlb2_hw_map_qid() for more details.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id,
-					 &domain,
-					 &port,
-					 &queue);
-	if (ret)
-		return ret;
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-/**
- * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
- *	progress.
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: number of unmaps in progress args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the number of unmaps in progress.
- *
- * Errors:
- * EINVAL - Invalid port ID.
- */
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
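
Taken together with dlb2_hw_unmap_qid() above, this interface supports a
simple completion-wait pattern; a hedged sketch (a real caller would sleep
or back off between polls rather than spin):

/* Illustrative sketch only, not taken from the patch. */
static int example_unmap_and_wait(struct dlb2_hw *hw, u32 dom_id,
				  u32 port_id, u32 qid)
{
	struct dlb2_unmap_qid_args unmap = { .port_id = port_id, .qid = qid };
	struct dlb2_pending_port_unmaps_args pending = { .port_id = port_id };
	struct dlb2_cmd_response resp = {0};
	int ret;

	ret = dlb2_hw_unmap_qid(hw, dom_id, &unmap, &resp, false, 0);
	if (ret)
		return ret;

	/* resp.id reports how many unmaps are still in progress. */
	do {
		ret = dlb2_hw_pending_port_unmaps(hw, dom_id, &pending,
						  &resp, false, 0);
	} while (ret == 0 && resp.id != 0);

	return ret;
}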
-
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id,
-					 struct dlb2_hw_domain **out_domain)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - start a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @arg: start domain arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function starts a scheduling domain, which allows applications to send
- * traffic through it. Once a domain is started, its resources can no longer be
- * configured (besides QID remapping and port enable/disable).
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - the domain is not configured, or the domain is already started.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *args,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(args);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id,
-					    &domain);
-	if (ret)
-		return ret;
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
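
Starting the domain is the final configuration step; a minimal sketch (the
args structure is unused by the implementation above, so a zeroed one is
passed):

/* Illustrative sketch only, not taken from the patch. */
static int example_start_domain(struct dlb2_hw *hw, u32 dom_id)
{
	struct dlb2_start_domain_args args = {0};
	struct dlb2_cmd_response resp = {0};

	return dlb2_hw_start_domain(hw, dom_id, &args, &resp, false, 0);
}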
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
-
-/**
- * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
- *	ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
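
Both sparse-CQ controls are device-global and, per the comments above, must
be set before any scheduling domain is configured; a typical probe-time
sketch (the wrapper name is illustrative):

/* Illustrative sketch only: enable sparse CQ mode for both port types. */
static void example_enable_sparse_cq_modes(struct dlb2_hw *hw)
{
	dlb2_hw_enable_sparse_ldb_cq_mode(hw);
	dlb2_hw_enable_sparse_dir_cq_mode(hw);
}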
-
-/**
- * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the configured number of sequence numbers per queue
- * for the specified group.
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-/**
- * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the group's number of in-use slots (i.e. load-balanced
- * queues using the specified group).
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						u32 group_id,
-						u32 val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
-}
-
-/**
- * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- * @val: requested amount of sequence numbers per queue.
- *
- * This function configures the group's number of sequence numbers per queue.
- * val can be a power-of-two between 32 and 1024, inclusive. This setting can
- * be configured until the first ordered load-balanced queue is configured, at
- * which point the configuration is locked.
- *
- * Return:
- * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
- * ordered queue is configured.
- */
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    u32 group_id,
-				    u32 val)
-{
-	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	struct dlb2_sn_group *group;
-	u32 sn_mode = 0;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
-
-	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
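
A short sketch tying the two sequence-number interfaces together: configure
group 0 before any ordered queue exists, then read the setting back (512 is
one of the allocations listed in valid_allocations above):

/* Illustrative sketch only, not taken from the patch. */
static int example_config_sn_group0(struct dlb2_hw *hw)
{
	int ret;

	ret = dlb2_set_group_sequence_numbers(hw, 0, 512);
	if (ret)
		return ret;	/* -EINVAL or -EPERM, per the comment above */

	return dlb2_get_group_sequence_numbers(hw, 0);	/* 512 on success */
}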
-
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 22/27] event/dlb2: use new implementation of HW types header
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (20 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 21/27] event/dlb2: use new implementation of resource file Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 23/27] event/dlb2: use new combined register map Timothy McDaniel
                       ` (4 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

As support for DLB v2.5 was added, modifications were made to
dlb2_hw_types_new.h, but the old file had to be preserved while the
port was in progress in order to meet the requirement that each
individual patch in the series compiles successfully. Now that the
DLB v2.5 support is fully integrated, the old (original) file can be
removed, along with the DLB2_USE_NEW_HEADERS define that selected
which version of the file certain source files included. The new
file is renamed and used unconditionally in all DLB source files.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h    |  38 +-
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 357 ------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c    |   4 +-
 drivers/event/dlb2/pf/dlb2_main.c             |   4 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 -
 drivers/event/dlb2/pf/dlb2_pf.c               |   4 +-
 6 files changed, 33 insertions(+), 378 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index b007e1674..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -2,14 +2,21 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#ifndef __DLB2_HW_TYPES_H
-#define __DLB2_HW_TYPES_H
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
 
 #include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
 
 #define DLB2_MAX_NUM_VDEVS			16
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
@@ -141,7 +148,7 @@ struct dlb2_dir_pq_pair {
 };
 
 enum dlb2_qid_map_state {
-	/* The slot doesn't contain a valid queue mapping */
+	/* The slot does not contain a valid queue mapping */
 	DLB2_QUEUE_UNMAPPED,
 	/* The slot contains a valid queue mapping */
 	DLB2_QUEUE_MAPPED,
@@ -174,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
@@ -245,8 +253,15 @@ struct dlb2_hw_domain {
 	u32 avail_hist_list_entries;
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_offset;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u32 num_used_aqed_entries;
 	struct dlb2_resource_id id;
@@ -269,8 +284,15 @@ struct dlb2_function_resources {
 	u32 num_avail_ldb_queues;
 	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
 	u32 num_avail_dir_pq_pairs;
-	u32 num_avail_qed_entries;
-	u32 num_avail_dqed_entries;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u8 locked; /* (VDEV only) */
 };
@@ -332,4 +354,4 @@ struct dlb2_hw {
 	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
 };
 
-#endif /* __DLB2_HW_TYPES_H */
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
deleted file mode 100644
index 4a6037775..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ /dev/null
@@ -1,357 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_HW_TYPES_NEW_H
-#define __DLB2_HW_TYPES_NEW_H
-
-#include "../../dlb2_priv.h"
-#include "dlb2_user.h"
-
-#include "dlb2_osdep_list.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-
-#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
-				 | (((val) << (mask##_LOC)) & (mask)))
-#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
-#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
-#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
-
-#define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_NUM_ARB_WEIGHTS			8
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_WEIGHT				255
-#define DLB2_NUM_COS_DOMAINS			4
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
-#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-
-#define DLB2_FUNC_BAR				0
-#define DLB2_CSR_BAR				2
-
-#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
-#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
-
-#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
-#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
-
-#define DLB2_ALARM_HW_SOURCE_SYS 0
-#define DLB2_ALARM_HW_SOURCE_DLB 1
-
-#define DLB2_ALARM_HW_UNIT_CHP 4
-
-#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
-#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
-#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
-#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
-#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
-
-/*
- * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
- * the PF driver.
- */
-#define DLB2_DRV_LDB_PP_BASE   0x2300000
-#define DLB2_DRV_LDB_PP_STRIDE 0x1000
-#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
-				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_DRV_DIR_PP_BASE   0x2200000
-#define DLB2_DRV_DIR_PP_STRIDE 0x1000
-#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
-				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
-#define DLB2_LDB_PP_BASE       0x2100000
-#define DLB2_LDB_PP_STRIDE     0x1000
-#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
-				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
-#define DLB2_DIR_PP_BASE       0x2000000
-#define DLB2_DIR_PP_STRIDE     0x1000
-#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * \
-				DLB2_MAX_NUM_DIR_PORTS_V2_5)
-#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
-
-struct dlb2_resource_id {
-	u32 phys_id;
-	u32 virt_id;
-	u8 vdev_owned;
-	u8 vdev_id;
-};
-
-struct dlb2_freelist {
-	u32 base;
-	u32 bound;
-	u32 offset;
-};
-
-static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
-{
-	return list->bound - list->base - list->offset;
-}
-
-struct dlb2_hcw {
-	u64 data;
-	/* Word 3 */
-	u16 opaque;
-	u8 qid;
-	u8 sched_type:2;
-	u8 priority:3;
-	u8 msg_type:3;
-	/* Word 4 */
-	u16 lock_id;
-	u8 ts_flag:1;
-	u8 rsvd1:2;
-	u8 no_dec:1;
-	u8 cmp_id:4;
-	u8 cq_token:1;
-	u8 qe_comp:1;
-	u8 qe_frag:1;
-	u8 qe_valid:1;
-	u8 int_arm:1;
-	u8 error:1;
-	u8 rsvd:2;
-};
-
-struct dlb2_ldb_queue {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 num_qid_inflights;
-	u32 aqed_limit;
-	u32 sn_group; /* sn == sequence number */
-	u32 sn_slot;
-	u32 num_mappings;
-	u8 sn_cfg_valid;
-	u8 num_pending_additions;
-	u8 owned;
-	u8 configured;
-};
-
-/*
- * Directed ports and queues are paired by nature, so the driver tracks them
- * with a single data structure.
- */
-struct dlb2_dir_pq_pair {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 queue_configured;
-	u8 port_configured;
-	u8 owned;
-	u8 enabled;
-};
-
-enum dlb2_qid_map_state {
-	/* The slot does not contain a valid queue mapping */
-	DLB2_QUEUE_UNMAPPED,
-	/* The slot contains a valid queue mapping */
-	DLB2_QUEUE_MAPPED,
-	/* The driver is mapping a queue into this slot */
-	DLB2_QUEUE_MAP_IN_PROG,
-	/* The driver is unmapping a queue from this slot */
-	DLB2_QUEUE_UNMAP_IN_PROG,
-	/*
-	 * The driver is unmapping a queue from this slot, and once complete
-	 * will replace it with another mapping.
-	 */
-	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
-};
-
-struct dlb2_ldb_port_qid_map {
-	enum dlb2_qid_map_state state;
-	u16 qid;
-	u16 pending_qid;
-	u8 priority;
-	u8 pending_priority;
-};
-
-struct dlb2_ldb_port {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	/* The qid_map represents the hardware QID mapping state. */
-	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_limit;
-	u32 ref_cnt;
-	u8 cq_depth;
-	u8 init_tkn_cnt;
-	u8 num_pending_removals;
-	u8 num_mappings;
-	u8 owned;
-	u8 enabled;
-	u8 configured;
-};
-
-struct dlb2_sn_group {
-	u32 mode;
-	u32 sequence_numbers_per_queue;
-	u32 slot_use_bitmap;
-	u32 id;
-};
-
-static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
-{
-	const u32 mask[] = {
-		0x0000ffff,  /* 64 SNs per queue */
-		0x000000ff,  /* 128 SNs per queue */
-		0x0000000f,  /* 256 SNs per queue */
-		0x00000003,  /* 512 SNs per queue */
-		0x00000001}; /* 1024 SNs per queue */
-
-	return group->slot_use_bitmap == mask[group->mode];
-}
-
-static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
-{
-	const u32 bound[] = {16, 8, 4, 2, 1};
-	u32 i;
-
-	for (i = 0; i < bound[group->mode]; i++) {
-		if (!(group->slot_use_bitmap & (1 << i))) {
-			group->slot_use_bitmap |= 1 << i;
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static inline void
-dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
-{
-	group->slot_use_bitmap &= ~(1 << slot);
-}
-
-static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
-{
-	int i, cnt = 0;
-
-	for (i = 0; i < 32; i++)
-		cnt += !!(group->slot_use_bitmap & (1 << i));
-
-	return cnt;
-}
-
-struct dlb2_hw_domain {
-	struct dlb2_function_resources *parent_func;
-	struct dlb2_list_entry func_list;
-	struct dlb2_list_head used_ldb_queues;
-	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head used_dir_pq_pairs;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	u32 total_hist_list_entries;
-	u32 avail_hist_list_entries;
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_offset;
-	union {
-		struct {
-			u32 num_ldb_credits;
-			u32 num_dir_credits;
-		};
-		struct {
-			u32 num_credits;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u32 num_used_aqed_entries;
-	struct dlb2_resource_id id;
-	int num_pending_removals;
-	int num_pending_additions;
-	u8 configured;
-	u8 started;
-};
-
-struct dlb2_bitmap;
-
-struct dlb2_function_resources {
-	struct dlb2_list_head avail_domains;
-	struct dlb2_list_head used_domains;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	struct dlb2_bitmap *avail_hist_list_entries;
-	u32 num_avail_domains;
-	u32 num_avail_ldb_queues;
-	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	u32 num_avail_dir_pq_pairs;
-	union {
-		struct {
-			u32 num_avail_qed_entries;
-			u32 num_avail_dqed_entries;
-		};
-		struct {
-			u32 num_avail_entries;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u8 locked; /* (VDEV only) */
-};
-
-/*
- * After initialization, each resource in dlb2_hw_resources is located in one
- * of the following lists:
- * -- The PF's available resources list. These are unconfigured resources owned
- *	by the PF and not allocated to a dlb2 scheduling domain.
- * -- A VDEV's available resources list. These are VDEV-owned unconfigured
- *	resources not allocated to a dlb2 scheduling domain.
- * -- A domain's available resources list. These are domain-owned unconfigured
- *	resources.
- * -- A domain's used resources list. These are domain-owned configured
- *	resources.
- *
- * A resource moves to a new list when a VDEV or domain is created or destroyed,
- * or when the resource is configured.
- */
-struct dlb2_hw_resources {
-	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
-	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
-	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
-};
-
-struct dlb2_mbox {
-	u32 *mbox;
-	u32 *isr_in_progress;
-};
-
-struct dlb2_sw_mbox {
-	struct dlb2_mbox vdev_to_pf;
-	struct dlb2_mbox pf_to_vdev;
-	void (*pf_to_vdev_inject)(void *arg);
-	void *pf_to_vdev_inject_arg;
-};
-
-struct dlb2_hw {
-	uint8_t ver;
-
-	/* BAR 0 address */
-	void *csr_kva;
-	unsigned long csr_phys_addr;
-	/* BAR 2 address */
-	void *func_kva;
-	unsigned long func_phys_addr;
-
-	/* Resource tracking */
-	struct dlb2_hw_resources rsrcs;
-	struct dlb2_function_resources pf;
-	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
-	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
-	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
-
-	/* Virtualization */
-	int virt_mode;
-	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
-	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
-};
-
-#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2f66b2c71..54b0207db 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,11 +2,9 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types_new.h"
+#include "dlb2_hw_types.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index bac07f097..1f6ccf8e4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,10 +13,8 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "base/dlb2_regs_new.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 892298d7a..9eeda482a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,11 +12,7 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
-#ifdef DLB2_USE_NEW_HEADERS
-#include "base/dlb2_hw_types_new.h"
-#else
 #include "base/dlb2_hw_types.h"
-#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 880964a29..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,11 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_osdep.h"
 #include "base/dlb2_resource.h"
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 23/27] event/dlb2: use new combined register map
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (21 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 22/27] event/dlb2: use new implementation of HW types header Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 24/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
                       ` (3 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

All references to the old register map have been removed, so it is
safe to rename the new combined file that supports both DLB v2.0 and
DLB v2.5. All places where this file is included are updated
accordingly.
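
A small illustrative sketch (not from the patch) of how the combined
map handles a register whose offset moved between DLB v2.0 and v2.5;
the dlb2_hw_ver enum below is a stand-in for the driver's real version
enum, while the offsets match the DLB2_SYS_TOTAL_VAS defines in this
diff:

  #include <stdio.h>

  enum dlb2_hw_ver { DLB2_HW_V2, DLB2_HW_V2_5 };

  #define DLB2_V2SYS_TOTAL_VAS   0x1000011c
  #define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
  #define DLB2_SYS_TOTAL_VAS(ver) \
	((ver) == DLB2_HW_V2 ? DLB2_V2SYS_TOTAL_VAS : DLB2_V2_5SYS_TOTAL_VAS)

  int main(void)
  {
	enum dlb2_hw_ver ver = DLB2_HW_V2_5;

	/* One call site covers both devices; the macro picks the offset. */
	printf("SYS_TOTAL_VAS offset: 0x%x\n",
	       (unsigned int)DLB2_SYS_TOTAL_VAS(ver));

	return 0;
  }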

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |    2 +-
 drivers/event/dlb2/pf/base/dlb2_regs.h     | 5955 +++++++++++++-------
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 --------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |    2 +-
 drivers/event/dlb2/pf/dlb2_main.c          |    2 +-
 5 files changed, 3869 insertions(+), 6396 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 4a6037775..6b8fee341 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -10,7 +10,7 @@
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 
 #define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
 				 | (((val) << (mask##_LOC)) & (mask)))
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
index 43ecad4f8..7167f3d2f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_regs.h
+++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
@@ -7,553 +7,550 @@
 
 #include "dlb2_osdep_types.h"
 
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
 	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
 	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
 	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_flr_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
 	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
-union dlb2_func_pf_vf2pf_isr_pend {
-	struct {
-		u32 isr_pend : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
 	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
 	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
 	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-union dlb2_func_pf_vf_reset_in_progress {
-	struct {
-		u32 vf0_reset_in_progress : 1;
-		u32 vf1_reset_in_progress : 1;
-		u32 vf2_reset_in_progress : 1;
-		u32 vf3_reset_in_progress : 1;
-		u32 vf4_reset_in_progress : 1;
-		u32 vf5_reset_in_progress : 1;
-		u32 vf6_reset_in_progress : 1;
-		u32 vf7_reset_in_progress : 1;
-		u32 vf8_reset_in_progress : 1;
-		u32 vf9_reset_in_progress : 1;
-		u32 vf10_reset_in_progress : 1;
-		u32 vf11_reset_in_progress : 1;
-		u32 vf12_reset_in_progress : 1;
-		u32 vf13_reset_in_progress : 1;
-		u32 vf14_reset_in_progress : 1;
-		u32 vf15_reset_in_progress : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
 	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
-union dlb2_msix_mem_vector_ctrl {
-	struct {
-		u32 vec_mask : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
 
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
 	(0x20 + (x) * 0x4)
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-union dlb2_iosf_func_vf_bar_dsbl {
-	struct {
-		u32 func_vf_bar_dis : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_VAS 0x1000011c
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
 #define DLB2_SYS_TOTAL_VAS_RST 0x20
-union dlb2_sys_total_vas {
-	struct {
-		u32 total_vas : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
-#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
-union dlb2_sys_total_dir_ports {
-	struct {
-		u32 total_dir_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
-#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
-union dlb2_sys_total_ldb_ports {
-	struct {
-		u32 total_ldb_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
-#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
-union dlb2_sys_total_dir_qid {
-	struct {
-		u32 total_dir_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
-#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
-union dlb2_sys_total_ldb_qid {
-	struct {
-		u32 total_ldb_qid : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
 
 #define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
 #define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-union dlb2_sys_total_dir_crds {
-	struct {
-		u32 total_dir_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
 
 #define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
 #define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-union dlb2_sys_total_ldb_crds {
-	struct {
-		u32 total_ldb_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
 
 #define DLB2_SYS_ALARM_PF_SYND2 0x10000508
 #define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-union dlb2_sys_alarm_pf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 meas : 1;
-		u32 debug : 7;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 cq_int_rearm : 1;
-		u32 dsi_error : 1;
-		u32 rsvd0 : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
 
 #define DLB2_SYS_ALARM_PF_SYND1 0x10000504
 #define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-union dlb2_sys_alarm_pf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_PF_SYND0 0x10000500
 #define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-union dlb2_sys_alarm_pf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 rsvd0 : 3;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
 
 #define DLB2_SYS_VF_LDB_VPP_V(x) \
 	(0x10000f00 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-union dlb2_sys_vf_ldb_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VPP2PP(x) \
 	(0x10000f04 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-union dlb2_sys_vf_ldb_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_DIR_VPP_V(x) \
 	(0x10000f08 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-union dlb2_sys_vf_dir_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VPP2PP(x) \
 	(0x10000f0c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-union dlb2_sys_vf_dir_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_LDB_VQID_V(x) \
 	(0x10000f10 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-union dlb2_sys_vf_ldb_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VQID2QID(x) \
 	(0x10000f14 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-union dlb2_sys_vf_ldb_vqid2qid {
-	struct {
-		u32 qid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_QID2VQID(x) \
 	(0x10000f18 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID2VQID_RST 0x0
-union dlb2_sys_ldb_qid2vqid {
-	struct {
-		u32 vqid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
 
 #define DLB2_SYS_VF_DIR_VQID_V(x) \
 	(0x10000f1c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-union dlb2_sys_vf_dir_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VQID2QID(x) \
 	(0x10000f20 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-union dlb2_sys_vf_dir_vqid2qid {
-	struct {
-		u32 qid : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_VASQID_V(x) \
 	(0x10000f24 + (x) * 0x1000)
 #define DLB2_SYS_LDB_VASQID_V_RST 0x0
-union dlb2_sys_ldb_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_VASQID_V(x) \
 	(0x10000f28 + (x) * 0x1000)
 #define DLB2_SYS_DIR_VASQID_V_RST 0x0
-union dlb2_sys_dir_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_ALARM_VF_SYND2(x) \
 	(0x10000f48 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-union dlb2_sys_alarm_vf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 debug : 8;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 isz : 1;
-		u32 dsi_error : 1;
-		u32 dlbrsvd : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
 
 #define DLB2_SYS_ALARM_VF_SYND1(x) \
 	(0x10000f44 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-union dlb2_sys_alarm_vf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_VF_SYND0(x) \
 	(0x10000f40 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-union dlb2_sys_alarm_vf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 vf_synd0_parity : 1;
-		u32 vf_synd1_parity : 1;
-		u32 vf_synd2_parity : 1;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
 
 #define DLB2_SYS_LDB_QID_CFG_V(x) \
 	(0x10000f58 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-union dlb2_sys_ldb_qid_cfg_v {
-	struct {
-		u32 sn_cfg_v : 1;
-		u32 fid_cfg_v : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_QID_ITS(x) \
 	(0x10000f54 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_ITS_RST 0x0
-union dlb2_sys_ldb_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_QID_V(x) \
 	(0x10000f50 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_V_RST 0x0
-union dlb2_sys_ldb_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_ITS(x) \
 	(0x10000f64 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_ITS_RST 0x0
-union dlb2_sys_dir_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_V(x) \
 	(0x10000f60 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_V_RST 0x0
-union dlb2_sys_dir_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_CQ_AI_DATA(x) \
 	(0x10000fa8 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-union dlb2_sys_ldb_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
 	(0x10000fa4 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_ldb_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_PASID(x) \
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
 	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
 #define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-union dlb2_sys_ldb_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_LDB_CQ_AT(x) \
 	(0x10000f9c + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AT_RST 0x0
-union dlb2_sys_ldb_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_CQ_ISR(x) \
 	(0x10000f98 + (x) * 0x1000)
@@ -563,497 +560,891 @@ union dlb2_sys_ldb_cq_at {
 #define DLB2_CQ_ISR_MODE_MSI  1
 #define DLB2_CQ_ISR_MODE_MSIX 2
 #define DLB2_CQ_ISR_MODE_ADI  3
-union dlb2_sys_ldb_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
 	(0x10000f94 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_ldb_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_PP_V(x) \
 	(0x10000f90 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP_V_RST 0x0
-union dlb2_sys_ldb_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_PP2VDEV(x) \
 	(0x10000f8c + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-union dlb2_sys_ldb_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_LDB_PP2VAS(x) \
 	(0x10000f88 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VAS_RST 0x0
-union dlb2_sys_ldb_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_CQ_ADDR_U(x) \
 	(0x10000f84 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-union dlb2_sys_ldb_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_LDB_CQ_ADDR_L(x) \
 	(0x10000f80 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-union dlb2_sys_ldb_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
 
 #define DLB2_SYS_DIR_CQ_FMT(x) \
 	(0x10000fec + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-union dlb2_sys_dir_cq_fmt {
-	struct {
-		u32 keep_pf_ppid : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_CQ_AI_DATA(x) \
 	(0x10000fe8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-union dlb2_sys_dir_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
 	(0x10000fe4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_dir_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_PASID(x) \
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
 	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
 #define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-union dlb2_sys_dir_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_DIR_CQ_AT(x) \
 	(0x10000fdc + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AT_RST 0x0
-union dlb2_sys_dir_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_DIR_CQ_ISR(x) \
 	(0x10000fd8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-union dlb2_sys_dir_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
 	(0x10000fd4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_dir_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_DIR_PP_V(x) \
 	(0x10000fd0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP_V_RST 0x0
-union dlb2_sys_dir_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_PP2VDEV(x) \
 	(0x10000fcc + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-union dlb2_sys_dir_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_DIR_PP2VAS(x) \
 	(0x10000fc8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VAS_RST 0x0
-union dlb2_sys_dir_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_DIR_CQ_ADDR_U(x) \
 	(0x10000fc4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-union dlb2_sys_dir_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_DIR_CQ_ADDR_L(x) \
 	(0x10000fc0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-union dlb2_sys_dir_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
 
 #define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
 #define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-union dlb2_sys_ingress_alarm_enbl {
-	struct {
-		u32 illegal_hcw : 1;
-		u32 illegal_pp : 1;
-		u32 illegal_pasid : 1;
-		u32 illegal_qid : 1;
-		u32 disabled_qid : 1;
-		u32 illegal_ldb_qid_cfg : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
 
 #define DLB2_SYS_MSIX_ACK 0x10000400
 #define DLB2_SYS_MSIX_ACK_RST 0x0
-union dlb2_sys_msix_ack {
-	struct {
-		u32 msix_0_ack : 1;
-		u32 msix_1_ack : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_PASSTHRU 0x10000404
 #define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-union dlb2_sys_msix_passthru {
-	struct {
-		u32 msix_0_passthru : 1;
-		u32 msix_1_passthru : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_MODE 0x10000408
 #define DLB2_SYS_MSIX_MODE_RST 0x0
 /* MSI-X Modes */
 #define DLB2_MSIX_MODE_PACKED     0
 #define DLB2_MSIX_MODE_COMPRESSED 1
-union dlb2_sys_msix_mode {
-	struct {
-		u32 mode : 1;
-		u32 poll_mode : 1;
-		u32 poll_mask : 1;
-		u32 poll_lock : 1;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
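
For registers such as DLB2_SYS_MSIX_MODE, whose offset is shared but whose field layout differs between the two devices, the field defines carry a _V2 or _V2_5 suffix and the caller picks the set matching the probed hardware. A hedged sketch; the helper name below is illustrative, not part of the patch:

static inline u32 dlb2_msix_mode_rsvd_mask(int ver)
{
	/* Choose the reserved-bits mask for the detected device revision. */
	return (ver == DLB2_HW_V2) ? DLB2_SYS_MSIX_MODE_RSVD0_V2 :
				     DLB2_SYS_MSIX_MODE_RSVD0_V2_5;
}
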
 
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
 #define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-union dlb2_sys_dir_cq_opt_clr {
-	struct {
-		u32 cq : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
 
 #define DLB2_SYS_ALARM_HW_SYND 0x1000050c
 #define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-union dlb2_sys_alarm_hw_synd {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 alarm : 1;
-		u32 cwd : 1;
-		u32 vf_pf_mb : 1;
-		u32 rsvd0 : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
 	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
-union dlb2_aqed_pipe_qid_fid_lim {
-	struct {
-		u32 qid_fid_limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
 	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
-union dlb2_aqed_pipe_qid_hid_width {
-	struct {
-		u32 compress_code : 3;
-		u32 rsvd0 : 29;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_ATM_QID2CQIDIX_00(x) \
 	(0x30080000 + (x) * 0x1000)
@@ -1061,1467 +1452,2853 @@ union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
 #define DLB2_ATM_QID2CQIDIX(x, y) \
 	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
 #define DLB2_ATM_QID2CQIDIX_NUM 16
-union dlb2_atm_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_rdy_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_sched_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
 	(0x40000000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_dir_vas_crd {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
 
 #define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
 	(0x40080000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_ldb_vas_crd {
-	struct {
-		u32 count : 15;
-		u32 rsvd0 : 17;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN(x) \
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
 	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
 #define DLB2_CHP_ORD_QID_SN_RST 0x0
-union dlb2_chp_ord_qid_sn {
-	struct {
-		u32 sn : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN_MAP(x) \
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
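
CHP registers whose offsets moved between v2.0 and v2.5 get per-version address macros plus a (ver, x) selector, so callers never hard-code either offset. A sketch of the intended usage under that assumption; DLB2_CSR_WR() stands in for whatever MMIO write helper the osdep layer supplies:

static void dlb2_set_ord_qid_sn(struct dlb2_hw *hw, int ver, int qid, u32 sn)
{
	u32 reg = 0;

	/* Program the 10-bit sequence number through its mask/LOC pair. */
	reg |= (sn << DLB2_CHP_ORD_QID_SN_SN_LOC) & DLB2_CHP_ORD_QID_SN_SN;

	/* The selector expands to the v2.0 or v2.5 offset based on 'ver'. */
	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN(ver, qid), reg);
}
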
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
 	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
 #define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-union dlb2_chp_ord_qid_sn_map {
-	struct {
-		u32 mode : 3;
-		u32 slot : 4;
-		u32 rsvz0 : 1;
-		u32 grp : 1;
-		u32 rsvz1 : 1;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_SN_CHK_ENBL(x) \
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
 	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
 #define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-union dlb2_chp_sn_chk_enbl {
-	struct {
-		u32 en : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_DEPTH(x) \
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
 	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
 #define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-union dlb2_chp_dir_cq_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
 	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_dir_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
 	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
 #define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-union dlb2_chp_dir_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
 	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
 #define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_dir_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
 	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_dir_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
 	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
 #define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-union dlb2_chp_dir_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WPTR(x) \
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
 	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
 #define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-union dlb2_chp_dir_cq_wptr {
-	struct {
-		u32 write_pointer : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ2VAS(x) \
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
 	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
 #define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-union dlb2_chp_dir_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_BASE(x) \
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
 	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
 #define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-union dlb2_chp_hist_list_base {
-	struct {
-		u32 base : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_LIM(x) \
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
 	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
 #define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-union dlb2_chp_hist_list_lim {
-	struct {
-		u32 limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
 	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
 #define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-union dlb2_chp_hist_list_pop_ptr {
-	struct {
-		u32 pop_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
 	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
 #define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-union dlb2_chp_hist_list_push_ptr {
-	struct {
-		u32 push_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_DEPTH(x) \
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
 	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
 #define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-union dlb2_chp_ldb_cq_depth {
-	struct {
-		u32 depth : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
 	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_ldb_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
 	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
 #define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-union dlb2_chp_ldb_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
 	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
 #define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_ldb_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
 	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_ldb_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
 	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
 #define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-union dlb2_chp_ldb_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WPTR(x) \
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
 	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
 #define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-union dlb2_chp_ldb_cq_wptr {
-	struct {
-		u32 write_pointer : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ2VAS(x) \
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
 	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
 #define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-union dlb2_chp_ldb_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
 
 #define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
 #define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-union dlb2_chp_cfg_chp_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 dlb_cor_alarm_enable : 1;
-		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
-		u32 cfg_64bytes_qe_dir_cq_mode : 1;
-		u32 pad_write_ldb : 1;
-		u32 pad_write_dir : 1;
-		u32 pad_first_write_ldb : 1;
-		u32 pad_first_write_dir : 1;
-		u32 rsvz0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_dir_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_dir_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_dir_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
 #define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-union dlb2_chp_cfg_dir_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
 #define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-union dlb2_chp_cfg_dir_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_dir_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
 #define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_dir_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_ldb_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
 #define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
 #define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_ldb_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
 #define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_ldb_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
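+/*
+ * CHP SMON registers. Unlike the per-CQ CHP registers above, the SMON
+ * register groups below are defined with a single address and no
+ * per-version selector macro.
+ */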
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CTRL_DIAG_02 0x4c000028
 #define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-union dlb2_chp_ctrl_diag_02 {
-	struct {
-		u32 egress_credit_status_empty : 1;
-		u32 egress_credit_status_afull : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
-		u32 chp_lsp_tok_pipe_credit_status_empty : 1;
-		u32 chp_lsp_tok_pipe_credit_status_afull : 1;
-		u32 chp_rop_pipe_credit_status_empty : 1;
-		u32 chp_rop_pipe_credit_status_afull : 1;
-		u32 qed_to_cq_pipe_credit_status_empty : 1;
-		u32 qed_to_cq_pipe_credit_status_afull : 1;
-		u32 egress_lsp_token_credit_status_empty : 1;
-		u32 egress_lsp_token_credit_status_afull : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
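+/*
+ * The CHP_CTRL_DIAG_02 field layout differs between v2 and v2.5: bits
+ * 14 and above are reserved on v2 but hold the LSP token QB status size
+ * and the freelist size on v2.5. Its field masks and _LOC values are
+ * therefore defined per version, with _V2 and _V2_5 suffixes, rather
+ * than shared.
+ */
+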
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
 
 #define DLB2_DP_DIR_CSR_CTRL 0x54000010
 #define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-union dlb2_dp_dir_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 rsvz0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
 	(0x96000000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_0_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
 	(0x96010000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_1_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
-#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
-union dlb2_ro_pipe_grp_sn_mode {
-	struct {
-		u32 sn_mode_0 : 3;
-		u32 rszv0 : 5;
-		u32 sn_mode_1 : 3;
-		u32 rszv1 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_ro_pipe_cfg_ctrl_general_0 {
-	struct {
-		u32 unit_single_step_mode : 1;
-		u32 rr_en : 1;
-		u32 rszv0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2PRIOV(x) \
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
 	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
 #define DLB2_LSP_CQ2PRIOV_RST 0x0
-union dlb2_lsp_cq2priov {
-	struct {
-		u32 prio : 24;
-		u32 v : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID0(x) \
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
+#define DLB2_V2LSP_CQ2QID0(x) \
 	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
 #define DLB2_LSP_CQ2QID0_RST 0x0
-union dlb2_lsp_cq2qid0 {
-	struct {
-		u32 qid_p0 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p1 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p2 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p3 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID1(x) \
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
 	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
 #define DLB2_LSP_CQ2QID1_RST 0x0
-union dlb2_lsp_cq2qid1 {
-	struct {
-		u32 qid_p4 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p5 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p6 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p7 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_DSBL(x) \
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
 	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
 #define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-union dlb2_lsp_cq_dir_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
 	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
 #define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_dir_tkn_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
 	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
 #define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
-	struct {
-		u32 token_depth_select : 4;
-		u32 disable_wb_opt : 1;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
 	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
 	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_DSBL(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
 	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
 #define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-union dlb2_lsp_cq_ldb_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
 	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
 #define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
 	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
 #define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_cq_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
 	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
 #define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_cnt {
-	struct {
-		u32 token_count : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
 	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
 #define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
 	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
 	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
 	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
 #define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_dir_max_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
 	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
 	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
 	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_dir_enqueue_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
 	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_dir_depth_thrsh {
-	struct {
-		u32 thresh : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
 	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-union dlb2_lsp_qid_aqed_active_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
 	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-union dlb2_lsp_qid_aqed_active_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
 	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
 	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
-	(0xa0c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_atq_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
 	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
 	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
 #define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
 	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
 #define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_qid_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX_00(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
 	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
 #define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX_NUM 16
-union dlb2_lsp_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX2_00(x) \
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
 	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
 #define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX2_NUM 16
-union dlb2_lsp_qid2cqidix2_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
-	(0xa1e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_replay_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
 	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
 #define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_naldb_max_depth {
-	struct {
-		u32 depth : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
 	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
 	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
 	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_atm_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
 	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_naldb_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_ACTIVE(x) \
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
 	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
 #define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-union dlb2_lsp_qid_atm_active {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
 #define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-union dlb2_lsp_ldb_sched_ctrl {
-	struct {
-		u32 cq : 8;
-		u32 qidix : 3;
-		u32 value : 1;
-		u32 nalb_haswork_v : 1;
-		u32 rlist_haswork_v : 1;
-		u32 slist_haswork_v : 1;
-		u32 inflight_ok_v : 1;
-		u32 aqed_nfull_v : 1;
-		u32 rsvz0 : 15;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
 #define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-union dlb2_lsp_dir_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
 #define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-union dlb2_lsp_dir_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
 #define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
 #define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
 #define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-union dlb2_lsp_cfg_shdw_ctrl {
-	struct {
-		u32 transfer : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
 	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
 #define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-union dlb2_lsp_cfg_shdw_range_cos {
-	struct {
-		u32 bw_range : 9;
-		u32 rsvz0 : 22;
-		u32 no_extra_credit : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
 #define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_lsp_cfg_ctrl_general_0 {
-	struct {
-		u32 disab_atq_empty_arb : 1;
-		u32 inc_tok_unit_idle : 1;
-		u32 disab_rlist_pri : 1;
-		u32 inc_cmp_unit_idle : 1;
-		u32 rsvz0 : 2;
-		u32 dir_single_op : 1;
-		u32 dir_half_bw : 1;
-		u32 dir_single_out : 1;
-		u32 dir_disab_multi : 1;
-		u32 atq_single_op : 1;
-		u32 atq_half_bw : 1;
-		u32 atq_single_out : 1;
-		u32 atq_disab_multi : 1;
-		u32 dirrpl_single_op : 1;
-		u32 dirrpl_half_bw : 1;
-		u32 dirrpl_single_out : 1;
-		u32 lbrpl_single_op : 1;
-		u32 lbrpl_half_bw : 1;
-		u32 lbrpl_single_out : 1;
-		u32 ldb_single_op : 1;
-		u32 ldb_half_bw : 1;
-		u32 ldb_disab_multi : 1;
-		u32 atm_single_sch : 1;
-		u32 atm_single_cmp : 1;
-		u32 ldb_ce_tog_arb : 1;
-		u32 rsvz1 : 1;
-		u32 smon0_valid_sel : 2;
-		u32 smon0_value_sel : 1;
-		u32 smon0_compare_sel : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
-#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
-union dlb2_cfg_mstr_diag_reset_sts {
-	struct {
-		u32 chp_pf_reset_done : 1;
-		u32 rop_pf_reset_done : 1;
-		u32 lsp_pf_reset_done : 1;
-		u32 nalb_pf_reset_done : 1;
-		u32 ap_pf_reset_done : 1;
-		u32 dp_pf_reset_done : 1;
-		u32 qed_pf_reset_done : 1;
-		u32 dqed_pf_reset_done : 1;
-		u32 aqed_pf_reset_done : 1;
-		u32 sys_pf_reset_done : 1;
-		u32 pf_reset_active : 1;
-		u32 flrsm_state : 7;
-		u32 rsvd0 : 13;
-		u32 dlb_proc_reset_done : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
-	struct {
-		u32 chp_pipeidle : 1;
-		u32 rop_pipeidle : 1;
-		u32 lsp_pipeidle : 1;
-		u32 nalb_pipeidle : 1;
-		u32 ap_pipeidle : 1;
-		u32 dp_pipeidle : 1;
-		u32 qed_pipeidle : 1;
-		u32 dqed_pipeidle : 1;
-		u32 aqed_pipeidle : 1;
-		u32 sys_pipeidle : 1;
-		u32 chp_unit_idle : 1;
-		u32 rop_unit_idle : 1;
-		u32 lsp_unit_idle : 1;
-		u32 nalb_unit_idle : 1;
-		u32 ap_unit_idle : 1;
-		u32 dp_unit_idle : 1;
-		u32 qed_unit_idle : 1;
-		u32 dqed_unit_idle : 1;
-		u32 aqed_unit_idle : 1;
-		u32 sys_unit_idle : 1;
-		u32 rsvd1 : 4;
-		u32 mstr_cfg_ring_idle : 1;
-		u32 mstr_cfg_mstr_idle : 1;
-		u32 mstr_flr_clkreq_b : 1;
-		u32 mstr_proc_idle : 1;
-		u32 mstr_proc_idle_masked : 1;
-		u32 rsvd0 : 2;
-		u32 dlb_func_idle : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
-#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
-union dlb2_cfg_mstr_cfg_pm_status {
-	struct {
-		u32 prochot : 1;
-		u32 pgcb_dlb_idle : 1;
-		u32 pgcb_dlb_pg_rdy_ack_b : 1;
-		u32 pmsm_pgcb_req_b : 1;
-		u32 pgbc_pmc_pg_req_b : 1;
-		u32 pmc_pgcb_pg_ack_b : 1;
-		u32 pmc_pgcb_fet_en_b : 1;
-		u32 pgcb_fet_en_b : 1;
-		u32 rsvz0 : 1;
-		u32 rsvz1 : 1;
-		u32 fuse_force_on : 1;
-		u32 fuse_proc_disable : 1;
-		u32 rsvz2 : 1;
-		u32 rsvz3 : 1;
-		u32 pm_fsm_d0tod3_ok : 1;
-		u32 pm_fsm_d3tod0_ok : 1;
-		u32 dlb_in_d3 : 1;
-		u32 rsvz4 : 7;
-		u32 pmsm : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
-union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
-	struct {
-		u32 disable : 1;
-		u32 rsvz0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
 	(0x1000 + (x) * 0x4)
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_vf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
-union dlb2_func_vf_vf2pf_mailbox_isr {
-	struct {
-		u32 isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
 	(0x2000 + (x) * 0x4)
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox_isr {
-	struct {
-		u32 pf_isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
-union dlb2_func_vf_vf_msi_isr_pend {
-	struct {
-		u32 isr_pend : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
-union dlb2_func_vf_vf_reset_in_progress {
-	struct {
-		u32 reset_in_progress : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
-#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
-union dlb2_func_vf_vf_msi_isr {
-	struct {
-		u32 vf_msi_isr : 32;
-	} field;
-	u32 val;
-};
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
 
 #endif /* __DLB2_REGS_H */
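
The hunk above also shows the accessor change behind the rewrite of this
header: the removed lines modelled each register as a union of C bitfields
(e.g. dlb2_func_vf_vf2pf_mailbox_isr), while the replacement describes the
same registers as a raw u32 plus mask/_LOC pairs. A minimal before/after
sketch, where dlb2_read_csr() and handle_vf_mbox_irq() are hypothetical
stand-ins rather than functions from this series:

	/* Old style: read into the bitfield union, test the field. */
	union dlb2_func_vf_vf2pf_mailbox_isr r;

	r.val = dlb2_read_csr(hw, DLB2_FUNC_VF_VF2PF_MAILBOX_ISR);
	if (r.field.isr)
		handle_vf_mbox_irq(hw);

	/* New style: read the raw value, test it against the flat mask. */
	u32 isr = dlb2_read_csr(hw, DLB2_VF_VF2PF_MAILBOX_ISR);

	if (isr & DLB2_VF_VF2PF_MAILBOX_ISR_ISR)
		handle_vf_mbox_irq(hw);
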
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
deleted file mode 100644
index 26c3e7f4a..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_regs_new.h
+++ /dev/null
@@ -1,4304 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_REGS_NEW_H
-#define __DLB2_REGS_NEW_H
-
-#include "dlb2_osdep_types.h"
-
-#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
-	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
-	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
-	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
-
-#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
-	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
-
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
-
-#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
-	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
-	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
-	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
-
-#define DLB2_MSIX_VECTOR_CTRL(x) \
-	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
-
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
-	(0x20 + (x) * 0x4)
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
-
-#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
-#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
-#define DLB2_SYS_TOTAL_VAS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_TOTAL_VAS : \
-	 DLB2_V2_5SYS_TOTAL_VAS)
-#define DLB2_SYS_TOTAL_VAS_RST 0x20
-
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
-
-#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
-#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
-
-#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
-#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
-
-#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
-#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
-
-#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
-#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
-#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
-#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
-#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
-#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
-#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
-#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
-#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
-
-#define DLB2_SYS_VF_LDB_VPP_V(x) \
-	(0x10000f00 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VPP2PP(x) \
-	(0x10000f04 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_DIR_VPP_V(x) \
-	(0x10000f08 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VPP2PP(x) \
-	(0x10000f0c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_LDB_VQID_V(x) \
-	(0x10000f10 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VQID2QID(x) \
-	(0x10000f14 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_QID2VQID(x) \
-	(0x10000f18 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID2VQID_RST 0x0
-
-#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
-#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
-#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
-
-#define DLB2_SYS_VF_DIR_VQID_V(x) \
-	(0x10000f1c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VQID2QID(x) \
-	(0x10000f20 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_VASQID_V(x) \
-	(0x10000f24 + (x) * 0x1000)
-#define DLB2_SYS_LDB_VASQID_V_RST 0x0
-
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_VASQID_V(x) \
-	(0x10000f28 + (x) * 0x1000)
-#define DLB2_SYS_DIR_VASQID_V_RST 0x0
-
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_ALARM_VF_SYND2(x) \
-	(0x10000f48 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
-
-#define DLB2_SYS_ALARM_VF_SYND1(x) \
-	(0x10000f44 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_VF_SYND0(x) \
-	(0x10000f40 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
-#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
-#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
-#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
-#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
-#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
-#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
-#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
-
-#define DLB2_SYS_LDB_QID_CFG_V(x) \
-	(0x10000f58 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_QID_ITS(x) \
-	(0x10000f54 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_ITS_RST 0x0
-
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_QID_V(x) \
-	(0x10000f50 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
-#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
-#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_ITS(x) \
-	(0x10000f64 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_ITS_RST 0x0
-
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_V(x) \
-	(0x10000f60 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_V_RST 0x0
-
-#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
-#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
-#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
-	(0x10000fa8 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_LDB_CQ_PASID(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
-#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_LDB_CQ_AT(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AT_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_CQ_ISR(x) \
-	(0x10000f98 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
-/* CQ Interrupt Modes */
-#define DLB2_CQ_ISR_MODE_DIS  0
-#define DLB2_CQ_ISR_MODE_MSI  1
-#define DLB2_CQ_ISR_MODE_MSIX 2
-#define DLB2_CQ_ISR_MODE_ADI  3
-
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
-	(0x10000f94 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_PP_V(x) \
-	(0x10000f90 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP_V_RST 0x0
-
-#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
-#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
-#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_PP2VDEV(x) \
-	(0x10000f8c + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_LDB_PP2VAS(x) \
-	(0x10000f88 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VAS_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
-	(0x10000f84 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
-	(0x10000f80 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_DIR_CQ_FMT(x) \
-	(0x10000fec + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
-	(0x10000fe8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_DIR_CQ_PASID(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
-#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_DIR_CQ_AT(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_DIR_CQ_ISR(x) \
-	(0x10000fd8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
-	(0x10000fd4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_DIR_PP_V(x) \
-	(0x10000fd0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP_V_RST 0x0
-
-#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
-#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
-#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_PP2VDEV(x) \
-	(0x10000fcc + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_DIR_PP2VAS(x) \
-	(0x10000fc8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VAS_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
-	(0x10000fc4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
-	(0x10000fc0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
-#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
-#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
-#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_PM_SMON_TMR 0x10003018
-#define DLB2_SYS_PM_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
-#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
-#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_PM_SMON_CFG1 0x10003004
-#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_PM_SMON_CFG0 0x10003000
-#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
-#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_SMON_COMP_MASK1(x) \
-	(0x18002024 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_SMON_COMP_MASK0(x) \
-	(0x18002020 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_SMON_MAX_TMR(x) \
-	(0x1800201c + (x) * 0x40)
-#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_SMON_TMR(x) \
-	(0x18002018 + (x) * 0x40)
-#define DLB2_SYS_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
-	(0x18002014 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
-	(0x18002010 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE1(x) \
-	(0x1800200c + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE0(x) \
-	(0x18002008 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_SMON_CFG1(x) \
-	(0x18002004 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_SMON_CFG0(x) \
-	(0x18002000 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
-#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
-
-#define DLB2_SYS_MSIX_ACK 0x10000400
-#define DLB2_SYS_MSIX_ACK_RST 0x0
-
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
-#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
-#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
-#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_MODE 0x10000408
-#define DLB2_SYS_MSIX_MODE_RST 0x0
-/* MSI-X Modes */
-#define DLB2_MSIX_MODE_PACKED     0
-#define DLB2_MSIX_MODE_COMPRESSED 1
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
-#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
-
-#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
-#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
-#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
-#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
-#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
-#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
-#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
-#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
-#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
-#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
-#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
-#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
-
-#define DLB2_AQED_QID_FID_LIM(x) \
-	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
-
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
-#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
-#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
-
-#define DLB2_AQED_QID_HID_WIDTH(x) \
-	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
-
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE0 0x2c000054
-#define DLB2_AQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE1 0x2c000058
-#define DLB2_AQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_AQED_SMON_CFG0 0x2c00005c
-#define DLB2_AQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_AQED_SMON_CFG1 0x2c000060
-#define DLB2_AQED_SMON_CFG1_RST 0x0
-
-#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
-#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_AQED_SMON_TMR 0x2c000068
-#define DLB2_AQED_SMON_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_ATM_QID2CQIDIX_00(x) \
-	(0x30080000 + (x) * 0x1000)
-#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
-#define DLB2_ATM_QID2CQIDIX(x, y) \
-	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_ATM_QID2CQIDIX_NUM 16
-
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE0 0x3c000058
-#define DLB2_ATM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
-#define DLB2_ATM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_ATM_SMON_CFG0 0x3c000060
-#define DLB2_ATM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_ATM_SMON_CFG1 0x3c000064
-#define DLB2_ATM_SMON_CFG1_RST 0x0
-
-#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
-#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_ATM_SMON_TMR 0x3c00006c
-#define DLB2_ATM_SMON_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_V2CHP_ORD_QID_SN(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN(x))
-#define DLB2_CHP_ORD_QID_SN_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
-#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
-#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
-
-#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
-#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
-
-#define DLB2_V2CHP_SN_CHK_ENBL(x) \
-	(0x40200000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
-	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
-#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-
-#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
-#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
-	(0x40280000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
-#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
-	(0x40400000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
-#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40480000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
-#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
-#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ2VAS(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
-#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_V2CHP_HIST_LIST_BASE(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
-#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_LIM(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
-#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
-#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
-
-#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
-
-#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
-	(0x40a80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
-#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
-
-#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40980000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
-	(0x40a00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
-#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
-	(0x40c00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
-	(0x40d80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
-#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
-	(0x40e00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
-#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ2VAS(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
-	(0x40e80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
-#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
-#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
-#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
-#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
-#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
-#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
-#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
-#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
-#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
-#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
-#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
-#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
-#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
-#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
-#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_CHP_SMON_COMPARE0 0x4c000000
-#define DLB2_CHP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_CHP_SMON_COMPARE1 0x4c000004
-#define DLB2_CHP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_CHP_SMON_CFG0 0x4c000008
-#define DLB2_CHP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_CHP_SMON_CFG1 0x4c00000c
-#define DLB2_CHP_SMON_CFG1_RST 0x0
-
-#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
-#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_CHP_SMON_TMR 0x4c00001c
-#define DLB2_CHP_SMON_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
-#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_DP_DIR_CSR_CTRL 0x54000010
-#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
-#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
-#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DP_SMON_COMPARE0 0x5c000060
-#define DLB2_DP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DP_SMON_COMPARE1 0x5c000064
-#define DLB2_DP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DP_SMON_CFG0 0x5c000068
-#define DLB2_DP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_DP_SMON_CFG1 0x5c00006c
-#define DLB2_DP_SMON_CFG1_RST 0x0
-
-#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DP_SMON_MAX_TMR 0x5c000070
-#define DLB2_DP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DP_SMON_TMR 0x5c000074
-#define DLB2_DP_SMON_TMR_RST 0x0
-
-#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
-#define DLB2_DQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE1 0x6c000030
-#define DLB2_DQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DQED_SMON_CFG0 0x6c000034
-#define DLB2_DQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_DQED_SMON_CFG1 0x6c000038
-#define DLB2_DQED_SMON_CFG1_RST 0x0
-
-#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
-#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DQED_SMON_TMR 0x6c000040
-#define DLB2_DQED_SMON_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
-#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
-#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_QED_SMON_COMPARE0 0x7c00002c
-#define DLB2_QED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_QED_SMON_COMPARE1 0x7c000030
-#define DLB2_QED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_QED_SMON_CFG0 0x7c000034
-#define DLB2_QED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_QED_SMON_CFG1 0x7c000038
-#define DLB2_QED_SMON_CFG1_RST 0x0
-
-#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
-#define DLB2_QED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_QED_SMON_TMR 0x7c000040
-#define DLB2_QED_SMON_TMR_RST 0x0
-
-#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_QED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
-#define DLB2_NALB_SMON_COMPARE0_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE1 0x8c000070
-#define DLB2_NALB_SMON_COMPARE1_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_NALB_SMON_CFG0 0x8c000074
-#define DLB2_NALB_SMON_CFG0_RST 0x40000000
-
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_NALB_SMON_CFG1 0x8c000078
-#define DLB2_NALB_SMON_CFG1_RST 0x0
-
-#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
-#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
-#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
-#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_NALB_SMON_TMR 0x8c000080
-#define DLB2_NALB_SMON_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
-	(0x96000000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
-	(0x86000000 + (x) * 0x4)
-#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
-#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
-	(0x96010000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
-	(0x86010000 + (x) * 0x4)
-#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
-#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_SN_MODE 0x94000000
-#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
-#define DLB2_RO_GRP_SN_MODE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_SN_MODE : \
-	 DLB2_V2_5RO_GRP_SN_MODE)
-#define DLB2_RO_GRP_SN_MODE_RST 0x0
-
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
-#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
-#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
-#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
-#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
-
-#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
-#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
-#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
-#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_RO_SMON_COMPARE0 0x9c000038
-#define DLB2_RO_SMON_COMPARE0_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_RO_SMON_COMPARE1 0x9c00003c
-#define DLB2_RO_SMON_COMPARE1_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_RO_SMON_CFG0 0x9c000040
-#define DLB2_RO_SMON_CFG0_RST 0x40000000
-
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_RO_SMON_CFG1 0x9c000044
-#define DLB2_RO_SMON_CFG1_RST 0x0
-
-#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
-#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
-#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_RO_SMON_MAX_TMR 0x9c000048
-#define DLB2_RO_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_RO_SMON_TMR 0x9c00004c
-#define DLB2_RO_SMON_TMR_RST 0x0
-
-#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_RO_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2LSP_CQ2PRIOV(x) \
-	(0xa0000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2PRIOV(x) \
-	(0x90000000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2PRIOV(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2PRIOV(x) : \
-	 DLB2_V2_5LSP_CQ2PRIOV(x))
-#define DLB2_LSP_CQ2PRIOV_RST 0x0
-
-#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
-#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
-#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
-#define DLB2_LSP_CQ2PRIOV_V_LOC	24
-
-#define DLB2_V2LSP_CQ2QID0(x) \
-	(0xa0080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID0(x) \
-	(0x90080000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID0(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID0(x) : \
-	 DLB2_V2_5LSP_CQ2QID0(x))
-#define DLB2_LSP_CQ2QID0_RST 0x0
-
-#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
-#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
-#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
-#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
-#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
-#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
-#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
-#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
-#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ2QID1(x) \
-	(0xa0100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID1(x) \
-	(0x90100000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID1(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID1(x) : \
-	 DLB2_V2_5LSP_CQ2QID1(x))
-#define DLB2_LSP_CQ2QID1_RST 0x0
-
-#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
-#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
-#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
-#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
-#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
-#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
-#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
-#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
-#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
-	(0xa0180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
-	(0x90180000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
-#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
-	(0xa0200000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
-	(0x90200000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0xa0280000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0x90280000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0xa0300000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0x90300000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0xa0380000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0x90380000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
-	(0xa0400000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
-	(0x90400000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
-#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
-	(0xa0480000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
-	(0x90480000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
-	(0xa0500000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
-	(0x90500000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
-	(0xa0580000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
-	(0x90600000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
-
-#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0xa0600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0x90680000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0xa0680000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0x90700000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0xa0700000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0x90780000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
-	(0xa0780000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
-	(0x90800000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0xa0800000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0x90880000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0xa0880000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0x90900000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0xa0900000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0x90980000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0xa0980000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0x90a00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0xa0a00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0x90b80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0xa0a80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0x90c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0xa0b00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0x90c80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0xa0b80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0x90d00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0xa0c80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0x90e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
-	(0xa0d00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
-	(0x90e80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
-#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
-	(0xa0d80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
-	(0x90f00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
-#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID2CQIDIX_00(x) \
-	(0xa0e00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
-	(0x90f80000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
-#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
-	(0xa1600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
-	(0x91780000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
-#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX2_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0xa1f00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0x92080000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0xa1f80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0x92100000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0xa2000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0x92180000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0xa2080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0x92200000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0xa2100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0x92280000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
-	(0xa2180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
-	(0x92300000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
-	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
-#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
-#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
-#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCHED_CTRL : \
-	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
-#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
-
-#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
-#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
-#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_L : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
-#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
-#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
-#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_H : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
-#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
-#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
-#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_L : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
-#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
-#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
-#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_H : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
-#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
-#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
-#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_CTRL : \
-	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
-#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
-	(0xa4000074 + (x) * 4)
-#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
-	(0x94000074 + (x) * 4)
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
-	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
-
-#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
-#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
-
-#define DLB2_LSP_SMON_COMPARE0 0xac000048
-#define DLB2_LSP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_LSP_SMON_COMPARE1 0xac00004c
-#define DLB2_LSP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_LSP_SMON_CFG0 0xac000050
-#define DLB2_LSP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_LSP_SMON_CFG1 0xac000054
-#define DLB2_LSP_SMON_CFG1_RST 0x0
-
-#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_LSP_SMON_MAX_TMR 0xac000060
-#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_LSP_SMON_TMR 0xac000064
-#define DLB2_LSP_SMON_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
-#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
-#define DLB2_CM_DIAG_RESET_STS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 V2CM_DIAG_RESET_STS : \
-	 V2_5CM_DIAG_RESET_STS)
-#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
-
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
-#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
-#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
-
-#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
-	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
-
-#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
-#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
-#define DLB2_CM_CFG_PM_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_STATUS : \
-	 DLB2_V2_5CM_CFG_PM_STATUS)
-#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
-
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
-#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
-#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
-
-#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
-	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
-
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
-
-#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_VF_VF2PF_MAILBOX(x) \
-	(0x1000 + (x) * 0x4)
-#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_VF_PF2VF_MAILBOX(x) \
-	(0x2000 + (x) * 0x4)
-#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
-
-#define DLB2_VF_VF_MSI_ISR 0x4000
-#define DLB2_VF_VF_MSI_ISR_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
-
-#define DLB2_SYS_TOTAL_CREDITS 0x10000100
-#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
-
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
-	(0x11c00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
-	(0x11d00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_CHP_CFG_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
-	(0x90b00000 + (x) * 0x1000)
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
-
-#endif /* __DLB2_REGS_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 54b0207db..3661b940c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -8,7 +8,7 @@
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 1f6ccf8e4..b6ec85b47 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_regs_new.h"
+#include "base/dlb2_regs.h"
 #include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 24/27] event/dlb2: update xstats for v2.5
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (22 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 23/27] event/dlb2: use new combined register map Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 25/27] doc/dlb2: update documentation " Timothy McDaniel
                       ` (2 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Add DLB v2.5-specific information to xstats, such as metrics for the new
combined credit scheme.
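
For reference, a minimal sketch of how an application could read the new
device-scope stats through the generic eventdev xstats API. The base stat
names ("pool_size", "tx_nospc_hw_credits") come from the arrays in this
patch; any prefix the driver adds when registering them, the dev_id value,
and the helper name are assumptions for illustration only.

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_eventdev.h>

/* Print the device-scope xstats related to the combined credit pool. */
static void
dump_combined_credit_xstats(uint8_t dev_id)
{
	struct rte_event_dev_xstats_name names[256];
	unsigned int ids[256];
	uint64_t values[256];
	int n, i;

	/* Enumerate device-level stat names, then fetch their values. */
	n = rte_event_dev_xstats_names_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
					   0, names, ids, 256);
	if (n <= 0 || n > 256)
		return;

	if (rte_event_dev_xstats_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
				     0, ids, values, (unsigned int)n) != n)
		return;

	for (i = 0; i < n; i++)
		if (strstr(names[i].name, "pool_size") != NULL ||
		    strstr(names[i].name, "nospc_hw_credits") != NULL)
			printf("%s = %" PRIu64 "\n", names[i].name, values[i]);
}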

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_xstats.c | 41 ++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index b62e62060..d4c8d9903 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -9,6 +9,7 @@
 
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
+#include "pf/base/dlb2_regs.h"
 
 enum dlb2_xstats_type {
 	/* common to device and port */
@@ -21,6 +22,7 @@ enum dlb2_xstats_type {
 	zero_polls,			/**< Call dequeue burst and return 0 */
 	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
 	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
+	tx_nospc_hw_credits,		/**< Insufficient h/w credits */
 	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
 	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
 	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
@@ -29,6 +31,7 @@ enum dlb2_xstats_type {
 	inflight_events,
 	ldb_pool_size,
 	dir_pool_size,
+	pool_size,
 	/* port specific */
 	tx_new,				/**< Send an OP_NEW event */
 	tx_fwd,				/**< Send an OP_FORWARD event */
@@ -129,6 +132,9 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 		case tx_nospc_dir_hw_credits:
 			val += port->stats.traffic.tx_nospc_dir_hw_credits;
 			break;
+		case tx_nospc_hw_credits:
+			val += port->stats.traffic.tx_nospc_hw_credits;
+			break;
 		case tx_nospc_inflight_max:
 			val += port->stats.traffic.tx_nospc_inflight_max;
 			break;
@@ -159,6 +165,7 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 	case zero_polls:
 	case tx_nospc_ldb_hw_credits:
 	case tx_nospc_dir_hw_credits:
+	case tx_nospc_hw_credits:
 	case tx_nospc_inflight_max:
 	case tx_nospc_new_event_limit:
 	case tx_nospc_inflight_credits:
@@ -171,6 +178,8 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 		return dlb2->num_ldb_credits;
 	case dir_pool_size:
 		return dlb2->num_dir_credits;
+	case pool_size:
+		return dlb2->num_credits;
 	default: return -1;
 	}
 }
@@ -203,6 +212,9 @@ get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
 	case tx_nospc_dir_hw_credits:
 		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
 
+	case tx_nospc_hw_credits:
+		return ev_port->stats.traffic.tx_nospc_hw_credits;
+
 	case tx_nospc_inflight_max:
 		return ev_port->stats.traffic.tx_nospc_inflight_max;
 
@@ -357,6 +369,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -364,6 +377,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"inflight_events",
 		"ldb_pool_size",
 		"dir_pool_size",
+		"pool_size",
 	};
 	static const enum dlb2_xstats_type dev_types[] = {
 		rx_ok,
@@ -375,6 +389,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -382,6 +397,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		inflight_events,
 		ldb_pool_size,
 		dir_pool_size,
+		pool_size,
 	};
 	/* Note: generated device stats are not allowed to be reset. */
 	static const uint8_t dev_reset_allowed[] = {
@@ -394,6 +410,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* zero_polls */
 		0, /* tx_nospc_ldb_hw_credits */
 		0, /* tx_nospc_dir_hw_credits */
+		0, /* tx_nospc_hw_credits */
 		0, /* tx_nospc_inflight_max */
 		0, /* tx_nospc_new_event_limit */
 		0, /* tx_nospc_inflight_credits */
@@ -401,6 +418,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* inflight_events */
 		0, /* ldb_pool_size */
 		0, /* dir_pool_size */
+		0, /* pool_size */
 	};
 	static const char * const port_stats[] = {
 		"is_configured",
@@ -415,6 +433,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -448,6 +467,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -481,6 +501,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		1, /* zero_polls */
 		1, /* tx_nospc_ldb_hw_credits */
 		1, /* tx_nospc_dir_hw_credits */
+		1, /* tx_nospc_hw_credits */
 		1, /* tx_nospc_inflight_max */
 		1, /* tx_nospc_new_event_limit */
 		1, /* tx_nospc_inflight_credits */
@@ -935,8 +956,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
@@ -949,8 +970,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_QUEUES(dlb2->version); i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
@@ -1048,6 +1069,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 	fprintf(f, "\tnum_dir_credits = %u\n",
 		dlb2->hw_rsrc_query_results.num_dir_credits);
 
+	fprintf(f, "\tnum_credits = %u\n",
+		dlb2->hw_rsrc_query_results.num_credits);
+
 	/* Port level information */
 
 	for (i = 0; i < dlb2->num_ports; i++) {
@@ -1102,6 +1126,12 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\tdir_credits = %u\n",
 			p->qm_port.dir_credits);
 
+		fprintf(f, "\tcached_credits = %u\n",
+			p->qm_port.cached_credits);
+
+		fprintf(f, "\tdir_credits = %u\n",
+			p->qm_port.credits);
+
 		fprintf(f, "\tgenbit=%d, cq_idx=%d, cq_depth=%d\n",
 			p->qm_port.gen_bit,
 			p->qm_port.cq_idx,
@@ -1139,6 +1169,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_dir_hw_credits);
 
+		fprintf(f, "\t\ttx_nospc_hw_credits %" PRIu64 "\n",
+			p->stats.traffic.tx_nospc_hw_credits);
+
 		fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_inflight_max);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 25/27] doc/dlb2: update documentation for v2.5
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (23 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 24/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 26/27] event/dlb: rename dlb2 driver Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 27/27] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the DLB2 documentation for v2.5. Notable differences include
the new combined credit scheme. Also clean up a couple of sections
and remove a duplicate section.
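
For reference, a minimal sketch of the configuration step the updated text
describes: on DLB v2.5 a single combined credit pool is sized from
nb_events_limit at device configure time (on v2.0 the same field sizes the
load-balanced pool, with the directed pool defaulting to a quarter of it).
The dev_id, queue/port counts, and helper name below are placeholders, not
values taken from this patch.

#include <rte_eventdev.h>

/* Configure the eventdev so the credit pool holds max_num_events credits. */
static int
configure_dlb_credits(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg = {0};

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return -1;

	cfg.nb_event_queues = 4;			/* placeholder */
	cfg.nb_event_ports = 4;				/* placeholder */
	cfg.nb_events_limit = info.max_num_events;	/* sizes the credit pool */
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	cfg.dequeue_timeout_ns = info.max_dequeue_timeout_ns;

	return rte_event_dev_configure(dev_id, &cfg);
}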

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/eventdevs/dlb2.rst | 75 +++++++++++++----------------------
 1 file changed, 27 insertions(+), 48 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 94d2c77ff..94e46ea7d 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -4,7 +4,8 @@
 Driver for the Intel® Dynamic Load Balancer (DLB2)
 ==================================================
 
-The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.
+The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
+hardware versions 2.0 and 2.5.
 
 Prerequisites
 -------------
@@ -35,7 +36,7 @@ eventdev API and DLB2 misalign.
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There are 32 scheduling domainis the DLB2.
+DLB2 supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
@@ -67,42 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
-group is configured to contain either 1 queue with 1024 reorder entries, 2
-queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
-
-When a load-balanced queue is created, the PMD will configure a new sequence
-number group on-demand if num_sequence_numbers does not match a pre-existing
-group with available reorder buffer entries. If all sequence number groups are
-in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
-sequence number configuration.)
-
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
-load-balanced queues can use the full 16-bit flow ID range.
-
-Load-Balanced Queues
-~~~~~~~~~~~~~~~~~~~~
-
-A load-balanced queue can support atomic and ordered scheduling, or atomic and
-unordered scheduling, but not atomic and unordered and ordered scheduling. A
-queue's scheduling types are controlled by the event queue configuration.
-
-If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
-``nb_atomic_order_sequences`` determines the supported scheduling types.
-With non-zero ``nb_atomic_order_sequences``, the queue is configured for atomic
-and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL`` scheduling is
-supported by scheduling those events as ordered events.  Note that when the
-event is dequeued, its sched_type will be ``RTE_SCHED_TYPE_ORDERED``. Else if
-``nb_atomic_order_sequences`` is zero, the queue is configured for atomic and
-unordered scheduling. In this case, ``RTE_SCHED_TYPE_ORDERED`` is unsupported.
-
-If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
-dictates the queue's scheduling type.
-
-The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
+queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -157,6 +123,11 @@ type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
 will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
 port.
 
+Finally, even though all 3 event types are supported on the same QID by
+converting unordered events to ordered, such use should be discouraged as much
+as possible, since mixing types on the same queue uses valuable reorder
+resources, and orders events which do not require ordering.
+
 Flow ID
 ~~~~~~~
 
@@ -169,13 +140,15 @@ Hardware Credits
 DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
-events are scheduled to ports. Refills come from credit pools, and each port is
-a member of a load-balanced credit pool and a directed credit pool. The
-load-balanced credits are used to enqueue to load-balanced queues, and directed
-credits are used for directed queues.
+events are scheduled to ports. Refills come from credit pools.
 
-A DLB2 eventdev contains one load-balanced and one directed credit pool. These
-pools' sizes are controlled by the nb_events_limit field in struct
+For DLB v2.5, there is a single credit pool used for both load balanced and
+directed traffic.
+
+For DLB v2.0, each port is a member of both a load-balanced credit pool and a
+directed credit pool. The load-balanced credits are used to enqueue to
+load-balanced queues, and directed credits are used for directed queues.
+These pools' sizes are controlled by the nb_events_limit field in struct
 rte_event_dev_config. The load-balanced pool is sized to contain
 nb_events_limit credits, and the directed pool is sized to contain
 nb_events_limit/4 credits. The directed pool size can be overridden with the
@@ -276,10 +249,16 @@ The DLB2 supports event priority and per-port queue service priority, as
 described in the eventdev header file. The DLB2 does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 8 event and queue service priority levels. For both priority
-types, the PMD uses the upper three bits of the priority field to determine the
-DLB2 priority, discarding the 5 least significant bits. The 5 least significant
-event priority bits are not preserved when an event is enqueued.
+DLB2 supports 4 event and queue service priority levels. For both priority types,
+the PMD uses the upper three bits of the priority field to determine the DLB2
+priority, discarding the 5 least significant bits. The least significant of those
+three bits is then effectively ignored, binning events into 4 priority levels. The
+discarded 5 least significant event priority bits are not preserved when an event
+is enqueued.
+
+Note that event priority only works within the same event type.
+When atomic and ordered or unordered events are enqueued to the same QID, priority
+across the types is always equal, and both types are served in a round-robin manner.
 
 Reconfiguration
 ~~~~~~~~~~~~~~~
-- 
2.23.0
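
As an aside, the priority description added above boils down to a two-bit
shift. The sketch below only illustrates that arithmetic and is not code
taken from the PMD; the function name is chosen for the example.

#include <stdint.h>

/* Map an 8-bit eventdev priority to one of the 4 effective DLB2 levels. */
static inline uint8_t
dlb2_effective_priority(uint8_t ev_priority)
{
	uint8_t top3 = ev_priority >> 5;	/* keep the upper three bits */

	return top3 >> 1;			/* lowest of the 3 is ignored -> 0..3 */
}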


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 26/27] event/dlb: rename dlb2 driver
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (24 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 25/27] doc/dlb2: update documentation " Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 27/27] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

Update the eventdev device name to be dlb_event instead of
dlb2_event. The new name will be used for all versions
of the DLB hardware. This change requires corresponding changes
to the directory name that contains the PMD, as well
as to the documentation files, build infrastructure, and
PMD-specific APIs.

Update the 20.11 release notes to reference the dlb rst file, and
not the dlb2 rst file, since it was renamed to match the device
name as part of this patch.
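
For reference, one way an application could locate an event device backed by
the renamed PMD at runtime is to scan the reported driver names. This is only
a hedged sketch: the exact driver_name string published by the PMD is an
assumption (hence the loose "dlb" substring match), and the helper name is
chosen for the example.

#include <string.h>
#include <rte_eventdev.h>

/* Return the first event device whose driver name mentions "dlb", or -1. */
static int
find_dlb_eventdev(void)
{
	uint8_t i;

	for (i = 0; i < rte_event_dev_count(); i++) {
		struct rte_event_dev_info info;

		if (rte_event_dev_info_get(i, &info) == 0 &&
		    info.driver_name != NULL &&
		    strstr(info.driver_name, "dlb") != NULL)
			return i;
	}
	return -1;
}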

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 MAINTAINERS                                   |  6 +-
 app/test/test_eventdev.c                      |  6 +-
 config/rte_config.h                           | 11 ++-
 doc/api/doxy-api-index.md                     |  2 +-
 doc/api/doxy-api.conf.in                      |  2 +-
 doc/guides/eventdevs/{dlb2.rst => dlb.rst}    | 88 +++++++++----------
 doc/guides/eventdevs/index.rst                |  2 +-
 doc/guides/rel_notes/release_20_11.rst        |  2 +-
 doc/guides/rel_notes/release_21_05.rst        |  5 ++
 drivers/event/{dlb2 => dlb}/dlb2.c            | 25 +++---
 drivers/event/{dlb2 => dlb}/dlb2_iface.c      |  0
 drivers/event/{dlb2 => dlb}/dlb2_iface.h      |  0
 drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h |  0
 drivers/event/{dlb2 => dlb}/dlb2_log.h        |  0
 drivers/event/{dlb2 => dlb}/dlb2_priv.h       |  7 +-
 drivers/event/{dlb2 => dlb}/dlb2_selftest.c   |  8 +-
 drivers/event/{dlb2 => dlb}/dlb2_user.h       |  0
 drivers/event/{dlb2 => dlb}/dlb2_xstats.c     |  0
 drivers/event/{dlb2 => dlb}/meson.build       |  4 +-
 .../{dlb2 => dlb}/pf/base/dlb2_hw_types.h     |  0
 .../event/{dlb2 => dlb}/pf/base/dlb2_osdep.h  |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_list.h   |  0
 .../{dlb2 => dlb}/pf/base/dlb2_osdep_types.h  |  0
 .../event/{dlb2 => dlb}/pf/base/dlb2_regs.h   |  0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.c     |  0
 .../{dlb2 => dlb}/pf/base/dlb2_resource.h     |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.c    |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_main.h    |  0
 drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c      |  0
 .../rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c}      |  6 +-
 .../rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h}      | 12 +--
 drivers/event/{dlb2 => dlb}/version.map       |  2 +-
 drivers/event/meson.build                     |  2 +-
 34 files changed, 95 insertions(+), 95 deletions(-)
 rename doc/guides/eventdevs/{dlb2.rst => dlb.rst} (84%)
 rename drivers/event/{dlb2 => dlb}/dlb2.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.c (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_iface.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_inline_fns.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_log.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_priv.h (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_selftest.c (99%)
 rename drivers/event/{dlb2 => dlb}/dlb2_user.h (100%)
 rename drivers/event/{dlb2 => dlb}/dlb2_xstats.c (100%)
 rename drivers/event/{dlb2 => dlb}/meson.build (89%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_hw_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_bitmap.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_list.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_osdep_types.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_regs.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/base/dlb2_resource.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.c (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_main.h (100%)
 rename drivers/event/{dlb2 => dlb}/pf/dlb2_pf.c (100%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.c => dlb/rte_pmd_dlb.c} (88%)
 rename drivers/event/{dlb2/rte_pmd_dlb2.h => dlb/rte_pmd_dlb.h} (88%)
 rename drivers/event/{dlb2 => dlb}/version.map (60%)

diff --git a/MAINTAINERS b/MAINTAINERS
index fa143160d..40610e169 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1196,10 +1196,10 @@ Cavium OCTEON TX timvf
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
 F: drivers/event/octeontx/timvf_*
 
-Intel DLB2
+Intel DLB
 M: Timothy McDaniel <timothy.mcdaniel@intel.com>
-F: drivers/event/dlb2/
-F: doc/guides/eventdevs/dlb2.rst
+F: drivers/event/dlb/
+F: doc/guides/eventdevs/dlb.rst
 
 Marvell OCTEON TX2
 M: Pavan Nikhilesh <pbhagavatula@marvell.com>
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..ba27bed02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1031,9 +1031,9 @@ test_eventdev_selftest_dpaa2(void)
 }
 
 static int
-test_eventdev_selftest_dlb2(void)
+test_eventdev_selftest_dlb(void)
 {
-	return test_eventdev_selftest_impl("dlb2_event", "");
+	return test_eventdev_selftest_impl("dlb_event", "");
 }
 
 REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
@@ -1043,4 +1043,4 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
 REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
 		test_eventdev_selftest_octeontx2);
 REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
-REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_dlb, test_eventdev_selftest_dlb);
diff --git a/config/rte_config.h b/config/rte_config.h
index b13c0884b..1aa852cd7 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -139,11 +139,10 @@
 /* QEDE PMD defines */
 #define RTE_LIBRTE_QEDE_FW ""
 
-/* DLB2 defines */
-#define RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL 1000
-#define RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE  0
-#undef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA 32
-#define RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH 256
+/* DLB defines */
+#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
+#undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
+#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
+#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index ca2c2f6e0..1c2865525 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -55,7 +55,7 @@ The public API headers are grouped by topics:
   [dpaa2_cmdif]        (@ref rte_pmd_dpaa2_cmdif.h),
   [dpaa2_qdma]         (@ref rte_pmd_dpaa2_qdma.h),
   [crypto_scheduler]   (@ref rte_cryptodev_scheduler.h),
-  [dlb2]               (@ref rte_pmd_dlb2.h),
+  [dlb]                (@ref rte_pmd_dlb.h),
   [ifpga]              (@ref rte_pmd_ifpga.h)
 
 - **memory**:
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 3c7ee4608..9aebec419 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -7,7 +7,7 @@ USE_MDFILE_AS_MAINPAGE  = @TOPDIR@/doc/api/doxy-api-index.md
 INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/bus/vdev \
                           @TOPDIR@/drivers/crypto/scheduler \
-                          @TOPDIR@/drivers/event/dlb2 \
+                          @TOPDIR@/drivers/event/dlb \
                           @TOPDIR@/drivers/mempool/dpaa2 \
                           @TOPDIR@/drivers/net/ark \
                           @TOPDIR@/drivers/net/bnxt \
diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb.rst
similarity index 84%
rename from doc/guides/eventdevs/dlb2.rst
rename to doc/guides/eventdevs/dlb.rst
index 94e46ea7d..3410a6e49 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2020 Intel Corporation.
 
-Driver for the Intel® Dynamic Load Balancer (DLB2)
+Driver for the Intel® Dynamic Load Balancer (DLB)
 ==================================================
 
 The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer,
@@ -16,34 +16,34 @@ the basic DPDK environment.
 Configuration
 -------------
 
-The DLB2 PF PMD is a user-space PMD that uses VFIO to gain direct
+The DLB PF PMD is a user-space PMD that uses VFIO to gain direct
 device access. To use this operation mode, the PCIe PF device must be bound
 to a DPDK-compatible VFIO driver, such as vfio-pci.
 
 Eventdev API Notes
 ------------------
 
-The DLB2 provides the functions of a DPDK event device; specifically, it
+The DLB PMD provides the functions of a DPDK event device; specifically, it
 supports atomic, ordered, and parallel scheduling events from queues to ports.
-However, the DLB2 hardware is not a perfect match to the eventdev API. Some DLB2
+However, the DLB hardware is not a perfect match to the eventdev API. Some DLB
 features are abstracted by the PMD such as directed ports.
 
 In general the dlb PMD is designed for ease-of-use and does not require a
 detailed understanding of the hardware, but these details are important when
 writing high-performance code. This section describes the places where the
-eventdev API and DLB2 misalign.
+eventdev API and DLB misalign.
 
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DLB2 supports 32 scheduling domains.
+DLB supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
 -- and others, like credit pools (one directed and one load-balanced pool per
 scheduling domain), are not.
 
-The DLB2 is a closed system eventdev, and as such the ``nb_events_limit`` device
+The DLB is a closed system eventdev, and as such the ``nb_events_limit`` device
 setup argument and the per-port ``new_event_threshold`` argument apply as
 defined in the eventdev header file. The limit is applied to all enqueues,
 regardless of whether it will consume a directed or load-balanced credit.
@@ -68,7 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 2 groups of ordered queues, where each
+queue's reorder buffer size.  DLB has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -76,22 +76,22 @@ When a load-balanced queue is created, the PMD will configure a new sequence
 number group on-demand if num_sequence_numbers does not match a pre-existing
 group with available reorder buffer entries. If all sequence number groups are
 in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
+that when the PMD is used with a virtual DLB device, it cannot change the
 sequence number configuration.)
 
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
+The queue's ``nb_atomic_flows`` parameter is ignored by the DLB PMD, because
+the DLB does not limit the number of flows a queue can track. In the DLB, all
 load-balanced queues can use the full 16-bit flow ID range.
 
 Load-balanced and Directed Ports
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DLB2 ports come in two flavors: load-balanced and directed. The eventdev API
+DLB ports come in two flavors: load-balanced and directed. The eventdev API
 does not have the same concept, but it has a similar one: ports and queues that
 are singly-linked (i.e. linked to a single queue or port, respectively).
 
 The ``rte_event_dev_info_get()`` function reports the number of available
-event ports and queues (among other things). For the DLB2 PMD, max_event_ports
+event ports and queues (among other things). For the DLB PMD, max_event_ports
 and max_event_queues report the number of available load-balanced ports and
 queues, and max_single_link_event_port_queue_pairs reports the number of
 available directed ports and queues.
@@ -132,12 +132,12 @@ Flow ID
 ~~~~~~~
 
 The flow ID field is preserved in the event when it is scheduled in the
-DLB2.
+DLB.
 
 Hardware Credits
 ~~~~~~~~~~~~~~~~
 
-DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
+DLB uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
 events are scheduled to ports. Refills come from credit pools.
@@ -156,7 +156,7 @@ num_dir_credits vdev argument, like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,num_dir_credits=<value>
+       --vdev=dlb_event,num_dir_credits=<value>
 
 This can be used if the default allocation is too low or too high for the
 specific application needs. The PMD also supports a vdev arg that limits the
@@ -164,10 +164,10 @@ max_num_events reported by rte_event_dev_info_get():
 
     .. code-block:: console
 
-       --vdev=dlb1_event,max_num_events=<value>
+       --vdev=dlb_event,max_num_events=<value>
 
 By default, max_num_events is reported as the total available load-balanced
-credits. If multiple DLB2-based applications are being used, it may be desirable
+credits. If multiple DLB-based applications are being used, it may be desirable
 to control how many load-balanced credits each application uses, particularly
 when application(s) are written to configure nb_events_limit equal to the
 reported max_num_events.
@@ -193,16 +193,16 @@ order to reach the limit.
 
 If a port attempts to enqueue and has no credits available, the enqueue
 operation will fail and the application must retry the enqueue. Credits are
-replenished asynchronously by the DLB2 hardware.
+replenished asynchronously by the DLB hardware.
 
 Software Credits
 ~~~~~~~~~~~~~~~~
 
-The DLB2 is a "closed system" event dev, and the DLB2 PMD layers a software
+The DLB is a "closed system" event dev, and the DLB PMD layers a software
 credit scheme on top of the hardware credit scheme in order to comply with
 the per-port backpressure described in the eventdev API.
 
-The DLB2's hardware scheme is local to a queue/pipeline stage: a port spends a
+The DLB's hardware scheme is local to a queue/pipeline stage: a port spends a
 credit when it enqueues to a queue, and credits are later replenished after the
 events are dequeued and released.
 
@@ -222,8 +222,8 @@ credits are used to enqueue to a load-balanced queue, and directed credits are
 used to enqueue to a directed queue.
 
 The out-of-credit situations are typically transient, and an eventdev
-application using the DLB2 ought to retry its enqueues if they fail.
-If enqueue fails, DLB2 PMD sets rte_errno as follows:
+application using the DLB ought to retry its enqueues if they fail.
+If enqueue fails, DLB PMD sets rte_errno as follows:
 
 - -ENOSPC: Credit exhaustion (either hardware or software)
 - -EINVAL: Invalid argument, such as port ID, queue ID, or sched_type.
@@ -245,12 +245,12 @@ the port's dequeue_depth).
 Priority
 ~~~~~~~~
 
-The DLB2 supports event priority and per-port queue service priority, as
-described in the eventdev header file. The DLB2 does not support 'global' event
+The DLB supports event priority and per-port queue service priority, as
+described in the eventdev header file. The DLB does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 4 event and queue service priority levels. For both priority types,
-the PMD uses the upper three bits of the priority field to determine the DLB2
+DLB supports 4 event and queue service priority levels. For both priority types,
+the PMD uses the upper three bits of the priority field to determine the DLB
 priority, discarding the 5 least significant bits. However, the least significant
 of the 3 priority bits is effectively ignored for binning into 4 priorities. The
 discarded 5 least significant event priority bits are not preserved when an event
@@ -265,7 +265,7 @@ Reconfiguration
 
 The Eventdev API allows one to reconfigure a device, its ports, and its queues
 by first stopping the device, calling the configuration function(s), then
-restarting the device. The DLB2 does not support configuring an individual queue
+restarting the device. The DLB does not support configuring an individual queue
 or port without first reconfiguring the entire device, however, so there are
 certain reconfiguration sequences that are valid in the eventdev API but not
 supported by the PMD.
@@ -296,9 +296,9 @@ before its ports or queues can be.
 Deferred Scheduling
 ~~~~~~~~~~~~~~~~~~~
 
-The DLB2 PMD's default behavior for managing a CQ is to "pop" the CQ once per
+The DLB PMD's default behavior for managing a CQ is to "pop" the CQ once per
 dequeued event before returning from rte_event_dequeue_burst(). This frees the
-corresponding entries in the CQ, which enables the DLB2 to schedule more events
+corresponding entries in the CQ, which enables the DLB to schedule more events
 to it.
 
 To support applications seeking finer-grained scheduling control -- for example
@@ -312,12 +312,12 @@ To enable deferred scheduling, use the defer_sched vdev argument like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,defer_sched=on
+       --vdev=dlb_event,defer_sched=on
 
 Atomic Inflights Allocation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
+In the last stage prior to scheduling an atomic event to a CQ, DLB holds the
 inflight event in a temporary buffer that is divided among load-balanced
 queues. If a queue's atomic buffer storage fills up, this can result in
 head-of-line-blocking. For example:
@@ -340,12 +340,12 @@ increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,atm_inflights=64
+       --vdev=dlb_event,atm_inflights=64
 
 QID Depth Threshold
 ~~~~~~~~~~~~~~~~~~~
 
-DLB2 supports setting and tracking queue depth thresholds. Hardware uses
+DLB supports setting and tracking queue depth thresholds. Hardware uses
 the thresholds to track how full a queue is compared to its threshold.
 Four buckets are used
 
@@ -354,7 +354,7 @@ Four buckets are used
 - Greater than 75%, but less than or equal to 100% of depth threshold
 - Greater than 100% of depth thresholds
 
-Per queue threshold metrics are tracked in the DLB2 xstats, and are also
+Per queue threshold metrics are tracked in the DLB xstats, and are also
 returned in the impl_opaque field of each received event.
 
 The per qid threshold can be specified as part of the device args, and
@@ -363,19 +363,19 @@ shown below.
 
     .. code-block:: console
 
-       --vdev=dlb2_event,qid_depth_thresh=all:<threshold_value>
-       --vdev=dlb2_event,qid_depth_thresh=qidA-qidB:<threshold_value>
-       --vdev=dlb2_event,qid_depth_thresh=qid:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=all:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=qidA-qidB:<threshold_value>
+       --vdev=dlb_event,qid_depth_thresh=qid:<threshold_value>
 
 Class of service
 ~~~~~~~~~~~~~~~~
 
-DLB2 supports provisioning the DLB2 bandwidth into 4 classes of service.
+DLB supports provisioning the DLB bandwidth into 4 classes of service.
 
-- Class 4 corresponds to 40% of the DLB2 hardware bandwidth
-- Class 3 corresponds to 30% of the DLB2 hardware bandwidth
-- Class 2 corresponds to 20% of the DLB2 hardware bandwidth
-- Class 1 corresponds to 10% of the DLB2 hardware bandwidth
+- Class 4 corresponds to 40% of the DLB hardware bandwidth
+- Class 3 corresponds to 30% of the DLB hardware bandwidth
+- Class 2 corresponds to 20% of the DLB hardware bandwidth
+- Class 1 corresponds to 10% of the DLB hardware bandwidth
 - Class 0 corresponds to don't care
 
 The classes are applied globally to the set of ports contained in this
@@ -387,4 +387,4 @@ Class of service can be specified in the devargs, as follows
 
     .. code-block:: console
 
-       --vdev=dlb2_event,cos=<0..4>
+       --vdev=dlb_event,cos=<0..4>
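
A note on the credit behaviour documented above: the guide states that a
failed enqueue should simply be retried when rte_errno reports credit
exhaustion. Below is a minimal sketch of that pattern against the generic
eventdev API (the helper name and retry policy are illustrative only, not
part of this patch set):

    #include <rte_errno.h>
    #include <rte_eventdev.h>

    /* Retry while the PMD reports transient credit exhaustion (-ENOSPC, per
     * the guide above); stop on hard errors such as -EINVAL, which retrying
     * cannot fix.
     */
    static uint16_t
    enqueue_with_retry(uint8_t dev_id, uint8_t port_id,
                       struct rte_event *events, uint16_t nb)
    {
            uint16_t sent = 0;

            while (sent < nb) {
                    uint16_t n = rte_event_enqueue_burst(dev_id, port_id,
                                                         &events[sent],
                                                         nb - sent);

                    sent += n;
                    if (n == 0 && rte_errno != -ENOSPC)
                            break;
            }

            return sent;
    }

A real application would typically bound the retry loop or interleave
dequeues so that hardware and software credits can be replenished.
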
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..4b915bf3e 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,7 +11,7 @@ application through the eventdev API.
     :maxdepth: 2
     :numbered:
 
-    dlb2
+    dlb
     dpaa
     dpaa2
     dsw
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7405a9864..4b09cbd39 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -351,7 +351,7 @@ New Features
 * **Added a new driver for the Intel Dynamic Load Balancer v2.0 device.**
 
   Added the new ``dlb2`` eventdev driver for the Intel DLB V2.0 device. See the
-  :doc:`../eventdevs/dlb2` eventdev guide for more details on this new driver.
+  :doc:`../eventdevs/dlb` eventdev guide for more details on this new driver.
 
 * **Added Ice Lake (Gen4) support for Intel NTB.**
 
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8a601e0a7..5b25f1479 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -94,6 +94,11 @@ New Features
 
   * Added support for preferred busy polling.
 
+* **Updated DLB driver.**
+
+  * Added support for v2.5 hardware.
+  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
+
 * **Updated testpmd.**
 
   * Added a command line option to configure forced speed for Ethernet port.
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb/dlb2.c
similarity index 99%
rename from drivers/event/dlb2/dlb2.c
rename to drivers/event/dlb/dlb2.c
index cc6495b76..e5def9357 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb/dlb2.c
@@ -667,15 +667,8 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	}
 
 	/* Does this platform support umonitor/umwait? */
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
-		if (RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 0 &&
-		    RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 1) {
-			DLB2_LOG_ERR("invalid value (%d) for RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE, must be 0 or 1.\n",
-				     RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE);
-			return -EINVAL;
-		}
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG))
 		dlb2->umwait_allowed = true;
-	}
 
 	rsrcs->num_dir_ports = config->nb_single_link_event_port_queues;
 	rsrcs->num_ldb_ports  = config->nb_event_ports - rsrcs->num_dir_ports;
@@ -930,8 +923,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1623,7 +1617,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1718,8 +1712,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		ev_queue->depth_threshold =
+			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -2747,7 +2742,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	DLB2_INC_STAT(ev_port->stats.tx_op_cnt[ev->op], 1);
 	DLB2_INC_STAT(ev_port->stats.traffic.tx_ok, 1);
 
-#ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
+#ifndef RTE_LIBRTE_PMD_DLB_QUELL_STATS
 	if (ev->op != RTE_EVENT_OP_RELEASE) {
 		DLB2_INC_STAT(ev_port->stats.queue[ev->queue_id].enq_ok, 1);
 		DLB2_INC_STAT(ev_port->stats.tx_sched_cnt[*sched_type], 1);
@@ -3070,7 +3065,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL;
+		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB_POLL_INTERVAL;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb/dlb2_iface.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.c
rename to drivers/event/dlb/dlb2_iface.c
diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb/dlb2_iface.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_iface.h
rename to drivers/event/dlb/dlb2_iface.h
diff --git a/drivers/event/dlb2/dlb2_inline_fns.h b/drivers/event/dlb/dlb2_inline_fns.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_inline_fns.h
rename to drivers/event/dlb/dlb2_inline_fns.h
diff --git a/drivers/event/dlb2/dlb2_log.h b/drivers/event/dlb/dlb2_log.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_log.h
rename to drivers/event/dlb/dlb2_log.h
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb/dlb2_priv.h
similarity index 99%
rename from drivers/event/dlb2/dlb2_priv.h
rename to drivers/event/dlb/dlb2_priv.h
index f3a9fe0aa..f11e08fca 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb/dlb2_priv.h
@@ -12,7 +12,7 @@
 #include <rte_config.h>
 #include "dlb2_user.h"
 #include "dlb2_log.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
 #define DLB2_INC_STAT(_stat, _incr_val) ((_stat) += _incr_val)
@@ -20,7 +20,8 @@
 #define DLB2_INC_STAT(_stat, _incr_val)
 #endif
 
-#define EVDEV_DLB2_NAME_PMD dlb2_event
+/* common name for all dlb devs (dlb v2.0, dlb v2.5 ...) */
+#define EVDEV_DLB2_NAME_PMD dlb_event
 
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
@@ -320,7 +321,7 @@ struct dlb2_port {
 	bool gen_bit;
 	uint16_t dir_credits;
 	uint32_t dequeue_depth;
-	enum dlb2_token_pop_mode token_pop_mode;
+	enum dlb_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
 	union {
diff --git a/drivers/event/dlb2/dlb2_selftest.c b/drivers/event/dlb/dlb2_selftest.c
similarity index 99%
rename from drivers/event/dlb2/dlb2_selftest.c
rename to drivers/event/dlb/dlb2_selftest.c
index 5cf66c552..019cbecdc 100644
--- a/drivers/event/dlb2/dlb2_selftest.c
+++ b/drivers/event/dlb/dlb2_selftest.c
@@ -22,7 +22,7 @@
 #include <rte_pause.h>
 
 #include "dlb2_priv.h"
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 
 #define MAX_PORTS 32
 #define MAX_QIDS 32
@@ -1105,13 +1105,13 @@ test_deferred_sched(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 1, DEFERRED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 1, DEFERRED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
@@ -1257,7 +1257,7 @@ test_delayed_pop(void)
 		return -1;
 	}
 
-	ret = rte_pmd_dlb2_set_token_pop_mode(evdev, 0, DELAYED_POP);
+	ret = rte_pmd_dlb_set_token_pop_mode(evdev, 0, DELAYED_POP);
 	if (ret < 0) {
 		printf("%d: Error setting deferred scheduling\n", __LINE__);
 		goto err;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb/dlb2_user.h
similarity index 100%
rename from drivers/event/dlb2/dlb2_user.h
rename to drivers/event/dlb/dlb2_user.h
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb/dlb2_xstats.c
similarity index 100%
rename from drivers/event/dlb2/dlb2_xstats.c
rename to drivers/event/dlb/dlb2_xstats.c
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb/meson.build
similarity index 89%
rename from drivers/event/dlb2/meson.build
rename to drivers/event/dlb/meson.build
index f22638b8e..4a4aed931 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb/meson.build
@@ -14,10 +14,10 @@ sources = files('dlb2.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
 		'pf/base/dlb2_resource.c',
-		'rte_pmd_dlb2.c',
+		'rte_pmd_dlb.c',
 		'dlb2_selftest.c'
 )
 
-headers = files('rte_pmd_dlb2.h')
+headers = files('rte_pmd_dlb.h')
 
 deps += ['mbuf', 'mempool', 'ring', 'pci', 'bus_pci']
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb/pf/base/dlb2_hw_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_hw_types.h
rename to drivers/event/dlb/pf/base/dlb2_hw_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb/pf/base/dlb2_osdep.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep.h
rename to drivers/event/dlb/pf/base/dlb2_osdep.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h b/drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_bitmap.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_list.h b/drivers/event/dlb/pf/base/dlb2_osdep_list.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_list.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_list.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_types.h b/drivers/event/dlb/pf/base/dlb2_osdep_types.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_osdep_types.h
rename to drivers/event/dlb/pf/base/dlb2_osdep_types.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb/pf/base/dlb2_regs.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_regs.h
rename to drivers/event/dlb/pf/base/dlb2_regs.h
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb/pf/base/dlb2_resource.c
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.c
rename to drivers/event/dlb/pf/base/dlb2_resource.c
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb/pf/base/dlb2_resource.h
similarity index 100%
rename from drivers/event/dlb2/pf/base/dlb2_resource.h
rename to drivers/event/dlb/pf/base/dlb2_resource.h
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb/pf/dlb2_main.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.c
rename to drivers/event/dlb/pf/dlb2_main.c
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb/pf/dlb2_main.h
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_main.h
rename to drivers/event/dlb/pf/dlb2_main.h
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb/pf/dlb2_pf.c
similarity index 100%
rename from drivers/event/dlb2/pf/dlb2_pf.c
rename to drivers/event/dlb/pf/dlb2_pf.c
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.c b/drivers/event/dlb/rte_pmd_dlb.c
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.c
rename to drivers/event/dlb/rte_pmd_dlb.c
index 43990e46a..82d203366 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.c
+++ b/drivers/event/dlb/rte_pmd_dlb.c
@@ -5,14 +5,14 @@
 #include <rte_eventdev.h>
 #include <eventdev_pmd.h>
 
-#include "rte_pmd_dlb2.h"
+#include "rte_pmd_dlb.h"
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
 
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode)
+				enum dlb_token_pop_mode mode)
 {
 	struct dlb2_eventdev *dlb2;
 	struct rte_eventdev *dev;
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb/rte_pmd_dlb.h
similarity index 88%
rename from drivers/event/dlb2/rte_pmd_dlb2.h
rename to drivers/event/dlb/rte_pmd_dlb.h
index 74399db01..d42b1f52a 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.h
+++ b/drivers/event/dlb/rte_pmd_dlb.h
@@ -3,13 +3,13 @@
  */
 
 /*!
- *  @file      rte_pmd_dlb2.h
+ *  @file      rte_pmd_dlb.h
  *
  *  @brief     DLB PMD-specific functions
  */
 
-#ifndef _RTE_PMD_DLB2_H_
-#define _RTE_PMD_DLB2_H_
+#ifndef _RTE_PMD_DLB_H_
+#define _RTE_PMD_DLB_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -23,7 +23,7 @@ extern "C" {
  *
  * Selects the token pop mode for a DLB2 port.
  */
-enum dlb2_token_pop_mode {
+enum dlb_token_pop_mode {
 	/* Pop the CQ tokens immediately after dequeueing. */
 	AUTO_POP,
 	/* Pop CQ tokens after (dequeue_depth - 1) events are released.
@@ -61,9 +61,9 @@ enum dlb2_token_pop_mode {
 
 __rte_experimental
 int
-rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
+rte_pmd_dlb_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
-				enum dlb2_token_pop_mode mode);
+				enum dlb_token_pop_mode mode);
 
 #ifdef __cplusplus
 }
diff --git a/drivers/event/dlb2/version.map b/drivers/event/dlb/version.map
similarity index 60%
rename from drivers/event/dlb2/version.map
rename to drivers/event/dlb/version.map
index b1e4dff0f..3338a22c1 100644
--- a/drivers/event/dlb2/version.map
+++ b/drivers/event/dlb/version.map
@@ -5,5 +5,5 @@ DPDK_21 {
 EXPERIMENTAL {
 	global:
 
-	rte_pmd_dlb2_set_token_pop_mode;
+	rte_pmd_dlb_set_token_pop_mode;
 };
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index b7f9bf7c6..e9b0433f2 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -5,7 +5,7 @@ if is_windows
 	subdir_done()
 endif
 
-drivers = ['dlb2', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
+drivers = ['dlb', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
 	   'dsw']
 if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
 	dpdk_conf.has('RTE_ARCH_ARM64'))
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v4 27/27] event/dlb: move rte config defines to runtime devargs
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
                       ` (25 preceding siblings ...)
  2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 26/27] event/dlb: rename dlb2 driver Timothy McDaniel
@ 2021-04-15  1:49     ` Timothy McDaniel
  26 siblings, 0 replies; 174+ messages in thread
From: Timothy McDaniel @ 2021-04-15  1:49 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas

The new devarg names and their default values
are listed below. The defaults have not changed, and
none of these parameters are accessed in the fast path.

poll_interval=1000
sw_credit_quanta=32
default_depth_thresh=256
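
For illustration only (the combined form and the values are hypothetical;
the devarg names come from the patch below), these are passed on the vdev
command line like the existing devargs:

    --vdev=dlb_event,poll_interval=1000,sw_credit_quanta=32,default_depth_thresh=256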

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 config/rte_config.h            |   3 -
 drivers/event/dlb/dlb2.c       | 109 +++++++++++++++++++++++++++++++--
 drivers/event/dlb/dlb2_priv.h  |  14 +++++
 drivers/event/dlb/pf/dlb2_pf.c |   5 +-
 4 files changed, 121 insertions(+), 10 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 1aa852cd7..836aca3c2 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -140,9 +140,6 @@
 #define RTE_LIBRTE_QEDE_FW ""
 
 /* DLB defines */
-#define RTE_LIBRTE_PMD_DLB_POLL_INTERVAL 1000
 #undef RTE_LIBRTE_PMD_DLB_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA 32
-#define RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/drivers/event/dlb/dlb2.c b/drivers/event/dlb/dlb2.c
index e5def9357..818b1c367 100644
--- a/drivers/event/dlb/dlb2.c
+++ b/drivers/event/dlb/dlb2.c
@@ -315,6 +315,66 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
+static int
+set_poll_interval(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *poll_interval = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(poll_interval, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_sw_credit_quanta(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *sw_credit_quanta = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(sw_credit_quanta, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_default_depth_thresh(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *default_depth_thresh = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(default_depth_thresh, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -923,9 +983,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
 		ev_queue->depth_threshold =
-			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1617,7 +1677,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = dlb2->sw_credit_quanta;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1712,9 +1772,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
 		ev_queue->depth_threshold =
-			RTE_LIBRTE_PMD_DLB_DEFAULT_DEPTH_THRESH;
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -3065,7 +3125,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB_POLL_INTERVAL;
+		uint64_t poll_interval = dlb2->poll_interval;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
@@ -4020,6 +4080,9 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	dlb2->max_num_events_override = dlb2_args->max_num_events;
 	dlb2->num_dir_credits_override = dlb2_args->num_dir_credits_override;
 	dlb2->qm_instance.cos_id = dlb2_args->cos_id;
+	dlb2->poll_interval = dlb2_args->poll_interval;
+	dlb2->sw_credit_quanta = dlb2_args->sw_credit_quanta;
+	dlb2->default_depth_thresh = dlb2_args->default_depth_thresh;
 
 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
@@ -4120,6 +4183,9 @@ dlb2_parse_params(const char *params,
 					     DEV_ID_ARG,
 					     DLB2_QID_DEPTH_THRESH_ARG,
 					     DLB2_COS_ARG,
+					     DLB2_POLL_INTERVAL_ARG,
+					     DLB2_SW_CREDIT_QUANTA_ARG,
+					     DLB2_DEPTH_THRESH_ARG,
 					     NULL };
 
 	if (params != NULL && params[0] != '\0') {
@@ -4202,6 +4268,37 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
+			ret = rte_kvargs_process(kvlist, DLB2_POLL_INTERVAL_ARG,
+						 set_poll_interval,
+						 &dlb2_args->poll_interval);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing poll interval parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist,
+						 DLB2_SW_CREDIT_QUANTA_ARG,
+						 set_sw_credit_quanta,
+						 &dlb2_args->sw_credit_quanta);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing sw credit quanta parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist, DLB2_DEPTH_THRESH_ARG,
+					set_default_depth_thresh,
+					&dlb2_args->default_depth_thresh);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing set depth thresh parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
 			rte_kvargs_free(kvlist);
 		}
 	}
diff --git a/drivers/event/dlb/dlb2_priv.h b/drivers/event/dlb/dlb2_priv.h
index f11e08fca..3c540a264 100644
--- a/drivers/event/dlb/dlb2_priv.h
+++ b/drivers/event/dlb/dlb2_priv.h
@@ -23,6 +23,11 @@
 /* common name for all dlb devs (dlb v2.0, dlb v2.5 ...) */
 #define EVDEV_DLB2_NAME_PMD dlb_event
 
+/* Default values for command line devargs */
+#define DLB2_POLL_INTERVAL_DEFAULT 1000
+#define DLB2_SW_CREDIT_QUANTA_DEFAULT 32
+#define DLB2_DEPTH_THRESH_DEFAULT 256
+
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
 #define DLB2_MAX_NUM_EVENTS "max_num_events"
@@ -31,6 +36,9 @@
 #define DLB2_DEFER_SCHED_ARG "defer_sched"
 #define DLB2_QID_DEPTH_THRESH_ARG "qid_depth_thresh"
 #define DLB2_COS_ARG "cos"
+#define DLB2_POLL_INTERVAL_ARG "poll_interval"
+#define DLB2_SW_CREDIT_QUANTA_ARG "sw_credit_quanta"
+#define DLB2_DEPTH_THRESH_ARG "default_depth_thresh"
 
 /* Begin HW related defines and structs */
 
@@ -571,6 +579,9 @@ struct dlb2_eventdev {
 	bool global_dequeue_wait; /* Not using per dequeue wait if true */
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 	uint8_t revision;
 	uint8_t version;
 	bool configured;
@@ -604,6 +615,9 @@ struct dlb2_devargs {
 	int defer_sched;
 	struct dlb2_qid_depth_thresholds qid_depth_thresholds;
 	enum dlb2_cos cos_id;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 };
 
 /* End Eventdev related defines and structs */
diff --git a/drivers/event/dlb/pf/dlb2_pf.c b/drivers/event/dlb/pf/dlb2_pf.c
index f57dc1584..e9da89d65 100644
--- a/drivers/event/dlb/pf/dlb2_pf.c
+++ b/drivers/event/dlb/pf/dlb2_pf.c
@@ -615,7 +615,10 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
 		.num_dir_credits_override = -1,
 		.qid_depth_thresholds = { {0} },
-		.cos_id = DLB2_COS_DEFAULT
+		.cos_id = DLB2_COS_DEFAULT,
+		.poll_interval = DLB2_POLL_INTERVAL_DEFAULT,
+		.sw_credit_quanta = DLB2_SW_CREDIT_QUANTA_DEFAULT,
+		.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT
 	};
 	struct dlb2_eventdev *dlb2;
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-14 20:33         ` Thomas Monjalon
@ 2021-04-15  3:22           ` McDaniel, Timothy
  2021-04-15  5:47           ` Jerin Jacob
  1 sibling, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-15  3:22 UTC (permalink / raw)
  To: Thomas Monjalon, Jerin Jacob
  Cc: Jerin Jacob, dpdk-dev, Carrillo, Erik G, Gage Eads, Van Haaren,
	Harry, david.marchand



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, April 14, 2021 3:33 PM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>; Jerin Jacob
> <jerinj@marvell.com>
> Cc: Jerin Jacob <jerinjacobk@gmail.com>; dpdk-dev <dev@dpdk.org>; Carrillo,
> Erik G <erik.g.carrillo@intel.com>; Gage Eads <gage.eads@intel.com>; Van
> Haaren, Harry <harry.van.haaren@intel.com>; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from
> device name
> 
> 14/04/2021 21:44, Jerin Jacob:
> > On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > > Updated eventdev device name to be dlb_event instead of
> > > dlb2_event.  The new name will be used for all versions
> > > of the DLB hardware. This change required corresponding changes
> > > to the directory name that contains the PMD, as well
> > > as the documentation files, build infrastructure, and PMD
> > > specific APIs.
> > >
> > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > --- a/doc/guides/rel_notes/release_21_05.rst
> > > +++ b/doc/guides/rel_notes/release_21_05.rst
> > > +* **Updated DLB driver.**
> > > +
> > > +  * Added support for v2.5 hardware.
> > > +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
> >
> >  @Thomas Monjalon , Do we need to update the "Removed Items" section?
> 
> I did not follow the exact change.
> Is it changing the driver library name?
> If yes, it is one more ABI issue.
> If not, I don't see what to update in the release notes.
> 

I'm not sure if this is related, but my latest patch series fails to build
because of a problem with the docs build. The odd thing is that I updated the
name in doxy-api-index.md and changed the file name to rte_pmd_dlb.h, so I
don't know where it is picking up what I assume is the previous version of the
doxy-api-index.md file while building the last patch in the series. I made
this name change in the same commit that changes over from dlb2 to dlb, which
is patch 26 in this series. The build fails on patch 27, and at that point the
text string "rte_pmd_dlb2" is not found anywhere in the repo that I can find.

/root/UB2004-64_K5.8.0_GCC10.2.0/x86_64-native-linuxapp-doc/d9404773e5eb425882b0b26bde1e7467/dpdk/doc/api/generate_doxygen.sh doc/api/doxy-api.conf doc/api/html /root/UB2004-64_K5.8.0_GCC10.2.0/x86_64-native-linuxapp-doc/d9404773e5eb425882b0b26bde1e7467/dpdk/doc/api/doxy-html-custom.sh
warning: tag INPUT: input source '/root/UB2004-64_K5.8.0_GCC10.2.0/x86_64-native-linuxapp-doc/d9404773e5eb425882b0b26bde1e7467/dpdk/drivers/event/dlb2' does not exist
error: source /root/UB2004-64_K5.8.0_GCC10.2.0/x86_64-native-linuxapp-doc/d9404773e5eb425882b0b26bde1e7467/dpdk/drivers/event/dlb2 is not a readable file or directory... skipping.
/root/UB2004-64_K5.8.0_GCC10.2.0/x86_64-native-linuxapp-doc/d9404773e5eb425882b0b26bde1e7467/dpdk/doc/api/doxy-api-index.md:56: error: unable to resolve reference to 'rte_pmd_dlb2.h' for \ref command (warning treated as error, aborting now)
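
Judging from the warnings above, the generated doxy-api.conf is still being
fed drivers/event/dlb2 as an INPUT directory, so the template it is generated
from likely still carries the old path. A minimal sketch of the kind of change
that would be needed (the file path and surrounding context are an assumption
based only on the log, not copied from this series):

    --- a/doc/api/doxy-api.conf.in
    +++ b/doc/api/doxy-api.conf.in
    -          @TOPDIR@/drivers/event/dlb2 \
    +          @TOPDIR@/drivers/event/dlb \

along with the matching @ref entry in doc/api/doxy-api-index.md pointing at
rte_pmd_dlb.h instead of rte_pmd_dlb2.h.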


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-14 20:33         ` Thomas Monjalon
  2021-04-15  3:22           ` McDaniel, Timothy
@ 2021-04-15  5:47           ` Jerin Jacob
  2021-04-15  7:48             ` Thomas Monjalon
  1 sibling, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-15  5:47 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Timothy McDaniel, Jerin Jacob, dpdk-dev, Erik Gabriel Carrillo,
	Gage Eads, Van Haaren, Harry, David Marchand

On Thu, Apr 15, 2021 at 2:03 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 14/04/2021 21:44, Jerin Jacob:
> > On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> > <timothy.mcdaniel@intel.com> wrote:
> > >
> > > Updated eventdev device name to be dlb_event instead of
> > > dlb2_event.  The new name will be used for all versions
> > > of the DLB hardware. This change required corresponding changes
> > > to the directory name that contains the PMD, as well
> > > as the documentation files, build infrastructure, and PMD
> > > specific APIs.
> > >
> > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > --- a/doc/guides/rel_notes/release_21_05.rst
> > > +++ b/doc/guides/rel_notes/release_21_05.rst
> > > +* **Updated DLB driver.**
> > > +
> > > +  * Added support for v2.5 hardware.
> > > +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
> >
> >  @Thomas Monjalon , Do we need to update the "Removed Items" section?
>
> I did not follow the exact change.
> Is it changing the driver library name?
> If yes, it is one more ABI issue.

Yes, it is. It needs to be fixed in the ABI ignore file.

My original question was: since we are renaming dlb2 -> dlb, the
drivers/event/dlb2 directory will not be present after this change.
Do we need to update the "Removed Items" section in the release notes,
saying the dlb2 driver was removed?


> If not, I don't see what to update in the release notes.
>
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-15  5:47           ` Jerin Jacob
@ 2021-04-15  7:48             ` Thomas Monjalon
  2021-04-15  7:56               ` Jerin Jacob
  0 siblings, 1 reply; 174+ messages in thread
From: Thomas Monjalon @ 2021-04-15  7:48 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Timothy McDaniel, Jerin Jacob, dpdk-dev, Erik Gabriel Carrillo,
	Gage Eads, Van Haaren, Harry, David Marchand

15/04/2021 07:47, Jerin Jacob:
> On Thu, Apr 15, 2021 at 2:03 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > 14/04/2021 21:44, Jerin Jacob:
> > > On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> > > <timothy.mcdaniel@intel.com> wrote:
> > > >
> > > > Updated eventdev device name to be dlb_event instead of
> > > > dlb2_event.  The new name will be used for all versions
> > > > of the DLB hardware. This change required corresponding changes
> > > > to the directory name that contains the PMD, as well
> > > > as the documentation files, build infrastructure, and PMD
> > > > specific APIs.
> > > >
> > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > --- a/doc/guides/rel_notes/release_21_05.rst
> > > > +++ b/doc/guides/rel_notes/release_21_05.rst
> > > > +* **Updated DLB driver.**
> > > > +
> > > > +  * Added support for v2.5 hardware.
> > > > +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
> > >
> > >  @Thomas Monjalon , Do we need to update the "Removed Items" section?
> >
> > I did not follow the exact change.
> > Is it changing the driver library name?
> > If yes, it is one more ABI issue.
> 
> Yes, it is. It needs to be fixed in the ABI ignore file.
> 
> My original question was: since we are renaming dlb2 -> dlb, the
> drivers/event/dlb2 directory will not be present after this change.
> Do we need to update the "Removed Items" section in the release notes,
> saying the dlb2 driver was removed?

Yes, we need to, but it should have been discussed in the techboard first.



^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name
  2021-04-15  7:48             ` Thomas Monjalon
@ 2021-04-15  7:56               ` Jerin Jacob
  0 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-04-15  7:56 UTC (permalink / raw)
  To: Thomas Monjalon, techboard
  Cc: Timothy McDaniel, Jerin Jacob, dpdk-dev, Erik Gabriel Carrillo,
	Gage Eads, Van Haaren, Harry, David Marchand

On Thu, Apr 15, 2021 at 1:18 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 15/04/2021 07:47, Jerin Jacob:
> > On Thu, Apr 15, 2021 at 2:03 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > 14/04/2021 21:44, Jerin Jacob:
> > > > On Wed, Apr 14, 2021 at 1:49 AM Timothy McDaniel
> > > > <timothy.mcdaniel@intel.com> wrote:
> > > > >
> > > > > Updated eventdev device name to be dlb_event instead of
> > > > > dlb2_event.  The new name will be used for all versions
> > > > > of the DLB hardware. This change required corresponding changes
> > > > > to the directory name that contains the PMD, as well
> > > > > as the documentation files, build infrastructure, and PMD
> > > > > specific APIs.
> > > > >
> > > > > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> > > > > --- a/doc/guides/rel_notes/release_21_05.rst
> > > > > +++ b/doc/guides/rel_notes/release_21_05.rst
> > > > > +* **Updated DLB driver.**
> > > > > +
> > > > > +  * Added support for v2.5 hardware.
> > > > > +  * Renamed DLB2 to DLB, which supports all HW versions v2.0 and v2.5.
> > > >
> > > >  @Thomas Monjalon , Do we need to update the "Removed Items" section?
> > >
> > > I did not follow the exact change.
> > > Is it changing the driver library name?
> > > If yes, it is one more ABI issue.
> >
> > Yes, it is. It needs to be fixed in the ABI ignore file.
> >
> > My original question was: since we are renaming dlb2 -> dlb, the
> > drivers/event/dlb2 directory will not be present after this change.
> > Do we need to update the "Removed Items" section in the release notes,
> > saying the dlb2 driver was removed?
>
> Yes we need, but it should have been discussed in techboard first.

+ Techboard

Cc: @McDaniel, Timothy

OK. I will hold this patch and wait for the techboard's approval.


>
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe
  2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe Timothy McDaniel
@ 2021-04-29  7:09       ` Jerin Jacob
  2021-04-29 13:46         ` McDaniel, Timothy
  0 siblings, 1 reply; 174+ messages in thread
From: Jerin Jacob @ 2021-04-29  7:09 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Van Haaren, Harry, Jerin Jacob,
	Thomas Monjalon

On Thu, Apr 15, 2021 at 7:20 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This commit adds dlb v2.5 probe support, and updates
> parameter parsing.
>
> The dlb v2.5 device differs from dlb v2, in that the
> number of resources (ports, queues, ...) is different,
> so macros have been added to take the device version
> into account.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>


Marked as "Change requested" in the patchwork based on
https://mails.dpdk.org/archives/dev/2021-April/207696.html update.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe
  2021-04-29  7:09       ` Jerin Jacob
@ 2021-04-29 13:46         ` McDaniel, Timothy
  0 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-04-29 13:46 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Van Haaren, Harry, Jerin Jacob,
	Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, April 29, 2021 2:10 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Van
> Haaren, Harry <harry.van.haaren@intel.com>; Jerin Jacob
> <jerinj@marvell.com>; Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe
> 
> On Thu, Apr 15, 2021 at 7:20 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > This commit adds dlb v2.5 probe support, and updates
> > parameter parsing.
> >
> > The dlb v2.5 device differs from dlb v2, in that the
> > number of resources (ports, queues, ...) is different,
> > so macros have been added to take the device version
> > into account.
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> 
> 
> Marked as "Change requested" in the patchwork based on
> https://mails.dpdk.org/archives/dev/2021-April/207696.html update.

I will restore the old "dlb2" name and resubmit.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5
  2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
                     ` (3 preceding siblings ...)
  2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
@ 2021-05-01 19:03   ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 01/26] event/dlb2: minor code cleanup McDaniel, Timothy
                       ` (26 more replies)
  4 siblings, 27 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

This patch series adds support for DLB v2.5 to
the current DLB V2.0 PMD. The resulting PMD supports
both hardware versions.

The main differences between the DLB v2.5 and v2.0 hardware
are:
- Number of queues/ports
- DLB v2.5 uses a combined credit pool, whereas DLB v2.0
  splits credits into 2 pools, a directed credit pool and a
  load balanced credit pool.
- Different register maps, with different bit names and offsets

In order to support both hardware versions with the same PMD,
and avoid code duplication, the file dlb2_resource.c required a
complete rewrite. This required some creative staging of the changes
in order to keep the individual patches relatively small, while
also meeting the requirement that all individual patches in the set
compile cleanly.

To accomplish this, a few temporary files are used:

dlb2_hw_types_new.h
dlb2_resources_new.h
dlb2_resources_new.c

As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
low level logic, the corresponding old code is removed from
dlb2_resource.c, thus allowing both the original and new code to
continue to compile and link cleanly. Once all of the code has been
migrated to the new model, the old versions of the files are removed,
and the new versions are renamed, effectively replacing the old original
files.

As you review the code, you can ignore the code deletions from
dlb2_resource.c, as that file continues to shrink as the new
corresponding logic is added to dlb2_resource_new.c. 

Changes since V4:
1) restore original PMD name (dlb2)
2) restore original PMD source location (drivers/event/dlb2)
3) restore documentation, such that it references dlb2_event,
   instead of dlb_event

Changes since V3:
1) Moved minor cleanup to its own patch. This included
        a) remove FPGA references
        b) eliminate duplicate macros/defines in hw_types
        c) don't include dlb2_mbox.h
        d) delete unused defines.macros (SMON, INT, ...)
2) Changed DLB V2.x and V2.x to simply v2.x, where v is lower case
3) Updated 20.11 release notes to remove reference to dlb2 doc, since
   it is now named dlb.rst
4) Updated commit message/header text, as requested

Changes since V2:
1) fix commit headers
2) fix commit message repeated words
3) remove FPGA reference
4) split out new v2.5 register definitions into separate patch
5) fixed documentation to use DLB and dlb_event exclusively,
   instead of the old names such as dlb1_event, dlb2_event,
   DLB2, ... Final doc updates are done in the patch that performs
   the device rename from DLB2 to simply DLB
6) use component event/dlb at commit which changes device name and
   all subsequent commits
7) Move all DLB constants out of config/rte_config.h except QUELL_STATS,
   which is used in the fastpath. Exposed these as devarg command line
   parameters
8) Removed "TEMPORARY" comment leftover in dlb2_osdep.h
9) squashed 20-21 and 22-23 since they were logically the same as 19-20,
   which was requested to be squashed
10) delete old dlb2.rst - dlb.rst has been updated for v2.0 and v2.1

Changes since V1:
1) Simplified subject text for all patches
2) correct typos/spelling
3) remove FPGA references
4) remove stale sysconf() references
5) fixed patches that had compilation issues
6) updated release notes
7) renamed dlb device from dlb2_event to dlb_event
8) moved dlb2 directory to dlb, to match the name change
9) fixed other cases where "dlb2" was being used externally

Timothy McDaniel (26):
  event/dlb2: minor code cleanup
  event/dlb2: add v2.5 probe
  event/dlb2: add v2.5 HW register definitions
  event/dlb2: add v2.5 HW init
  event/dlb2: add v2.5 get resources
  event/dlb2: add v2.5 create sched domain
  event/dlb2: add v2.5 domain reset
  event/dlb2: add v2.5 create ldb queue
  event/dlb2: add v2.5 create ldb port
  event/dlb2: add v2.5 create dir port
  event/dlb2: add v2.5 create dir queue
  event/dlb2: add v2.5 map qid
  event/dlb2: add v2.5 unmap queue
  event/dlb2: add v2.5 start domain
  event/dlb2: add v2.5 credit scheme
  event/dlb2: add v2.5 queue depth functions
  event/dlb2: add v2.5 finish map/unmap
  event/dlb2: add v2.5 sparse cq mode
  event/dlb2: add v2.5 sequence number management
  event/dlb2: use new implementation of resource header
  event/dlb2: use new implementation of resource file
  event/dlb2: use new implementation of HW types header
  event/dlb2: use new combined register map
  event/dlb2: update xstats for v2.5
  event/dlb2: move rte config defines to runtime devargs
  doc/dlb2: update documentation for v2.5

 config/rte_config.h                        |    4 -
 doc/guides/eventdevs/dlb2.rst              |  153 +-
 drivers/event/dlb2/dlb2.c                  |  550 +-
 drivers/event/dlb2/dlb2_priv.h             |  170 +-
 drivers/event/dlb2/dlb2_user.h             |   27 +-
 drivers/event/dlb2/dlb2_xstats.c           |   70 +-
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  106 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h     |  596 --
 drivers/event/dlb2/pf/base/dlb2_osdep.h    |    2 +
 drivers/event/dlb2/pf/base/dlb2_regs.h     | 5955 +++++++++++++-------
 drivers/event/dlb2/pf/base/dlb2_resource.c | 3278 ++++++-----
 drivers/event/dlb2/pf/base/dlb2_resource.h |   28 +-
 drivers/event/dlb2/pf/dlb2_main.c          |   37 +-
 drivers/event/dlb2/pf/dlb2_pf.c            |   67 +-
 14 files changed, 6445 insertions(+), 4598 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h

-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 01/26] event/dlb2: minor code cleanup
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 02/26] event/dlb2: add v2.5 probe McDaniel, Timothy
                       ` (25 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

1) Remove references to FPGA.
2) Do not include dlb2_mbox.h, it is not needed.
3) Remove duplicate macros/defines that were
   present in both dlb2_priv.h and dlb2_hw_types.h.
   Update dlb2_resource.c to include dlb2_priv.h
   so that it picks up the macros/defines that
   have now been consolidated.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  46 +-
 drivers/event/dlb2/pf/base/dlb2_mbox.h     | 596 ---------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |   1 -
 3 files changed, 2 insertions(+), 641 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 1d99f1e01..c7cd41f8b 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -5,55 +5,25 @@
 #ifndef __DLB2_HW_TYPES_H
 #define __DLB2_HW_TYPES_H
 
+#include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_DOMAINS			32
-#define DLB2_MAX_NUM_LDB_QUEUES			32 /* LDB == load-balanced */
-#define DLB2_MAX_NUM_DIR_QUEUES			64 /* DIR == directed */
-#define DLB2_MAX_NUM_LDB_PORTS			64
-#define DLB2_MAX_NUM_DIR_PORTS			64
-#define DLB2_MAX_NUM_LDB_CREDITS		(8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS		(2 * 1024)
-#define DLB2_MAX_NUM_HIST_LIST_ENTRIES		2048
 #define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ		8
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_QID_PRIORITIES			8
+
 #define DLB2_NUM_ARB_WEIGHTS			8
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-#ifdef FPGA
-#define DLB2_HZ					2000000
-#else
-#define DLB2_HZ					800000000
-#endif
-
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
-/* Interrupt related macros */
-#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
-#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
-#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
-	(DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
-#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
-	DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
-#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
-	DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
-
-/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
-#define DLB2_INT_NON_CQ 0
-
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
 
@@ -65,18 +35,6 @@
 #define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
 #define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
 
-#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
-#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
-#define DLB2_VF_BASE_CQ_VECTOR_ID	     0
-#define DLB2_VF_LAST_CQ_VECTOR_ID	     30
-#define DLB2_VF_MBOX_VECTOR_ID		     31
-#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
-	(DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
-	 DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
-
-#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
-					     DLB2_MAX_NUM_DIR_PORTS + 1)
-
 /*
  * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
  * the PF driver.
diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
deleted file mode 100644
index ce462c089..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_mbox.h
+++ /dev/null
@@ -1,596 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_BASE_DLB2_MBOX_H
-#define __DLB2_BASE_DLB2_MBOX_H
-
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-
-#define DLB2_MBOX_INTERFACE_VERSION 1
-
-/*
- * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
- * to send requests of its own (e.g. notifying a VF of an impending FLR).
- * To avoid communication race conditions, e.g. the PF sends a response and then
- * sends a request before the VF reads the response, the PF->VF mailbox is
- * divided into two sections:
- * - Bytes 0-47: PF responses
- * - Bytes 48-63: PF requests
- *
- * Partitioning the PF->VF mailbox allows responses and requests to occupy the
- * mailbox simultaneously.
- */
-#define DLB2_PF2VF_RESP_BYTES	  48
-#define DLB2_PF2VF_RESP_BASE	  0
-#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
-
-#define DLB2_PF2VF_REQ_BYTES	  16
-#define DLB2_PF2VF_REQ_BASE	  (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
-#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
-
-/*
- * Similarly, the VF->PF mailbox is divided into two sections:
- * - Bytes 0-239: VF requests
- * -- (Bytes 0-3 are unused due to a hardware errata)
- * - Bytes 240-255: VF responses
- */
-#define DLB2_VF2PF_REQ_BYTES	 236
-#define DLB2_VF2PF_REQ_BASE	 4
-#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
-
-#define DLB2_VF2PF_RESP_BYTES	  16
-#define DLB2_VF2PF_RESP_BASE	  (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
-#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
-
-/* VF-initiated commands */
-enum dlb2_mbox_cmd_type {
-	DLB2_MBOX_CMD_REGISTER,
-	DLB2_MBOX_CMD_UNREGISTER,
-	DLB2_MBOX_CMD_GET_NUM_RESOURCES,
-	DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
-	DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
-	DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
-	DLB2_MBOX_CMD_CREATE_LDB_PORT,
-	DLB2_MBOX_CMD_CREATE_DIR_PORT,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT,
-	DLB2_MBOX_CMD_DISABLE_LDB_PORT,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT,
-	DLB2_MBOX_CMD_DISABLE_DIR_PORT,
-	DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
-	DLB2_MBOX_CMD_MAP_QID,
-	DLB2_MBOX_CMD_UNMAP_QID,
-	DLB2_MBOX_CMD_START_DOMAIN,
-	DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
-	DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
-	DLB2_MBOX_CMD_ARM_CQ_INTR,
-	DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
-	DLB2_MBOX_CMD_GET_SN_ALLOCATION,
-	DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
-	DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
-	DLB2_MBOX_CMD_GET_COS_BW,
-	DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
-	DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
-
-	/* NUM_QE_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_CMD_TYPES,
-};
-
-static const char dlb2_mbox_cmd_type_strings[][128] = {
-	"DLB2_MBOX_CMD_REGISTER",
-	"DLB2_MBOX_CMD_UNREGISTER",
-	"DLB2_MBOX_CMD_GET_NUM_RESOURCES",
-	"DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
-	"DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
-	"DLB2_MBOX_CMD_CREATE_LDB_PORT",
-	"DLB2_MBOX_CMD_CREATE_DIR_PORT",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_DISABLE_LDB_PORT",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_DISABLE_DIR_PORT",
-	"DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
-	"DLB2_MBOX_CMD_MAP_QID",
-	"DLB2_MBOX_CMD_UNMAP_QID",
-	"DLB2_MBOX_CMD_START_DOMAIN",
-	"DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
-	"DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
-	"DLB2_MBOX_CMD_ARM_CQ_INTR",
-	"DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
-	"DLB2_MBOX_CMD_GET_SN_ALLOCATION",
-	"DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
-	"DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
-	"DLB2_MBOX_CMD_GET_COS_BW",
-	"DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
-	"DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
-};
-
-/* PF-initiated commands */
-enum dlb2_mbox_vf_cmd_type {
-	DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
-	DLB2_MBOX_VF_CMD_NOTIFICATION,
-	DLB2_MBOX_VF_CMD_IN_USE,
-
-	/* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
-	NUM_DLB2_MBOX_VF_CMD_TYPES,
-};
-
-static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
-	"DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
-	"DLB2_MBOX_VF_CMD_NOTIFICATION",
-	"DLB2_MBOX_VF_CMD_IN_USE",
-};
-
-#define DLB2_MBOX_CMD_TYPE(hdr) \
-	(((struct dlb2_mbox_req_hdr *)hdr)->type)
-#define DLB2_MBOX_CMD_STRING(hdr) \
-	dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
-
-enum dlb2_mbox_status_type {
-	DLB2_MBOX_ST_SUCCESS,
-	DLB2_MBOX_ST_INVALID_CMD_TYPE,
-	DLB2_MBOX_ST_VERSION_MISMATCH,
-	DLB2_MBOX_ST_INVALID_OWNER_VF,
-};
-
-static const char dlb2_mbox_status_type_strings[][128] = {
-	"DLB2_MBOX_ST_SUCCESS",
-	"DLB2_MBOX_ST_INVALID_CMD_TYPE",
-	"DLB2_MBOX_ST_VERSION_MISMATCH",
-	"DLB2_MBOX_ST_INVALID_OWNER_VF",
-};
-
-#define DLB2_MBOX_ST_TYPE(hdr) \
-	(((struct dlb2_mbox_resp_hdr *)hdr)->status)
-#define DLB2_MBOX_ST_STRING(hdr) \
-	dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
-
-/* This structure is always the first field in a request structure */
-struct dlb2_mbox_req_hdr {
-	u32 type;
-};
-
-/* This structure is always the first field in a response structure */
-struct dlb2_mbox_resp_hdr {
-	u32 status;
-};
-
-struct dlb2_mbox_register_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 min_interface_version;
-	u16 max_interface_version;
-};
-
-struct dlb2_mbox_register_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 interface_version;
-	u8 pf_id;
-	u8 vf_id;
-	u8 is_auxiliary_vf;
-	u8 primary_vf_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_unregister_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_num_resources_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u16 num_sched_domains;
-	u16 num_ldb_queues;
-	u16 num_ldb_ports;
-	u16 num_cos_ldb_ports[4];
-	u16 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 max_contiguous_hist_list_entries;
-	u16 num_ldb_credits;
-	u16 num_dir_credits;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 num_ldb_queues;
-	u32 num_ldb_ports;
-	u32 num_cos_ldb_ports[4];
-	u32 num_dir_ports;
-	u32 num_atomic_inflights;
-	u32 num_hist_list_entries;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
-	u8 cos_strict;
-	u8 padding0[3];
-	u32 padding1;
-};
-
-struct dlb2_mbox_create_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 id;
-};
-
-struct dlb2_mbox_reset_sched_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 num_sequence_numbers;
-	u32 num_qid_inflights;
-	u32 num_atomic_inflights;
-	u32 lock_id_comp_level;
-	u32 depth_threshold;
-	u32 padding;
-};
-
-struct dlb2_mbox_create_ldb_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 depth_threshold;
-};
-
-struct dlb2_mbox_create_dir_queue_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u16 cq_depth;
-	u16 cq_history_list_size;
-	u8 cos_id;
-	u8 cos_strict;
-	u16 padding1;
-	u64 cq_base_address;
-};
-
-struct dlb2_mbox_create_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u64 cq_base_address;
-	u16 cq_depth;
-	u16 padding0;
-	s32 queue_id;
-};
-
-struct dlb2_mbox_create_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_ldb_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_disable_dir_port_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	s32 owned;
-};
-
-struct dlb2_mbox_map_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-	u32 priority;
-	u32 padding0;
-};
-
-struct dlb2_mbox_map_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 id;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 qid;
-};
-
-struct dlb2_mbox_unmap_qid_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_start_domain_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-};
-
-struct dlb2_mbox_start_domain_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u16 port_id;
-	u16 thresh;
-	u16 vector;
-	u16 owner_vf;
-	u16 reserved[2];
-};
-
-struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 is_ldb;
-};
-
-struct dlb2_mbox_arm_cq_intr_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 padding0;
-};
-
-/*
- * The alert_id and aux_alert_data follows the format of the alerts defined in
- * dlb2_types.h. The alert id contains an enum dlb2_domain_alert_id value, and
- * the aux_alert_data value varies depending on the alert.
- */
-struct dlb2_mbox_vf_alert_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 alert_id;
-	u32 aux_alert_data;
-};
-
-enum dlb2_mbox_vf_notification_type {
-	DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
-	DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
-
-	/* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
-	NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
-};
-
-struct dlb2_mbox_vf_notification_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 notification;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_vf_in_use_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 in_use;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_allocation_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 queue_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 depth;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 domain_id;
-	u32 port_id;
-	u32 padding;
-};
-
-struct dlb2_mbox_pending_port_unmaps_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 num;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 cos_id;
-};
-
-struct dlb2_mbox_get_cos_bw_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 group_id;
-};
-
-struct dlb2_mbox_get_sn_occupancy_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 num;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_req {
-	struct dlb2_mbox_req_hdr hdr;
-	u32 padding;
-};
-
-struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
-	struct dlb2_mbox_resp_hdr hdr;
-	u32 error_code;
-	u32 status;
-	u32 mode;
-};
-
-#endif /* __DLB2_BASE_DLB2_MBOX_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ae5ef2fc3..b57157fdc 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -5,7 +5,6 @@
 #include "dlb2_user.h"
 
 #include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 02/26] event/dlb2: add v2.5 probe
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 01/26] event/dlb2: minor code cleanup McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 03/26] event/dlb2: add v2.5 HW register definitions McDaniel, Timothy
                       ` (24 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

This commit adds DLB v2.5 probe support and updates
parameter parsing.

The DLB v2.5 device differs from DLB v2.0 in the number of
resources it provides (ports, queues, ...), so macros have
been added that take the device version into account.
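
As a rough standalone sketch (illustrative only, not part of the
patch), the version-keyed macro pattern looks like the following;
the DLB2_* names and values mirror those added to dlb2_priv.h, and
the main() harness is purely hypothetical:

/*
 * Illustrative sketch only -- not part of this patch.  The DLB2_*
 * macro names and values mirror those added to dlb2_priv.h; the
 * main() harness is purely hypothetical.
 */
#include <stdio.h>

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

#define DLB2_MAX_NUM_DIR_QUEUES_V2   64
#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96

/* Resource counts are selected from the probed device version. */
#define DLB2_MAX_NUM_DIR_QUEUES(ver) \
	((ver) == DLB2_HW_V2 ? DLB2_MAX_NUM_DIR_QUEUES_V2 : \
			       DLB2_MAX_NUM_DIR_QUEUES_V2_5)

int main(void)
{
	printf("v2.0 dir queues: %d\n", DLB2_MAX_NUM_DIR_QUEUES(DLB2_HW_V2));
	printf("v2.5 dir queues: %d\n", DLB2_MAX_NUM_DIR_QUEUES(DLB2_HW_V2_5));
	return 0;
}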

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                  |  99 +++++++++++---
 drivers/event/dlb2/dlb2_priv.h             | 151 +++++++++++++++------
 drivers/event/dlb2/dlb2_xstats.c           |  37 ++---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |  28 ++--
 drivers/event/dlb2/pf/base/dlb2_resource.c |  47 ++++---
 drivers/event/dlb2/pf/dlb2_pf.c            |  62 ++++++++-
 6 files changed, 319 insertions(+), 105 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index fb5ff012a..7f5b9141b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -59,7 +59,8 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
-	.max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
+	.max_single_link_event_port_queue_pairs =
+		DLB2_MAX_NUM_DIR_PORTS(DLB2_HW_V2),
 	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
 			  RTE_EVENT_DEV_CAP_EVENT_QOS |
 			  RTE_EVENT_DEV_CAP_BURST_MODE |
@@ -69,7 +70,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 };
 
 struct process_local_port_data
-dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
+dlb2_port[DLB2_MAX_NUM_PORTS_ALL][DLB2_NUM_PORT_TYPES];
 
 static void
 dlb2_free_qe_mem(struct dlb2_port *qm_port)
@@ -97,7 +98,7 @@ dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
 {
 	int q;
 
-	for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
+	for (q = 0; q < DLB2_MAX_NUM_QUEUES(dlb2->version); q++) {
 		if (qid_depth_thresholds[q] != 0)
 			dlb2->ev_queues[q].depth_threshold =
 				qid_depth_thresholds[q];
@@ -247,9 +248,9 @@ set_num_dir_credits(const char *key __rte_unused,
 		return ret;
 
 	if (*num_dir_credits < 0 ||
-	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
+	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
 		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
-			     DLB2_MAX_NUM_DIR_CREDITS);
+			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
 
@@ -306,7 +307,6 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
-
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -327,7 +327,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	 */
 	if (sscanf(value, "all:%d", &thresh) == 1) {
 		first = 0;
-		last = DLB2_MAX_NUM_QUEUES - 1;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2) - 1;
 	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
 		/* we have everything we need */
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
@@ -337,7 +337,56 @@ set_qid_depth_thresh(const char *key __rte_unused,
 		return -EINVAL;
 	}
 
-	if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		return -EINVAL;
+	}
+
+	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
+		return -EINVAL;
+	}
+
+	for (i = first; i <= last; i++)
+		qid_thresh->val[i] = thresh; /* indexed by qid */
+
+	return 0;
+}
+
+static int
+set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
+	int first, last, thresh, i;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	/* command line override may take one of the following 3 forms:
+	 * qid_depth_thresh=all:<threshold_value> ... all queues
+	 * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
+	 * qid_depth_thresh=qid:<threshold_value> ... just one queue
+	 */
+	if (sscanf(value, "all:%d", &thresh) == 1) {
+		first = 0;
+		last = DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) - 1;
+	} else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
+		/* we have everything we need */
+	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
+		last = first;
+	} else {
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		return -EINVAL;
+	}
+
+	if (first > last || first < 0 ||
+		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
 		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
 		return -EINVAL;
 	}
@@ -521,7 +570,7 @@ dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
 	for (i = 0; i < dlb2->num_queues; i++)
 		dlb2->ev_queues[i].qm_queue.config_state = config_state;
 
-	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
+	for (i = 0; i < DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5); i++)
 		dlb2->ev_queues[i].setup_done = false;
 
 	dlb2->num_ports = 0;
@@ -1453,7 +1502,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 
 	dlb2 = dlb2_pmd_priv(dev);
 
-	if (ev_port_id >= DLB2_MAX_NUM_PORTS)
+	if (ev_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 		return -EINVAL;
 
 	if (port_conf->dequeue_depth >
@@ -3895,7 +3944,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	}
 
 	/* Initialize each port's token pop mode */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++)
 		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
 
 	rte_spinlock_init(&dlb2->qm_instance.resource_lock);
@@ -3945,7 +3994,8 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
 int
 dlb2_parse_params(const char *params,
 		  const char *name,
-		  struct dlb2_devargs *dlb2_args)
+		  struct dlb2_devargs *dlb2_args,
+		  uint8_t version)
 {
 	int ret = 0;
 	static const char * const args[] = { NUMA_NODE_ARG,
@@ -3984,17 +4034,18 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(kvlist,
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(kvlist,
 					DLB2_NUM_DIR_CREDITS,
 					set_num_dir_credits,
 					&dlb2_args->num_dir_credits_override);
-			if (ret != 0) {
-				DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
-					     name);
-				rte_kvargs_free(kvlist);
-				return ret;
+				if (ret != 0) {
+					DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
+						     name);
+					rte_kvargs_free(kvlist);
+					return ret;
+				}
 			}
-
 			ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
 						 set_dev_id,
 						 &dlb2_args->dev_id);
@@ -4005,11 +4056,19 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
-			ret = rte_kvargs_process(
+			if (version == DLB2_HW_V2) {
+				ret = rte_kvargs_process(
 					kvlist,
 					DLB2_QID_DEPTH_THRESH_ARG,
 					set_qid_depth_thresh,
 					&dlb2_args->qid_depth_thresholds);
+			} else {
+				ret = rte_kvargs_process(
+					kvlist,
+					DLB2_QID_DEPTH_THRESH_ARG,
+					set_qid_depth_thresh_v2_5,
+					&dlb2_args->qid_depth_thresholds);
+			}
 			if (ret != 0) {
 				DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
 					     name);
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index eb1a93239..1cd78ad94 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -33,19 +33,31 @@
 
 /* Begin HW related defines and structs */
 
+#define DLB2_HW_V2 0
+#define DLB2_HW_V2_5 1
 #define DLB2_MAX_NUM_DOMAINS 32
 #define DLB2_MAX_NUM_VFS 16
 #define DLB2_MAX_NUM_LDB_QUEUES 32
 #define DLB2_MAX_NUM_LDB_PORTS 64
-#define DLB2_MAX_NUM_DIR_PORTS 64
-#define DLB2_MAX_NUM_DIR_QUEUES 64
+#define DLB2_MAX_NUM_DIR_PORTS_V2		DLB2_MAX_NUM_DIR_QUEUES_V2
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5		DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_DIR_PORTS(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_PORTS_V2 : \
+						 DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_MAX_NUM_DIR_QUEUES_V2		64 /* DIR == directed */
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5		96
+/* When needed for array sizing, the DLB 2.5 macro is used */
+#define DLB2_MAX_NUM_DIR_QUEUES(ver)		(ver == DLB2_HW_V2 ? \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2 : \
+						 DLB2_MAX_NUM_DIR_QUEUES_V2_5)
 #define DLB2_MAX_NUM_FLOWS (64 * 1024)
 #define DLB2_MAX_NUM_LDB_CREDITS (8 * 1024)
-#define DLB2_MAX_NUM_DIR_CREDITS (2 * 1024)
+#define DLB2_MAX_NUM_DIR_CREDITS(ver)		(ver == DLB2_HW_V2 ? 4096 : 0)
+#define DLB2_MAX_NUM_CREDITS(ver)		(ver == DLB2_HW_V2 ? \
+						 0 : DLB2_MAX_NUM_LDB_CREDITS)
 #define DLB2_MAX_NUM_LDB_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_DIR_CREDIT_POOLS 64
 #define DLB2_MAX_NUM_HIST_LIST_ENTRIES 2048
-#define DLB2_MAX_NUM_AQOS_ENTRIES 2048
 #define DLB2_MAX_NUM_QIDS_PER_LDB_CQ 8
 #define DLB2_QID_PRIORITIES 8
 #define DLB2_MAX_DEVICE_PATH 32
@@ -68,6 +80,11 @@
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_MAX_CQ_DEPTH
 
+#define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
+	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) ||        \
+	  (_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_VF))   ?   \
+		DLB2_HW_V2_5 : DLB2_HW_V2)
+
 /*
  * Static per queue/port provisioning values
  */
@@ -109,6 +126,8 @@ enum dlb2_hw_queue_types {
 	DLB2_NUM_QUEUE_TYPES /* Must be last */
 };
 
+#define DLB2_COMBINED_POOL DLB2_LDB_QUEUE
+
 #define PORT_TYPE(p) ((p)->is_directed ? DLB2_DIR_PORT : DLB2_LDB_PORT)
 
 /* Do not change - must match hardware! */
@@ -127,8 +146,15 @@ struct dlb2_hw_rsrcs {
 	uint32_t num_ldb_queues;	/* Number of available ldb queues */
 	uint32_t num_ldb_ports;         /* Number of load balanced ports */
 	uint32_t num_dir_ports;         /* Number of directed ports */
-	uint32_t num_ldb_credits;       /* Number of load balanced credits */
-	uint32_t num_dir_credits;       /* Number of directed credits */
+	union {
+		struct {
+			uint32_t num_ldb_credits; /* Number of ldb credits */
+			uint32_t num_dir_credits; /* Number of dir credits */
+		};
+		struct {
+			uint32_t num_credits; /* Number of combined credits */
+		};
+	};
 	uint32_t reorder_window_size;   /* Size of reorder window */
 };
 
@@ -292,9 +318,17 @@ struct dlb2_port {
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
 	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
-	uint16_t cached_ldb_credits;
-	uint16_t ldb_credits;
-	uint16_t cached_dir_credits;
+	union {
+		struct {
+			uint16_t cached_ldb_credits;
+			uint16_t ldb_credits;
+			uint16_t cached_dir_credits;
+		};
+		struct {
+			uint16_t cached_credits;
+			uint16_t credits;
+		};
+	};
 	bool int_armed;
 	uint16_t owed_tokens;
 	int16_t issued_releases;
@@ -325,11 +359,22 @@ struct process_local_port_data {
 
 struct dlb2_eventdev;
 
+struct dlb2_port_low_level_io_functions {
+	void (*pp_enqueue_four)(void *qe4, void *pp_addr);
+};
+
 struct dlb2_config {
 	int configured;
 	int reserved;
-	uint32_t num_ldb_credits;
-	uint32_t num_dir_credits;
+	union {
+		struct {
+			uint32_t num_ldb_credits;
+			uint32_t num_dir_credits;
+		};
+		struct {
+			uint32_t num_credits;
+		};
+	};
 	struct dlb2_create_sched_domain_args resources;
 };
 
@@ -354,10 +399,18 @@ struct dlb2_hw_dev {
 
 /* Begin DLB2 PMD Eventdev related defines and structs */
 
-#define DLB2_MAX_NUM_QUEUES \
-	(DLB2_MAX_NUM_DIR_QUEUES + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_QUEUES(ver)                                \
+	(DLB2_MAX_NUM_DIR_QUEUES(ver) + DLB2_MAX_NUM_LDB_QUEUES)
 
-#define DLB2_MAX_NUM_PORTS (DLB2_MAX_NUM_DIR_PORTS + DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_MAX_NUM_PORTS(ver) \
+	(DLB2_MAX_NUM_DIR_PORTS(ver) + DLB2_MAX_NUM_LDB_PORTS)
+
+#define DLB2_MAX_NUM_DIR_QUEUES_V2_5 96
+#define DLB2_MAX_NUM_DIR_PORTS_V2_5 DLB2_MAX_NUM_DIR_QUEUES_V2_5
+#define DLB2_MAX_NUM_QUEUES_ALL \
+	(DLB2_MAX_NUM_DIR_QUEUES_V2_5 + DLB2_MAX_NUM_LDB_QUEUES)
+#define DLB2_MAX_NUM_PORTS_ALL \
+	(DLB2_MAX_NUM_DIR_PORTS_V2_5 + DLB2_MAX_NUM_LDB_PORTS)
 #define DLB2_MAX_INPUT_QUEUE_DEPTH 256
 
 /** Structure to hold the queue to port link establishment attributes */
@@ -377,8 +430,15 @@ struct dlb2_traffic_stats {
 	uint64_t tx_ok;
 	uint64_t total_polls;
 	uint64_t zero_polls;
-	uint64_t tx_nospc_ldb_hw_credits;
-	uint64_t tx_nospc_dir_hw_credits;
+	union {
+		struct {
+			uint64_t tx_nospc_ldb_hw_credits;
+			uint64_t tx_nospc_dir_hw_credits;
+		};
+		struct {
+			uint64_t tx_nospc_hw_credits;
+		};
+	};
 	uint64_t tx_nospc_inflight_max;
 	uint64_t tx_nospc_new_event_limit;
 	uint64_t tx_nospc_inflight_credits;
@@ -411,7 +471,7 @@ struct dlb2_port_stats {
 	uint64_t tx_invalid;
 	uint64_t rx_sched_cnt[DLB2_NUM_HW_SCHED_TYPES];
 	uint64_t rx_sched_invalid;
-	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_queue_stats queue[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_eventdev_port {
@@ -462,16 +522,16 @@ enum dlb2_run_state {
 };
 
 struct dlb2_eventdev {
-	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS];
-	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
-	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES];
+	struct dlb2_eventdev_port ev_ports[DLB2_MAX_NUM_PORTS_ALL];
+	struct dlb2_eventdev_queue ev_queues[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_ldb_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
+	uint8_t qm_dir_to_ev_queue_id[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each queue */
-	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES];
-	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES];
+	uint16_t xstats_count_per_qid[DLB2_MAX_NUM_QUEUES_ALL];
+	uint16_t xstats_offset_for_qid[DLB2_MAX_NUM_QUEUES_ALL];
 	/* store num stats and offset of the stats for each port */
-	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS];
-	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS];
+	uint16_t xstats_count_per_port[DLB2_MAX_NUM_PORTS_ALL];
+	uint16_t xstats_offset_for_port[DLB2_MAX_NUM_PORTS_ALL];
 	struct dlb2_get_num_resources_args hw_rsrc_query_results;
 	uint32_t xstats_count_mode_queue;
 	struct dlb2_hw_dev qm_instance; /* strictly hw related */
@@ -487,8 +547,15 @@ struct dlb2_eventdev {
 	int num_dir_credits_override;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
-	uint16_t num_dir_credits;
-	uint16_t num_ldb_credits;
+	union {
+		struct {
+			uint16_t num_dir_credits;
+			uint16_t num_ldb_credits;
+		};
+		struct {
+			uint16_t num_credits;
+		};
+	};
 	uint16_t num_queues; /* total queues */
 	uint16_t num_ldb_queues; /* total num of evdev ldb queues requested */
 	uint16_t num_ports; /* total num of evdev ports requested */
@@ -499,21 +566,28 @@ struct dlb2_eventdev {
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
 	uint8_t revision;
+	uint8_t version;
 	bool configured;
-	uint16_t max_ldb_credits;
-	uint16_t max_dir_credits;
-
-	/* force hw credit pool counters into exclusive cache lines */
-
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t ldb_credit_pool __rte_cache_aligned;
-	/* use __atomic builtins */ /* shared hw cred */
-	uint32_t dir_credit_pool __rte_cache_aligned;
+	union {
+		struct {
+			uint16_t max_ldb_credits;
+			uint16_t max_dir_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t ldb_credit_pool __rte_cache_aligned;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t dir_credit_pool __rte_cache_aligned;
+		};
+		struct {
+			uint16_t max_credits;
+			/* use __atomic builtins */ /* shared hw cred */
+			uint32_t credit_pool __rte_cache_aligned;
+		};
+	};
 };
 
 /* used for collecting and passing around the dev args */
 struct dlb2_qid_depth_thresholds {
-	int val[DLB2_MAX_NUM_QUEUES];
+	int val[DLB2_MAX_NUM_QUEUES_ALL];
 };
 
 struct dlb2_devargs {
@@ -568,7 +642,8 @@ uint32_t dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
 
 int dlb2_parse_params(const char *params,
 		      const char *name,
-		      struct dlb2_devargs *dlb2_args);
+		      struct dlb2_devargs *dlb2_args,
+		      uint8_t version);
 
 /* Extern globals */
 extern struct process_local_port_data dlb2_port[][DLB2_NUM_PORT_TYPES];
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda9..b62e62060 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -95,7 +95,7 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 	int i;
 	uint64_t val = 0;
 
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
 
 		if (!port->setup_done)
@@ -269,7 +269,7 @@ dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
 	int port = 0;
 	uint64_t tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
 
 	return tally;
@@ -281,7 +281,7 @@ dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
 	int port = 0;
 	uint64_t enq_ok_tally = 0;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++)
 		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
 
 	return enq_ok_tally;
@@ -561,8 +561,8 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	/* other vars */
 	const unsigned int count = RTE_DIM(dev_stats) +
-			DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
-			DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
+		DLB2_MAX_NUM_PORTS(dlb2->version) * RTE_DIM(port_stats) +
+		DLB2_MAX_NUM_QUEUES(dlb2->version) * RTE_DIM(qid_stats);
 	unsigned int i, port, qid, stat_id = 0;
 
 	dlb2->xstats = rte_zmalloc_socket(NULL,
@@ -583,7 +583,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	}
 	dlb2->xstats_count_mode_dev = stat_id;
 
-	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
+	for (port = 0; port < DLB2_MAX_NUM_PORTS(dlb2->version); port++) {
 		dlb2->xstats_offset_for_port[port] = stat_id;
 
 		uint32_t count_offset = stat_id;
@@ -605,7 +605,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 
 	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
 
-	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
+	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES(dlb2->version); qid++) {
 		uint32_t count_offset = stat_id;
 
 		dlb2->xstats_offset_for_qid[qid] = stat_id;
@@ -658,16 +658,15 @@ dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			break;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version) &&
+		    (DLB2_MAX_NUM_QUEUES(dlb2->version) <= 255))
 			break;
-#endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
 		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
 		break;
@@ -709,13 +708,13 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 		xstats_mode_count = dlb2->xstats_count_mode_dev;
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
-		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
+		if (queue_port_id >= DLB2_MAX_NUM_PORTS(dlb2->version))
 			goto invalid_value;
 		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
-#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
-		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
+#if (DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5) <= 255) /* max 8 bit value */
+		if (queue_port_id >= DLB2_MAX_NUM_QUEUES(dlb2->version))
 			goto invalid_value;
 #endif
 		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
@@ -936,12 +935,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_PORTS) {
+		} else if (queue_port_id < DLB2_MAX_NUM_PORTS(dlb2->version)) {
 			if (dlb2_xstats_reset_port(dlb2, queue_port_id,
 						   ids, nb_ids))
 				return -EINVAL;
@@ -949,12 +949,13 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++) {
+			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
+					i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
 			}
-		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES) {
+		} else if (queue_port_id < DLB2_MAX_NUM_QUEUES(dlb2->version)) {
 			if (dlb2_xstats_reset_queue(dlb2, queue_port_id,
 						    ids, nb_ids))
 				return -EINVAL;
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index c7cd41f8b..b007e1674 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -12,18 +12,25 @@
 #include "dlb2_osdep_types.h"
 
 #define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-
 #define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
 #define DLB2_MAX_WEIGHT				255
 #define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
 #define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
 #define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
 #define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
 #define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
 
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
 #define DLB2_ALARM_HW_SOURCE_SYS 0
 #define DLB2_ALARM_HW_SOURCE_DLB 1
 
@@ -55,7 +62,8 @@
 #define DLB2_DIR_PP_BASE       0x2000000
 #define DLB2_DIR_PP_STRIDE     0x1000
 #define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
 #define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
 
 struct dlb2_resource_id {
@@ -183,7 +191,7 @@ struct dlb2_sn_group {
 
 static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 {
-	u32 mask[] = {
+	const u32 mask[] = {
 		0x0000ffff,  /* 64 SNs per queue */
 		0x000000ff,  /* 128 SNs per queue */
 		0x0000000f,  /* 256 SNs per queue */
@@ -195,7 +203,7 @@ static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
 
 static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
 {
-	u32 bound[6] = {16, 8, 4, 2, 1};
+	const u32 bound[] = {16, 8, 4, 2, 1};
 	u32 i;
 
 	for (i = 0; i < bound[group->mode]; i++) {
@@ -285,7 +293,7 @@ struct dlb2_function_resources {
 struct dlb2_hw_resources {
 	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
 	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
 	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
 };
 
@@ -302,11 +310,13 @@ struct dlb2_sw_mbox {
 };
 
 struct dlb2_hw {
+	uint8_t ver;
+
 	/* BAR 0 address */
-	void  *csr_kva;
+	void *csr_kva;
 	unsigned long csr_phys_addr;
 	/* BAR 2 address */
-	void  *func_kva;
+	void *func_kva;
 	unsigned long func_phys_addr;
 
 	/* Resource tracking */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index b57157fdc..1cb0b9f50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -211,7 +211,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 			      &port->func_list);
 	}
 
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
 		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
 
@@ -219,7 +219,9 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 	}
 
 	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
+	hw->pf.num_avail_dqed_entries =
+		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+
 	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
 
 	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
@@ -258,7 +260,7 @@ int dlb2_resource_init(struct dlb2_hw *hw)
 		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
 	}
 
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
 		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
 		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
 	}
@@ -2372,7 +2374,7 @@ static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
 	}
@@ -2505,7 +2507,8 @@ static void
 dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS;
+	int domain_offset = domain->id.phys_id *
+		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
 	struct dlb2_list_entry *iter;
 	struct dlb2_dir_pq_pair *queue;
 	RTE_SET_USED(iter);
@@ -2521,7 +2524,8 @@ dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
 
 		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS +
+			idx = queue->id.vdev_id *
+				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 				queue->id.virt_id;
 
 			DLB2_CSR_WR(hw,
@@ -2960,7 +2964,8 @@ __dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
 		else
 			virt_id = port->id.phys_id;
 
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
+			+ virt_id;
 
 		DLB2_CSR_WR(hw,
 			    DLB2_SYS_VF_DIR_VPP2PP(offs),
@@ -4483,7 +4488,8 @@ dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 }
 
 static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(u32 id,
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
 			    bool vdev_req,
 			    struct dlb2_hw_domain *domain)
 {
@@ -4491,7 +4497,7 @@ dlb2_get_domain_used_dir_pq(u32 id,
 	struct dlb2_dir_pq_pair *port;
 	RTE_SET_USED(iter);
 
-	if (id >= DLB2_MAX_NUM_DIR_PORTS)
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
 		return NULL;
 
 	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
@@ -4537,7 +4543,8 @@ dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
 	if (args->queue_id != -1) {
 		struct dlb2_dir_pq_pair *queue;
 
-		queue = dlb2_get_domain_used_dir_pq(args->queue_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->queue_id,
 						    vdev_req,
 						    domain);
 
@@ -4617,7 +4624,7 @@ static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
 
 		r1.field.pp = port->id.phys_id;
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS + virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
 
@@ -4856,7 +4863,8 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
 
 	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(args->queue_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->queue_id,
 						   vdev_req,
 						   domain);
 	else
@@ -4912,7 +4920,7 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 	/* QID write permissions are turned on when the domain is started */
 	r0.field.vasqid_v = 0;
 
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES +
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
 		queue->id.phys_id;
 
 	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -4934,7 +4942,8 @@ static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
 		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
 
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES + queue->id.virt_id;
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
+			+ queue->id.virt_id;
 
 		r3.field.vqid_v = 1;
 
@@ -5000,7 +5009,8 @@ dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
 	if (args->port_id != -1) {
 		struct dlb2_dir_pq_pair *port;
 
-		port = dlb2_get_domain_used_dir_pq(args->port_id,
+		port = dlb2_get_domain_used_dir_pq(hw,
+						   args->port_id,
 						   vdev_req,
 						   domain);
 
@@ -5071,7 +5081,8 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	}
 
 	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(args->port_id,
+		queue = dlb2_get_domain_used_dir_pq(hw,
+						    args->port_id,
 						    vdev_req,
 						    domain);
 	else
@@ -5919,7 +5930,7 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 		r0.field.vasqid_v = 1;
 
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS +
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
 			dir_queue->id.phys_id;
 
 		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
@@ -5971,7 +5982,7 @@ int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
 
 	id = args->queue_id;
 
-	queue = dlb2_get_domain_used_dir_pq(id, vdev_req, domain);
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
 	if (queue == NULL) {
 		resp->status = DLB2_ST_INVALID_QID;
 		return -EINVAL;
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index cfb22efe8..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -47,7 +47,7 @@ dlb2_pf_low_level_io_init(void)
 {
 	int i;
 	/* Addresses will be initialized at port create */
-	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
+	for (i = 0; i < DLB2_MAX_NUM_PORTS(DLB2_HW_V2_5); i++) {
 		/* First directed ports */
 		dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
 		dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
@@ -628,6 +628,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 
 		/* Probe the DLB2 PF layer */
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
@@ -643,7 +644,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		if (pci_dev->device.devargs) {
 			ret = dlb2_parse_params(pci_dev->device.devargs->args,
 						pci_dev->device.devargs->name,
-						&dlb2_args);
+						&dlb2_args,
+						dlb2->version);
 			if (ret) {
 				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
 					     ret, rte_errno);
@@ -655,6 +657,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						  event_dlb2_pf_name,
 						  &dlb2_args);
 	} else {
+		dlb2 = dlb2_pmd_priv(eventdev);
+		dlb2->version = DLB2_HW_DEVICE_FROM_PCI_ID(pci_dev);
 		ret = dlb2_secondary_eventdev_probe(eventdev,
 						    event_dlb2_pf_name);
 	}
@@ -684,6 +688,16 @@ static const struct rte_pci_id pci_id_dlb2_map[] = {
 	},
 };
 
+static const struct rte_pci_id pci_id_dlb2_5_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
+			       PCI_DEVICE_ID_INTEL_DLB2_5_PF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
 static int
 event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 		     struct rte_pci_device *pci_dev)
@@ -718,6 +732,40 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
 
 }
 
+static int
+event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
+					    sizeof(struct dlb2_eventdev),
+					    dlb2_eventdev_pci_init,
+					    event_dlb2_pf_name);
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+}
+
+static int
+event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	ret = rte_event_pmd_pci_remove(pci_dev, NULL);
+
+	if (ret) {
+		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
+				"ret=%d\n", ret);
+	}
+
+	return ret;
+
+}
+
 static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.id_table = pci_id_dlb2_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
@@ -725,5 +773,15 @@ static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
 	.remove = event_dlb2_pci_remove,
 };
 
+static struct rte_pci_driver pci_eventdev_dlb2_5_pmd = {
+	.id_table = pci_id_dlb2_5_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = event_dlb2_5_pci_probe,
+	.remove = event_dlb2_5_pci_remove,
+};
+
 RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
+
+RTE_PMD_REGISTER_PCI(event_dlb2_5_pf, pci_eventdev_dlb2_5_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_5_pf, pci_id_dlb2_5_map);
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 03/26] event/dlb2: add v2.5 HW register definitions
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 01/26] event/dlb2: minor code cleanup McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 02/26] event/dlb2: add v2.5 probe McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 04/26] event/dlb2: add v2.5 HW init McDaniel, Timothy
                       ` (23 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Add auto-generated register definitions, updated to
support both DLB v2.0 and v2.5 devices.
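
As an illustrative sketch (not taken verbatim from the generated
header), registers whose offsets moved between DLB v2.0 and v2.5 are
wrapped in version-keyed address macros; the offsets below match
DLB2_SYS_TOTAL_VAS in this patch, while read_total_vas() and its
csr_rd32() callback are hypothetical:

/*
 * Illustrative sketch only -- not taken verbatim from the generated
 * header.  The register offsets below match DLB2_SYS_TOTAL_VAS in the
 * patch; read_total_vas() and its csr_rd32() callback are hypothetical.
 */
#include <stdint.h>

#define DLB2_HW_V2   0
#define DLB2_HW_V2_5 1

#define DLB2_V2SYS_TOTAL_VAS   0x1000011c
#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
#define DLB2_SYS_TOTAL_VAS(ver) \
	((ver) == DLB2_HW_V2 ? DLB2_V2SYS_TOTAL_VAS : DLB2_V2_5SYS_TOTAL_VAS)

/* Hypothetical helper: the probed version selects the right CSR offset. */
static inline uint32_t
read_total_vas(uint8_t ver, uint32_t (*csr_rd32)(uint32_t offset))
{
	return csr_rd32(DLB2_SYS_TOTAL_VAS(ver));
}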

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 ++++++++++++++++++++
 1 file changed, 4304 insertions(+)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
new file mode 100644
index 000000000..26c3e7f4a
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
@@ -0,0 +1,4304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_REGS_NEW_H
+#define __DLB2_REGS_NEW_H
+
+#include "dlb2_osdep_types.h"
+
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
+	(0x20 + (x) * 0x4)
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
+#define DLB2_SYS_TOTAL_VAS_RST 0x20
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
+
+#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
+#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
+
+#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
+#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
+
+#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
+#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
+
+#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
+#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
+#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
+
+#define DLB2_SYS_VF_LDB_VPP_V(x) \
+	(0x10000f00 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VPP2PP(x) \
+	(0x10000f04 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_DIR_VPP_V(x) \
+	(0x10000f08 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VPP2PP(x) \
+	(0x10000f0c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
+
+#define DLB2_SYS_VF_LDB_VQID_V(x) \
+	(0x10000f10 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_LDB_VQID2QID(x) \
+	(0x10000f14 + (x) * 0x1000)
+#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_QID2VQID(x) \
+	(0x10000f18 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID2VQID_RST 0x0
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
+
+#define DLB2_SYS_VF_DIR_VQID_V(x) \
+	(0x10000f1c + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_VF_DIR_VQID2QID(x) \
+	(0x10000f20 + (x) * 0x1000)
+#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_VASQID_V(x) \
+	(0x10000f24 + (x) * 0x1000)
+#define DLB2_SYS_LDB_VASQID_V_RST 0x0
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_VASQID_V(x) \
+	(0x10000f28 + (x) * 0x1000)
+#define DLB2_SYS_DIR_VASQID_V_RST 0x0
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_ALARM_VF_SYND2(x) \
+	(0x10000f48 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
+
+#define DLB2_SYS_ALARM_VF_SYND1(x) \
+	(0x10000f44 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
+
+#define DLB2_SYS_ALARM_VF_SYND0(x) \
+	(0x10000f40 + (x) * 0x1000)
+#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
+
+#define DLB2_SYS_LDB_QID_CFG_V(x) \
+	(0x10000f58 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_QID_ITS(x) \
+	(0x10000f54 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_ITS_RST 0x0
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_QID_V(x) \
+	(0x10000f50 + (x) * 0x1000)
+#define DLB2_SYS_LDB_QID_V_RST 0x0
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_ITS(x) \
+	(0x10000f64 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_ITS_RST 0x0
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_QID_V(x) \
+	(0x10000f60 + (x) * 0x1000)
+#define DLB2_SYS_DIR_QID_V_RST 0x0
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
+	(0x10000fa8 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
+#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
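
Per-CQ registers add an index on top of the version selector; consecutive CQ instances are 0x1000 bytes apart. A small illustrative sketch of the resulting offset arithmetic (DLB2_HW_V2_5 is assumed to be the other enumerator of the version enum added earlier in the series):

#include <stdint.h>

/*
 * Hypothetical sketch: per-CQ registers combine the version selector
 * with an index; consecutive CQs are 0x1000 bytes apart. For example,
 * CQ 3's PASID register lives at:
 *
 *   DLB2_SYS_LDB_CQ_PASID(DLB2_HW_V2, 3)   -> 0x10000fa0 + 3 * 0x1000 = 0x10003fa0
 *   DLB2_SYS_LDB_CQ_PASID(DLB2_HW_V2_5, 3) -> 0x10000f9c + 3 * 0x1000 = 0x10003f9c
 */
static inline uint32_t dlb2_ldb_cq_pasid_offset(int ver, int cq)
{
	return DLB2_SYS_LDB_CQ_PASID(ver, cq);
}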
+
+#define DLB2_SYS_LDB_CQ_AT(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AT_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_LDB_CQ_ISR(x) \
+	(0x10000f98 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB2_CQ_ISR_MODE_DIS  0
+#define DLB2_CQ_ISR_MODE_MSI  1
+#define DLB2_CQ_ISR_MODE_MSIX 2
+#define DLB2_CQ_ISR_MODE_ADI  3
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
+	(0x10000f94 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_LDB_PP_V(x) \
+	(0x10000f90 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP_V_RST 0x0
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_LDB_PP2VDEV(x) \
+	(0x10000f8c + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_LDB_PP2VAS(x) \
+	(0x10000f88 + (x) * 0x1000)
+#define DLB2_SYS_LDB_PP2VAS_RST 0x0
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
+	(0x10000f84 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
+	(0x10000f80 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_DIR_CQ_FMT(x) \
+	(0x10000fec + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
+	(0x10000fe8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
+#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
+
+#define DLB2_SYS_DIR_CQ_AT(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AT_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
+
+#define DLB2_SYS_DIR_CQ_ISR(x) \
+	(0x10000fd8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
+	(0x10000fd4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
+
+#define DLB2_SYS_DIR_PP_V(x) \
+	(0x10000fd0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP_V_RST 0x0
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
+
+#define DLB2_SYS_DIR_PP2VDEV(x) \
+	(0x10000fcc + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
+
+#define DLB2_SYS_DIR_PP2VAS(x) \
+	(0x10000fc8 + (x) * 0x1000)
+#define DLB2_SYS_DIR_PP2VAS_RST 0x0
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
+
+#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
+	(0x10000fc4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
+	(0x10000fc0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
+
+#define DLB2_SYS_MSIX_ACK 0x10000400
+#define DLB2_SYS_MSIX_ACK_RST 0x0
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
+#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
+
+#define DLB2_SYS_MSIX_MODE 0x10000408
+#define DLB2_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB2_MSIX_MODE_PACKED     0
+#define DLB2_MSIX_MODE_COMPRESSED 1
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
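
Some registers keep the same offset on both hardware versions but change their field layout; the _V2 and _V2_5 suffixes above name the two layouts. A hedged sketch of what selecting the compressed MSI-X mode could look like on each version (whether and where the PMD actually programs this is outside this hunk):

#include <stdint.h>

/*
 * Hypothetical sketch: SYS_MSIX_MODE sits at the same offset on both
 * hardware versions, but its field layout differs, hence the _V2 and
 * _V2_5 suffixed masks above. Building a value that selects the
 * compressed MSI-X mode therefore depends on the version in use.
 */
static inline uint32_t dlb2_msix_mode_compressed(int is_v2_5)
{
	if (is_v2_5)
		return DLB2_MSIX_MODE_COMPRESSED <<
			DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC;

	return DLB2_MSIX_MODE_COMPRESSED <<
		DLB2_SYS_MSIX_MODE_MODE_V2_LOC;
}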
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
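
The occupancy-interrupt status registers are plain per-CQ bitmaps: directed CQs 0-31 map to bits 0-31 here, and CQs 32-63 to the next register. A minimal sketch of testing one CQ's bit from a raw register value (helper name is illustrative only):

#include <stdint.h>

/*
 * Hypothetical sketch: each directed CQ in the 0..31 range owns one
 * bit of SYS_DIR_CQ_31_0_OCC_INT_STS, so the per-CQ defines above are
 * simply (1u << cq). Given a raw read of the register:
 */
static inline int dlb2_dir_cq_occ_int_pending(uint32_t sts, unsigned int cq)
{
	return (sts & (1u << cq)) != 0;
}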
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
+
+#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
+#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
+	(0x20080000 + (x) * 0x1000)
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_ATM_QID2CQIDIX_00(x) \
+	(0x30080000 + (x) * 0x1000)
+#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
+#define DLB2_ATM_QID2CQIDIX(x, y) \
+	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_ATM_QID2CQIDIX_NUM 16
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
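
DLB2_ATM_QID2CQIDIX() takes both a queue and a slice index, so the addressing is two-dimensional. A worked example of the arithmetic, illustrative only:

#include <stdint.h>

/*
 * Hypothetical sketch of the two-dimensional addressing used by
 * DLB2_ATM_QID2CQIDIX(x, y): queue x selects the 0x1000-byte row and
 * slice y (0..DLB2_ATM_QID2CQIDIX_NUM - 1) the 0x80000-byte column,
 * e.g.:
 *
 *   DLB2_ATM_QID2CQIDIX(2, 1) == 0x30080000 + 2 * 0x1000 + 1 * 0x80000
 *                             == 0x30102000
 */
static inline uint32_t dlb2_atm_qid2cqidix_offset(int qid, int slice)
{
	return DLB2_ATM_QID2CQIDIX(qid, slice);
}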
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_V2CHP_ORD_QID_SN(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
+#define DLB2_CHP_ORD_QID_SN_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
+#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
+	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
+#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
+	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
+#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
+#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
+#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
+#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
+#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
+#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
+#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
+#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
+#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
+	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
+#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
+	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
+#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
+#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
+#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
+#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
+#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
+#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
+#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
+#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
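+
+/*
+ * Note that CHP_CTRL_DIAG_02 keeps a single address on both hardware
+ * versions but carries different field layouts, hence the _V2/_V2_5
+ * suffixed masks rather than per-version address macros. Illustrative
+ * extraction of the v2.5-only freelist size (the CSR read helper and
+ * "hw" are assumptions):
+ *
+ *	u32 diag = DLB2_CSR_RD(hw, DLB2_CHP_CTRL_DIAG_02);
+ *	u32 fl = (diag & DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5) >>
+ *		 DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC;
+ */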
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_DP_DIR_CSR_CTRL 0x54000010
+#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
+	(0x96000000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
+	(0x96010000 + (x) * 0x4)
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
+#define DLB2_LSP_CQ2PRIOV_RST 0x0
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
+#define DLB2_V2LSP_CQ2QID0(x) \
+	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
+#define DLB2_LSP_CQ2QID0_RST 0x0
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
+	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
+#define DLB2_LSP_CQ2QID1_RST 0x0
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
+	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
+#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
+	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
+	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
+#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
+	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
+	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
+	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
+	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
+	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
+#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
+	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
+#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
+	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
+#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
+	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
+#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2_NUM 16
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
+	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
+#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
+#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
+#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
+#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
+#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
+#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
+#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
+	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
+
+#endif /* __DLB2_REGS_NEW_H */
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 04/26] event/dlb2: add v2.5 HW init
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (2 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 03/26] event/dlb2: add v2.5 HW register definitions McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 05/26] event/dlb2: add v2.5 get resources McDaniel, Timothy
                       ` (22 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

This commit adds support for DLB v2.5 probe-time hardware init,
and sets up a framework for incorporating the remaining
changes required to support DLB v2.5.

DLB v2.0 and DLB v2.5 are similar in many respects, but their
register offsets and definitions are different. As a result of these
differences, the low level hardware functions must take the device
version into consideration. This requires that the hardware version be
passed to many of the low level functions, so that the PMD can
take the appropriate action based on the device version.
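
For orientation only, here is a minimal sketch (not part of this patch)
of the dispatch pattern this enables once the version is carried in
struct dlb2_hw. DLB2_V2_EXAMPLE_REG and DLB2_V2_5_EXAMPLE_REG are
hypothetical placeholders for per-version register offsets, not real
DLB2 registers; DLB2_CSR_WR() and DLB2_HW_V2 are the existing
helpers/identifiers used later in this series.

	/* Illustrative sketch only: a low level helper that selects the
	 * register offset based on the device version stored in dlb2_hw.
	 * The two register macros are hypothetical placeholders.
	 */
	static void dlb2_example_csr_write(struct dlb2_hw *hw, u32 val)
	{
		if (hw->ver == DLB2_HW_V2)
			DLB2_CSR_WR(hw, DLB2_V2_EXAMPLE_REG, val);
		else
			DLB2_CSR_WR(hw, DLB2_V2_5_EXAMPLE_REG, val);
	}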

To ease the transition and keep the individual patches small, three
temporary files are added in this commit. These files, which have "new"
in their names, contain changes specific to a consolidated PMD that
supports both DLB v2.0 and DLB v2.5. Their sister files of the same
name (minus "new") contain the old DLB v2.0 specific code. The intent
is to remove code from the original files as that code is ported to the
combined DLB v2.0/v2.5 PMD model and added to the "new" files in a
series of commits. At the end of the patch series, the old files will
be empty and the "new" files will have the logic needed to implement a
single PMD that supports both DLB v2.0 and DLB v2.5. At that time, the
original DLB v2.0 specific files will be deleted, and the "new" files
will be renamed to replace them.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_priv.h                |   5 +
 drivers/event/dlb2/meson.build                |   1 +
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 356 ++++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |   4 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 180 +--------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |  36 --
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 259 +++++++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.h    |  73 ++++
 drivers/event/dlb2/pf/dlb2_main.c             |  41 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 +
 drivers/event/dlb2/pf/dlb2_pf.c               |   6 +-
 11 files changed, 735 insertions(+), 230 deletions(-)
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 1cd78ad94..f3a9fe0aa 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -114,6 +114,11 @@
 #define EV_TO_DLB2_PRIO(x) ((x) >> 5)
 #define DLB2_TO_EV_PRIO(x) ((x) << 5)
 
+enum dlb2_hw_ver {
+	DLB2_HW_VER_2,
+	DLB2_HW_VER_2_5,
+};
+
 enum dlb2_hw_port_types {
 	DLB2_LDB_PORT,
 	DLB2_DIR_PORT,
diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index f963589fd..0c848161e 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -15,6 +15,7 @@ sources = files(
         'pf/dlb2_main.c',
         'pf/dlb2_pf.c',
         'pf/base/dlb2_resource.c',
+        'pf/base/dlb2_resource_new.c',
         'rte_pmd_dlb2.c',
         'dlb2_selftest.c',
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
new file mode 100644
index 000000000..4a4185acd
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
+
+#include "../../dlb2_priv.h"
+#include "dlb2_user.h"
+
+#include "dlb2_osdep_list.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
+
+#define DLB2_MAX_NUM_VDEVS			16
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_NUM_ARB_WEIGHTS			8
+#define DLB2_MAX_NUM_AQED_ENTRIES		2048
+#define DLB2_MAX_WEIGHT				255
+#define DLB2_NUM_COS_DOMAINS			4
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
+#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
+#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
+#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
+
+#define DLB2_FUNC_BAR				0
+#define DLB2_CSR_BAR				2
+
+#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
+#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
+
+#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
+#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
+
+#define DLB2_ALARM_HW_SOURCE_SYS 0
+#define DLB2_ALARM_HW_SOURCE_DLB 1
+
+#define DLB2_ALARM_HW_UNIT_CHP 4
+
+#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
+#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
+#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
+#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
+#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
+
+/*
+ * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
+ * the PF driver.
+ */
+#define DLB2_DRV_LDB_PP_BASE   0x2300000
+#define DLB2_DRV_LDB_PP_STRIDE 0x1000
+#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
+				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_DRV_DIR_PP_BASE   0x2200000
+#define DLB2_DRV_DIR_PP_STRIDE 0x1000
+#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
+				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
+#define DLB2_LDB_PP_BASE       0x2100000
+#define DLB2_LDB_PP_STRIDE     0x1000
+#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
+				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
+#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
+#define DLB2_DIR_PP_BASE       0x2000000
+#define DLB2_DIR_PP_STRIDE     0x1000
+#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
+				DLB2_DIR_PP_STRIDE * \
+				DLB2_MAX_NUM_DIR_PORTS_V2_5)
+#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
+
+struct dlb2_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vdev_owned;
+	u8 vdev_id;
+};
+
+struct dlb2_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
+{
+	return list->bound - list->base - list->offset;
+}
+
+struct dlb2_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 ts_flag:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb2_ldb_queue {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 num_qid_inflights;
+	u32 aqed_limit;
+	u32 sn_group; /* sn == sequence number */
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 sn_cfg_valid;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/*
+ * Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb2_dir_pq_pair {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+};
+
+enum dlb2_qid_map_state {
+	/* The slot does not contain a valid queue mapping */
+	DLB2_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB2_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB2_QUEUE_MAP_IN_PROG,
+	/* The driver is unmapping a queue from this slot */
+	DLB2_QUEUE_UNMAP_IN_PROG,
+	/*
+	 * The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
+};
+
+struct dlb2_ldb_port_qid_map {
+	enum dlb2_qid_map_state state;
+	u16 qid;
+	u16 pending_qid;
+	u8 priority;
+	u8 pending_priority;
+};
+
+struct dlb2_ldb_port {
+	struct dlb2_list_entry domain_list;
+	struct dlb2_list_entry func_list;
+	struct dlb2_resource_id id;
+	struct dlb2_resource_id domain_id;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	u32 ref_cnt;
+	u8 init_tkn_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb2_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
+{
+	const u32 mask[] = {
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
+{
+	const u32 bound[] = {16, 8, 4, 2, 1};
+	u32 i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void
+dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb2_hw_domain {
+	struct dlb2_function_resources *parent_func;
+	struct dlb2_list_entry func_list;
+	struct dlb2_list_head used_ldb_queues;
+	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head used_dir_pq_pairs;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u32 num_used_aqed_entries;
+	struct dlb2_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb2_bitmap;
+
+struct dlb2_function_resources {
+	struct dlb2_list_head avail_domains;
+	struct dlb2_list_head used_domains;
+	struct dlb2_list_head avail_ldb_queues;
+	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	struct dlb2_list_head avail_dir_pq_pairs;
+	struct dlb2_bitmap *avail_hist_list_entries;
+	u32 num_avail_domains;
+	u32 num_avail_ldb_queues;
+	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
+	u32 num_avail_dir_pq_pairs;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
+	u32 num_avail_aqed_entries;
+	u8 locked; /* (VDEV only) */
+};
+
+/*
+ * After initialization, each resource in dlb2_hw_resources is located in one
+ * of the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a dlb2 scheduling domain.
+ * -- A VDEV's available resources list. These are VDEV-owned unconfigured
+ *	resources not allocated to a dlb2 scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VDEV or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb2_hw_resources {
+	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
+	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
+	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
+	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
+
+struct dlb2_mbox {
+	u32 *mbox;
+	u32 *isr_in_progress;
+};
+
+struct dlb2_sw_mbox {
+	struct dlb2_mbox vdev_to_pf;
+	struct dlb2_mbox pf_to_vdev;
+	void (*pf_to_vdev_inject)(void *arg);
+	void *pf_to_vdev_inject_arg;
+};
+
+struct dlb2_hw {
+	uint8_t ver;
+
+	/* BAR 0 address */
+	void *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb2_hw_resources rsrcs;
+	struct dlb2_function_resources pf;
+	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
+	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
+	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
+
+	/* Virtualization */
+	int virt_mode;
+	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
+	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
+};
+
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index aa101a49a..3b0ca84ba 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -16,7 +16,11 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
+
+/* TEMPORARY inclusion of both headers for merge */
+#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
+
 #include "../../dlb2_log.h"
 #include "../../dlb2_user.h"
 
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1cb0b9f50..7ba6521ef 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -47,19 +47,6 @@ static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
 }
 
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -130,171 +117,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-int dlb2_resource_init(struct dlb2_hw *hw)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. This is application
-	 * dependent, but the driver interleaves port IDs as much as possible
-	 * to reduce the likelihood of this. This initial allocation maximizes
-	 * the average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	/* Zero-out resource tracking data structures */
-	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
-	memset(&hw->pf, 0, sizeof(hw->pf));
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-	hw->pf.num_avail_dqed_entries =
-		DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
-{
-	union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
-
-	r0.field.disable = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
-}
-
 static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
 					  struct dlb2_hw_domain *domain)
 {
@@ -5876,7 +5698,7 @@ static void dlb2_log_start_domain(struct dlb2_hw *hw,
 int
 dlb2_hw_start_domain(struct dlb2_hw *hw,
 		     u32 domain_id,
-		     __attribute((unused)) struct dlb2_start_domain_args *arg,
+		     struct dlb2_start_domain_args *arg,
 		     struct dlb2_cmd_response *resp,
 		     bool vdev_req,
 		     unsigned int vdev_id)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 503fdf317..2e13193bb 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -6,35 +6,8 @@
 #define __DLB2_RESOURCE_H
 
 #include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
 #include "dlb2_osdep_types.h"
 
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
@@ -1485,15 +1458,6 @@ int dlb2_notify_vf(struct dlb2_hw *hw,
  */
 int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
 
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
-
 /**
  * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
new file mode 100644
index 000000000..175b0799e
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "dlb2_user.h"
+
+#include "dlb2_hw_types_new.h"
+#include "dlb2_osdep.h"
+#include "dlb2_osdep_bitmap.h"
+#include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+
+#include "../../dlb2_priv.h"
+#include "../../dlb2_inline_fns.h"
+
+#define DLB2_DOM_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, domain_list)
+
+#define DLB2_FUNC_LIST_HEAD(head, type) \
+	DLB2_LIST_HEAD((head), type, func_list)
+
+#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
new file mode 100644
index 000000000..51f31543c
--- /dev/null
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB2_RESOURCE_NEW_H
+#define __DLB2_RESOURCE_NEW_H
+
+#include "dlb2_user.h"
+#include "dlb2_osdep_types.h"
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
+#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a9d407f2f..5c0640b3c 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,9 +13,12 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_resource.h"
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
+#include "base/dlb2_regs_new.h"
+#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_resource_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_regs.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
 #include "../dlb2_priv.h"
@@ -103,25 +106,34 @@ dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
 
 static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
 {
-	dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
+	int version;
+	version = DLB2_HW_DEVICE_FROM_PCI_ID(dlb2_dev->pdev);
+
+	dlb2_clr_pmcsr_disable(&dlb2_dev->hw, version);
 }
 
 #define DLB2_READY_RETRY_LIMIT 1000
-static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
+static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
+					 int dlb_version)
 {
 	u32 retries = 0;
 
 	/* Allow at least 1s for the device to become active after power-on */
 	for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
-		union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
-		union dlb2_cfg_mstr_cfg_pm_status pm_st;
+		u32 idle_val;
+		u32 idle_dlb_func_idle;
+		u32 pm_st_val;
+		u32 pm_st_pmsm;
 		u32 addr;
 
-		addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
-		pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
-		idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
-		if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
+		addr = DLB2_CM_CFG_PM_STATUS(dlb_version);
+		pm_st_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		addr = DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(dlb_version);
+		idle_val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
+		idle_dlb_func_idle = idle_val &
+			DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE;
+		pm_st_pmsm = pm_st_val & DLB2_CM_CFG_PM_STATUS_PMSM;
+		if (pm_st_pmsm && idle_dlb_func_idle)
 			break;
 
 		rte_delay_ms(1);
@@ -141,6 +153,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 {
 	struct dlb2_dev *dlb2_dev;
 	int ret = 0;
+	int dlb_version = 0;
 
 	DLB2_INFO(dlb2_dev, "probe\n");
 
@@ -152,6 +165,8 @@ dlb2_probe(struct rte_pci_device *pdev)
 		goto dlb2_dev_malloc_fail;
 	}
 
+	dlb_version = DLB2_HW_DEVICE_FROM_PCI_ID(pdev);
+
 	/* PCI Bus driver has already mapped bar space into process.
 	 * Save off our IO register and FUNC addresses.
 	 */
@@ -191,7 +206,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	 */
 	dlb2_pf_enable_pm(dlb2_dev);
 
-	ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
+	ret = dlb2_pf_wait_for_device_ready(dlb2_dev, dlb_version);
 	if (ret)
 		goto wait_for_device_ready_fail;
 
@@ -203,7 +218,7 @@ dlb2_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
-	ret = dlb2_resource_init(&dlb2_dev->hw);
+	ret = dlb2_resource_init(&dlb2_dev->hw, dlb_version);
 	if (ret)
 		goto resource_init_fail;
 
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a..892298d7a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,7 +12,11 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
+#ifdef DLB2_USE_NEW_HEADERS
+#include "base/dlb2_hw_types_new.h"
+#else
 #include "base/dlb2_hw_types.h"
+#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index f57dc1584..1e815f20d 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,15 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types.h"
+#include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource.h"
+#include "base/dlb2_resource_new.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 05/26] event/dlb2: add v2.5 get resources
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (3 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 04/26] event/dlb2: add v2.5 HW init McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 06/26] event/dlb2: add v2.5 create sched domain McDaniel, Timothy
                       ` (21 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

DLB v2.5 uses a new credit scheme in which directed and load balanced
credits are unified into a single pool, rather than the separate
directed and load balanced credit pools used by DLB v2.0.
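
As a minimal sketch of what this means for consumers of the resource
query (illustrative only, not part of this patch): the field names
below come from the dlb2_get_num_resources_args changes in this commit,
while the helper itself is hypothetical.

	/* Hypothetical helper: total event storage for either HW version. */
	static uint32_t
	dlb2_example_total_credits(const struct dlb2_get_num_resources_args *r,
				   uint8_t version)
	{
		if (version == DLB2_HW_V2_5)
			return r->num_credits; /* single combined pool */

		/* DLB v2.0: separate load balanced and directed pools */
		return r->num_ldb_credits + r->num_dir_credits;
	}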

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                     | 20 ++++--
 drivers/event/dlb2/dlb2_user.h                | 14 +++-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 48 --------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 66 +++++++++++++++++++
 4 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 7f5b9141b..0048f6a1b 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -132,17 +132,25 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	evdev_dlb2_default_info.max_event_ports =
 		dlb2->hw_rsrc_query_results.num_ldb_ports;
 
-	evdev_dlb2_default_info.max_num_events =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	/* Save off values used when creating the scheduling domain. */
 
 	handle->info.num_sched_domains =
 		dlb2->hw_rsrc_query_results.num_sched_domains;
 
-	handle->info.hw_rsrc_max.nb_events_limit =
-		dlb2->hw_rsrc_query_results.num_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_credits;
+	} else {
+		handle->info.hw_rsrc_max.nb_events_limit =
+			dlb2->hw_rsrc_query_results.num_ldb_credits;
+	}
 	handle->info.hw_rsrc_max.num_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues +
 		dlb2->hw_rsrc_query_results.num_dir_ports;
diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index f4bda7822..b7d125dec 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -195,9 +195,12 @@ struct dlb2_create_sched_domain_args {
  *	contiguous range of history list entries.
  * - num_ldb_credits: Amount of available load-balanced QE storage.
  * - num_dir_credits: Amount of available directed QE storage.
+ * - response.status: Detailed error code. In certain cases, such as if the
+ *	ioctl request arg is invalid, the driver won't set status.
  */
 struct dlb2_get_num_resources_args {
 	/* Output parameters */
+	struct dlb2_cmd_response response;
 	__u32 num_sched_domains;
 	__u32 num_ldb_queues;
 	__u32 num_ldb_ports;
@@ -206,8 +209,15 @@ struct dlb2_get_num_resources_args {
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
 	__u32 max_contiguous_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 };
 
 /*
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 7ba6521ef..eda983d85 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -58,54 +58,6 @@ void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-
-	arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-
-	return 0;
-}
-
 void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 175b0799e..14b97dbf9 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -257,3 +257,69 @@ void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
 	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
 }
 
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 06/26] event/dlb2: add v2.5 create sched domain
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (4 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 05/26] event/dlb2: add v2.5 get resources McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 07/26] event/dlb2: add v2.5 domain reset McDaniel, Timothy
                       ` (20 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update domain creation logic to account for the DLB v2.5
credit scheme, the new register map, and the new register access
macros.
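
As a hedged illustration of the new register access style (not part of
this patch): the DLB2_BITS_SET() macro and the DLB2_CHP_CFG_VAS_CRD
register/field names come from the headers added earlier in this
series; the helper function itself is hypothetical.

	/* Illustrative only: program a domain's combined credit count using
	 * the flat u32 + DLB2_BITS_SET() style that replaces the old
	 * union/bit-field register types.
	 */
	static void dlb2_example_set_vas_credits(struct dlb2_hw *hw,
						 u32 domain_id, u32 credits)
	{
		u32 reg = 0;

		DLB2_BITS_SET(reg, credits, DLB2_CHP_CFG_VAS_CRD_COUNT);
		DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain_id), reg);
	}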

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_user.h                |  13 +-
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 645 ----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 696 ++++++++++++++++++
 3 files changed, 707 insertions(+), 647 deletions(-)

diff --git a/drivers/event/dlb2/dlb2_user.h b/drivers/event/dlb2/dlb2_user.h
index b7d125dec..9760e9bda 100644
--- a/drivers/event/dlb2/dlb2_user.h
+++ b/drivers/event/dlb2/dlb2_user.h
@@ -18,6 +18,7 @@ enum dlb2_error {
 	DLB2_ST_LDB_QUEUES_UNAVAILABLE,
 	DLB2_ST_LDB_CREDITS_UNAVAILABLE,
 	DLB2_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB2_ST_CREDITS_UNAVAILABLE,
 	DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
 	DLB2_ST_INVALID_DOMAIN_ID,
 	DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION,
@@ -57,6 +58,7 @@ static const char dlb2_error_strings[][128] = {
 	"DLB2_ST_LDB_QUEUES_UNAVAILABLE",
 	"DLB2_ST_LDB_CREDITS_UNAVAILABLE",
 	"DLB2_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB2_ST_CREDITS_UNAVAILABLE",
 	"DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
 	"DLB2_ST_INVALID_DOMAIN_ID",
 	"DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION",
@@ -170,8 +172,15 @@ struct dlb2_create_sched_domain_args {
 	__u32 num_dir_ports;
 	__u32 num_atomic_inflights;
 	__u32 num_hist_list_entries;
-	__u32 num_ldb_credits;
-	__u32 num_dir_credits;
+	union {
+		struct {
+			__u32 num_ldb_credits;
+			__u32 num_dir_credits;
+		};
+		struct {
+			__u32 num_credits;
+		};
+	};
 	__u8 cos_strict;
 	__u8 padding1[3];
 };
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index eda983d85..99c3d031d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,21 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
 void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
 {
 	union dlb2_chp_cfg_chp_csr_ctrl r0;
@@ -69,636 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	union dlb2_chp_cfg_ldb_vas_crd r0 = { {0} };
-	union dlb2_chp_cfg_dir_vas_crd r1 = { {0} };
-
-	r0.field.count = domain->num_ldb_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), r0.val);
-
-	r1.field.count = domain->num_dir_credits;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), r1.val);
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret < 0)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_ldb_credits(rsrcs,
-				      domain,
-				      args->num_ldb_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_dir_credits(rsrcs,
-				      domain,
-				      args->num_dir_credits,
-				      resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret < 0)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-		    args->num_ldb_credits);
-	DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-		    args->num_dir_credits);
-}
-
-/**
- * dlb2_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
- *	domain and its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp);
-	if (ret)
-		return ret;
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available domains\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (domain->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_domains contains configured domains.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
 /*
  * The PF driver cannot assume that a register write will affect subsequent HCW
  * writes. To ensure a write completes, the driver must read back a CSR. This
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 14b97dbf9..8f97dd865 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -323,3 +323,699 @@ int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
 	}
 	return 0;
 }
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from specific class-of-service, but
+		 * fallback to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to verify args.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 07/26] event/dlb2: add v2.5 domain reset
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (5 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 06/26] event/dlb2: add v2.5 create sched domain McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 08/26] event/dlb2: add v2.5 create ldb queue McDaniel, Timothy
                       ` (19 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Reset hardware registers, consumer queues, ports,
interrupts, and software state. Queues must also be
drained as part of the reset process.

The logic is very similar to the v2.0 implementation,
but the new combined register map for v2.0 and v2.5
uses different register and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    |    1 +
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1494 ----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 2562 +++++++++++++++++
 3 files changed, 2563 insertions(+), 1494 deletions(-)

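As a reading aid only (not part of the patch): the combined code base selects the v2.0 or v2.5 register programming at runtime via hw->ver, as dlb2_resource_new.c already does for dlb2_configure_domain_credits() in the previous patch. A minimal sketch of restoring the domain credit CSRs with the same dispatch pattern follows; the function name dlb2_domain_reset_credit_registers and the DLB2_CHP_CFG_VAS_CRD_RST reset-value macro are assumptions for illustration, not identifiers introduced by this series.

	/*
	 * Illustrative sketch: restore the domain credit CSRs to their
	 * reset values, branching on the hardware version the same way
	 * dlb2_configure_domain_credits() does.
	 */
	static void dlb2_domain_reset_credit_registers(struct dlb2_hw *hw,
						       struct dlb2_hw_domain *domain)
	{
		if (hw->ver == DLB2_HW_V2) {
			/* v2.0: separate load-balanced and directed pools */
			DLB2_CSR_WR(hw,
				    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
				    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
			DLB2_CSR_WR(hw,
				    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
				    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
		} else {
			/* v2.5: single combined credit pool */
			DLB2_CSR_WR(hw,
				    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
				    DLB2_CHP_CFG_VAS_CRD_RST);
		}
	}

This branch-on-hw->ver shape is what allows a single code path to perform the correct action for either hardware version without duplicating the reset logic.
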
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
index 4a4185acd..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
@@ -181,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 99c3d031d..041aeaeee 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,69 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			     struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
 static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_dir_pq_pair *port)
 {
@@ -140,37 +77,6 @@ static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	int ret;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		ret = dlb2_drain_dir_cq(hw, port);
-		if (ret < 0)
-			return ret;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -182,63 +88,6 @@ static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count;
 }
 
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_dir_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -271,105 +120,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-
-	return r0.field.count;
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_tkn_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return r0.field.token_count - port->init_tkn_cnt;
-}
-
-static int dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void  *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-
-	return 0;
-}
-
-static int dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			ret = dlb2_drain_ldb_cq(hw, port);
-			if (ret < 0)
-				return ret;
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-
-	return 0;
-}
-
 static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_ldb_queue *queue)
 {
@@ -388,90 +138,6 @@ static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
 	return r0.field.count + r1.field.count + r2.field.count;
 }
 
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i, ret;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-		if (ret < 0)
-			return ret;
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, true);
-	if (ret < 0)
-		return ret;
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1455,1166 +1121,6 @@ dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
 	return domain->num_pending_removals;
 }
 
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_dir_vpp_v r1;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r1.val);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_vf_ldb_vpp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.vpp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_ldb_cq_int_enb r0 = { {0} };
-	union dlb2_chp_ldb_cq_wd_enb r1 = { {0} };
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-				    r0.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
-				    r1.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_dir_cq_int_enb r0 = { {0} };
-	union dlb2_chp_dir_cq_wd_enb r1 = { {0} };
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	r0.field.en_tim = 0;
-	r0.field.en_depth = 0;
-
-	r1.field.wd_enable = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-			    r0.val);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
-			    r1.val);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		union dlb2_sys_ldb_qid2vqid r1 = { {0} };
-		union dlb2_sys_vf_ldb_vqid_v r2 = { {0} };
-		union dlb2_sys_vf_ldb_vqid2qid r3 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    r1.val);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID_V(idx),
-				    r2.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_LDB_VQID2QID(idx),
-				    r3.val);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id *
-		DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		union dlb2_sys_vf_dir_vqid_v r1 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r2 = { {0} };
-		int idx;
-
-		idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), r0.val);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id *
-				DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID_V(idx),
-				    r1.val);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_VF_DIR_VQID2QID(idx),
-				    r2.val);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_chp_sn_chk_enbl r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.en = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int i;
-
-			for (i = 0; i < DLB2_MAX_CQ_COMP_CHECK_LOOPS; i++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (i == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	union dlb2_sys_dir_pp_v r1;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    r1.val);
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_sys_ldb_pp_v r1;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	r1.field.pp_v = 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    r1.val);
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queue's inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver)
-			+ virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_PIPE_GRP_0_SLT_SHFT(queue->sn_slot);
-			offs[1] = DLB2_RO_PIPE_GRP_1_SLT_SHFT(queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-		    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-	domain->num_ldb_credits = 0;
-
-	rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-	domain->num_dir_credits = 0;
-
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (!dlb2_list_empty(&domain->used_ldb_ports[i]))
-			break;
-	}
-
-	if (i == DLB2_NUM_COS_DOMAINS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i], typeof(*port));
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - Reset a DLB scheduling domain and its associated
- *	hardware resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Note: User software *must* stop sending to this domain's producer ports
- * before invoking this function, otherwise undefined behavior will result.
- *
- * Return: returns < 0 on error, 0 otherwise.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain  == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_ldb_cqs(hw, domain, false);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret < 0)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	ret = dlb2_domain_reset_software_state(hw, domain);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
 unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
 {
 	int i, num = 0;
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8f97dd865..641812412 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -34,6 +34,17 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function need only be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
 static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
 {
 	int i;
@@ -1019,3 +1030,2554 @@ int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
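+		/*
+		 * hcw_mem spans 8 16B HCWs (128B), so an aligned 4-HCW
+		 * window always fits inside the array.
+		 */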
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
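+		/* lock_id carries the token count minus one */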
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, the
+	 * device must have scheduled one or more QEs into the CQs, and those
+	 * QEs must be drained as well.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPS
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
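+		/* Reuse the first HCW as a completion-only HCW from here on */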
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty, the
+	 * device must have scheduled one or more QEs into the CQs, and those
+	 * QEs must be drained as well.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
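+	/*
+	 * Valid transitions (any other combination is an error):
+	 *   UNMAPPED          -> MAPPED, MAP_IN_PROG
+	 *   MAPPED            -> UNMAPPED, UNMAP_IN_PROG, MAPPED (prio change)
+	 *   MAP_IN_PROG       -> UNMAPPED, MAPPED
+	 *   UNMAP_IN_PROG     -> UNMAPPED, MAPPED, UNMAP_IN_PROG_PENDING_MAP
+	 *   UNMAP_IN_PROG_PENDING_MAP -> UNMAP_IN_PROG, UNMAPPED
+	 */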
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply, and should only be called by the dynamic CQ
+ * mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
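+/*
+ * The driver uses LSP_LDB_SCHED_CTRL as a command register: the CQ and QIDIX
+ * fields select a {CQ, slot} pair, each *_V bit selects a per-slot flag to
+ * update, and the VALUE bit supplies the value written to the selected flags.
+ */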
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
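+	/*
+	 * Each CQ2QID register holds four QID fields: slots 0-3 live in
+	 * CQ2QID0 and slots 4-7 in CQ2QID1.
+	 */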
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
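+	/*
+	 * The QID-to-CQ index maps are stored in per-{QID, CQ-group}
+	 * registers; each register covers four CQs, selected by CQ ID mod 4.
+	 */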
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the port and set its corresponding has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules to cause the queue's inflight
+	 * count to increase.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
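+	/* Clear only the slot's valid bit; leave its priority field intact */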
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
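+		/*
+		 * In Scalable IOV mode, VPP accesses go through the PF MMIO
+		 * window, so the virtual and physical port IDs are equal for
+		 * translation purposes.
+		 */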
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
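+	/*
+	 * Clearing a queue's VASQID_V bit causes traffic subsequently sent to
+	 * that queue to be dropped.
+	 */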
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQ inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+	}
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB scheduling domain and its associated
+ * resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the pre-condition above is
+ *	    not met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
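
A caller-side sketch (editorial illustration, not part of the patch): how the
PF driver might invoke the reset above once software has stopped enqueuing to
the domain's producer ports, per the pre-condition in the kernel-doc. Only
dlb2_reset_domain() comes from this series; the wrapper name is hypothetical.

	/* Hypothetical wrapper, shown only to illustrate the calling
	 * convention of dlb2_reset_domain() for a PF-originated request.
	 */
	static int example_pf_reset_domain(struct dlb2_hw *hw, u32 domain_id)
	{
		int ret;

		ret = dlb2_reset_domain(hw, domain_id,
					false /* vdev_req: from the PF */,
					0 /* vdev_id: unused for the PF */);
		if (ret)
			return ret; /* -EINVAL, -EFAULT or -ETIMEDOUT */

		return 0;
	}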
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 08/26] event/dlb2: add v2.5 create ldb queue
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (6 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 07/26] event/dlb2: add v2.5 domain reset McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 09/26] event/dlb2: add v2.5 create ldb port McDaniel, Timothy
                       ` (18 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions related to configuring
load-balanced queues. These functions create the queues and
attach the related resources they require, such as
sequence numbers.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
can perform the correct action based on the hardware
version, v2.0 or v2.5.
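
As a quick illustration of the macro pattern described above, the sketch
below (an editorial aid, not part of the diff; the helper name is
hypothetical) shows how a field write looks with the combined register map:
DLB2_BITS_SET() packs the field, and the register macro resolves the correct
offset from hw->ver.

	/* Hypothetical helper illustrating the version-aware access pattern;
	 * the macros and register/field names are taken from the diff below.
	 */
	static void example_write_qid_inflight_limit(struct dlb2_hw *hw,
						     u32 qid, u32 limit)
	{
		u32 reg = 0;

		DLB2_BITS_SET(reg, limit, DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
		DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, qid), reg);
	}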

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 397 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 391 +++++++++++++++++
 2 files changed, 391 insertions(+), 397 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 041aeaeee..f8b85bc57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1149,403 +1149,6 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 	return num;
 }
 
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_vf_ldb_vqid_v r0 = { {0} };
-	union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} };
-	union dlb2_sys_ldb_qid2vqid r2 = { {0} };
-	union dlb2_sys_ldb_vasqid_v r3 = { {0} };
-	union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} };
-	union dlb2_lsp_qid_aqed_active_lim r5 = { {0} };
-	union dlb2_aqed_pipe_qid_hid_width r6 = { {0} };
-	union dlb2_sys_ldb_qid_its r7 = { {0} };
-	union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} };
-	union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} };
-	union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} };
-	union dlb2_chp_ord_qid_sn_map r11 = { {0} };
-	union dlb2_sys_ldb_qid_cfg_v r12 = { {0} };
-	union dlb2_sys_ldb_qid_v r13 = { {0} };
-
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r3.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	r4.field.limit = args->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
-
-	r5.field.limit = queue->aqed_limit;
-
-	if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES)
-		r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
-		    r5.val);
-
-	switch (args->lock_id_comp_level) {
-	case 64:
-		r6.field.compress_code = 1;
-		break;
-	case 128:
-		r6.field.compress_code = 2;
-		break;
-	case 256:
-		r6.field.compress_code = 3;
-		break;
-	case 512:
-		r6.field.compress_code = 4;
-		break;
-	case 1024:
-		r6.field.compress_code = 5;
-		break;
-	case 2048:
-		r6.field.compress_code = 6;
-		break;
-	case 4096:
-		r6.field.compress_code = 7;
-		break;
-	case 0:
-	case 65536:
-		r6.field.compress_code = 0;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id),
-		    r6.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r7.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_QID_ITS(queue->id.phys_id),
-		    r7.val);
-
-	r8.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id),
-		    r8.val);
-
-	r9.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id),
-		    r9.val);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue doesn't use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	r10.field.qid_fid_limit = 512;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
-		    r10.val);
-
-	/* Configure SNs */
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	r11.field.mode = sn_group->mode;
-	r11.field.slot = queue->sn_slot;
-	r11.field.grp  = sn_group->id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
-
-	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
-	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		r0.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val);
-
-		r1.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val);
-
-		r2.field.vqid = queue->id.virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-			    r2.val);
-	}
-
-	r13.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (dlb2_list_empty(&domain->avail_ldb_queues)) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-/**
- * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-	if (ret < 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 641812412..b52d2becd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3581,3 +3581,394 @@ int dlb2_reset_domain(struct dlb2_hw *hw,
 	/* Hardware reset complete. Reset the domain's software state */
 	return dlb2_domain_reset_software_state(hw, domain);
 }
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		      DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		      DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
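
For reference, a caller-side sketch of the interface above (editorial
illustration only; the wrapper name and argument values are hypothetical
examples, not part of the patch), creating an ordered queue from the PF:

	/* Hypothetical example: create an ordered LDB queue with 64 sequence
	 * numbers on behalf of the PF.
	 */
	static int example_create_ordered_ldb_queue(struct dlb2_hw *hw,
						    u32 domain_id)
	{
		struct dlb2_create_ldb_queue_args args = {0};
		struct dlb2_cmd_response resp = {0};
		int ret;

		args.num_sequence_numbers = 64;
		args.num_qid_inflights = 64; /* must be <= sequence numbers */
		args.num_atomic_inflights = 0;
		args.lock_id_comp_level = 0; /* no lock ID compression */

		ret = dlb2_hw_create_ldb_queue(hw, domain_id, &args, &resp,
					       false /* vdev_req */,
					       0 /* vdev_id */);
		if (ret)
			return ret; /* resp.status holds the dlb2_error code */

		return resp.id; /* physical queue ID for a PF request */
	}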
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 09/26] event/dlb2: add v2.5 create ldb port
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (7 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 08/26] event/dlb2: add v2.5 create ldb queue McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 10/26] event/dlb2: add v2.5 create dir port McDaniel, Timothy
                       ` (17 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for
creating load-balanced ports. These functions create the
producer port (PP), configure the consumer queue (CQ), and
validate the port creation arguments.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names.  Additionally,
new register access macros are used so that the code
can perform the correct action based on the hardware
version, v2.0 or v2.5.
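
For orientation while reading the CQ-configuration code in the diff below:
the CQ depth is not programmed directly but is encoded as a
token-depth-select value. The sketch below (an editorial aid with a
hypothetical helper name) mirrors the mapping used by the v2.0 code being
removed; the new code is expected to keep the same encoding.

	/* Hypothetical helper: depths of 8 or less map to select 1, and the
	 * power-of-two depths 16, 32, ..., 1024 map to selects 2 through 8.
	 * Returns -1 for an unsupported depth, mirroring the driver's
	 * validation.
	 */
	static int example_cq_depth_to_tkn_depth_sel(u32 cq_depth)
	{
		u32 sel, depth;

		if (cq_depth <= 8)
			return 1;

		for (sel = 2, depth = 16; depth <= 1024; sel++, depth <<= 1) {
			if (cq_depth == depth)
				return (int)sel;
		}

		return -1;
	}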

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 490 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 471 +++++++++++++++++
 2 files changed, 471 insertions(+), 490 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f8b85bc57..45d096eec 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1216,496 +1216,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_pp2vas r0 = { {0} };
-	union dlb2_sys_ldb_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_ldb_vpp2pp r1 = { {0} };
-		union dlb2_sys_ldb_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_ldb_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_ldb_cq_addr_l r0 = { {0} };
-	union dlb2_sys_ldb_cq_addr_u r1 = { {0} };
-	union dlb2_sys_ldb_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_ldb_tkn_depth_sel r4 = { {0} };
-	union dlb2_chp_hist_list_lim r5 = { {0} };
-	union dlb2_chp_hist_list_base r6 = { {0} };
-	union dlb2_lsp_cq_ldb_infl_lim r7 = { {0} };
-	union dlb2_chp_hist_list_push_ptr r8 = { {0} };
-	union dlb2_chp_hist_list_pop_ptr r9 = { {0} };
-	union dlb2_sys_ldb_cq_at r10 = { {0} };
-	union dlb2_sys_ldb_cq_pasid r11 = { {0} };
-	union dlb2_chp_ldb_cq2vas r12 = { {0} };
-	union dlb2_lsp_cq2priov r13 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_ldb_tkn_cnt r14 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r14.field.token_count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    r14.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	r5.field.limit = port->hist_list_entry_limit - 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(port->id.phys_id), r5.val);
-
-	r6.field.base = port->hist_list_entry_base;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_BASE(port->id.phys_id), r6.val);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	r7.field.limit = args->cq_history_list_size;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r7.val);
-
-	r8.field.push_ptr = r6.field.base;
-	r8.field.generation = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
-		    r8.val);
-
-	r9.field.pop_ptr = r6.field.base;
-	r9.field.generation = 0;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(port->id.phys_id), r12.val);
-
-	/* Disable the port's QID mappings */
-	r13.field.v = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r13.val);
-
-	return 0;
-}
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret < 0)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		if (dlb2_list_empty(&domain->avail_ldb_ports[args->cos_id])) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			if (!dlb2_list_empty(&domain->avail_ldb_ports[i]))
-				break;
-		}
-
-		if (i == DLB2_NUM_COS_DOMAINS) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-/**
- * dlb2_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
- *	its resources.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id, i;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->cos_strict) {
-		cos_id = args->cos_id;
-
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[cos_id],
-					  typeof(*port));
-	} else {
-		int idx;
-
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			idx = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[idx],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-
-		cos_id = idx;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available ldb ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (port->configured) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void
 dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
 			      u32 domain_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index b52d2becd..2eb39e23d 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -3972,3 +3972,474 @@ int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 10/26] event/dlb2: add v2.5 create dir port
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (8 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 09/26] event/dlb2: add v2.5 create ldb port McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 11/26] event/dlb2: add v2.5 create dir queue McDaniel, Timothy
                       ` (16 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level HW functions responsible for
creating directed ports. These functions create the
producer port (PP), configure the consumer queue (CQ)
and its depth, and validate the port creation
arguments.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action for the hardware version
in use, v2.0 or v2.5.
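
As a rough illustration of that scheme (not taken from the driver: the
register name, offsets, and helper definitions below are invented or
approximated for this sketch only), a version-keyed address macro and the
mask-based field helpers can look like this:

#include <stdint.h>
#include <stdio.h>

#define DLB2_HW_V2	0
#define DLB2_HW_V2_5	1

/* Hypothetical per-version offsets for an example register. */
#define DLB2_V2_EXAMPLE_REG(x)		(0x1000 + (x) * 4)
#define DLB2_V2_5_EXAMPLE_REG(x)	(0x2000 + (x) * 4)

/* One macro serves both devices by keying off the hardware version. */
#define DLB2_EXAMPLE_REG(ver, x) \
	((ver) == DLB2_HW_V2 ? DLB2_V2_EXAMPLE_REG(x) : \
				DLB2_V2_5_EXAMPLE_REG(x))

/* Field macros expand to a mask; helpers shift values into position
 * (__builtin_ctz is a GCC/Clang builtin).
 */
#define DLB2_EXAMPLE_REG_VAS	0x0000001Fu
#define DLB2_BITS_SET(x, val, mask) \
	((x) |= (((val) << __builtin_ctz(mask)) & (mask)))
#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> __builtin_ctz(mask))

int main(void)
{
	uint32_t reg = 0;

	/* Build a CSR value, then pick the offset for the device version. */
	DLB2_BITS_SET(reg, 3, DLB2_EXAMPLE_REG_VAS);
	printf("val=0x%x v2.5 offs=0x%x\n",
	       reg, DLB2_EXAMPLE_REG(DLB2_HW_V2_5, 7));
	return 0;
}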

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 426 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 414 +++++++++++++++++
 2 files changed, 414 insertions(+), 426 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 45d096eec..70c52e908 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,18 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	union dlb2_lsp_cq_dir_dsbl reg;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
 static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
 				struct dlb2_dir_pq_pair *queue)
 {
@@ -1216,25 +1204,6 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
 static struct dlb2_dir_pq_pair *
 dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 			    u32 id,
@@ -1256,401 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the queue is already configured, validate
-	 * the queue ID, its domain, and whether the queue is configured.
-	 */
-	if (args->queue_id != -1) {
-		struct dlb2_dir_pq_pair *queue;
-
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->queue_id,
-						    vdev_req,
-						    domain);
-
-		if (queue == NULL || queue->domain_id.phys_id !=
-				domain->id.phys_id ||
-				!queue->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the port's queue is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->queue_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (args->cq_depth != 1 &&
-	    args->cq_depth != 2 &&
-	    args->cq_depth != 4 &&
-	    args->cq_depth != 8 &&
-	    args->cq_depth != 16 &&
-	    args->cq_depth != 32 &&
-	    args->cq_depth != 64 &&
-	    args->cq_depth != 128 &&
-	    args->cq_depth != 256 &&
-	    args->cq_depth != 512 &&
-	    args->cq_depth != 1024) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	union dlb2_sys_dir_pp2vas r0 = { {0} };
-	union dlb2_sys_dir_pp_v r4 = { {0} };
-
-	r0.field.vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), r0.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vpp2pp r1 = { {0} };
-		union dlb2_sys_dir_pp2vdev r2 = { {0} };
-		union dlb2_sys_vf_dir_vpp_v r3 = { {0} };
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		r1.field.pp = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), r1.val);
-
-		r2.field.vdev = vdev_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-			    r2.val);
-
-		r3.field.vpp_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), r3.val);
-	}
-
-	r4.field.pp_v = 1;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    r4.val);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	union dlb2_sys_dir_cq_addr_l r0 = { {0} };
-	union dlb2_sys_dir_cq_addr_u r1 = { {0} };
-	union dlb2_sys_dir_cq2vf_pf_ro r2 = { {0} };
-	union dlb2_chp_dir_cq_tkn_depth_sel r3 = { {0} };
-	union dlb2_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
-	union dlb2_sys_dir_cq_fmt r9 = { {0} };
-	union dlb2_sys_dir_cq_at r10 = { {0} };
-	union dlb2_sys_dir_cq_pasid r11 = { {0} };
-	union dlb2_chp_dir_cq2vas r12 = { {0} };
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	r0.field.addr_l = cq_dma_base >> 6;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
-
-	r1.field.addr_u = cq_dma_base >> 32;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	r2.field.vf = vdev_id;
-	r2.field.is_pf = !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV);
-	r2.field.ro = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), r2.val);
-
-	if (args->cq_depth <= 8) {
-		r3.field.token_depth_select = 1;
-	} else if (args->cq_depth == 16) {
-		r3.field.token_depth_select = 2;
-	} else if (args->cq_depth == 32) {
-		r3.field.token_depth_select = 3;
-	} else if (args->cq_depth == 64) {
-		r3.field.token_depth_select = 4;
-	} else if (args->cq_depth == 128) {
-		r3.field.token_depth_select = 5;
-	} else if (args->cq_depth == 256) {
-		r3.field.token_depth_select = 6;
-	} else if (args->cq_depth == 512) {
-		r3.field.token_depth_select = 7;
-	} else if (args->cq_depth == 1024) {
-		r3.field.token_depth_select = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
-		    r3.val);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		union dlb2_lsp_cq_dir_tkn_cnt r13 = { {0} };
-
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		r13.field.count = port->init_tkn_cnt;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    r13.val);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	r4.field.token_depth_select = r3.field.token_depth_select;
-	r4.field.disable_wb_opt = 0;
-	r4.field.ignore_depth = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
-		    r4.val);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	r9.field.keep_pf_ppid = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), r9.val);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	r10.field.cq_at = 0;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), r10.val);
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		r11.field.pasid = hw->pasid[vdev_id];
-		r11.field.fmt2 = 1;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(port->id.phys_id),
-		    r11.val);
-
-	r12.field.cq2vas = domain->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(port->id.phys_id), r12.val);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret < 0)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - Allocate and initialize a DLB directed port
- *	and queue. The port/queue pair have the same ID and name.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @cq_dma_base: Base DMA address for consumer queue memory
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (args->queue_id != -1)
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->queue_id,
-						   vdev_req,
-						   domain);
-	else
-		port = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					  typeof(*port));
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir ports\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
 static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
 				     struct dlb2_hw_domain *domain,
 				     struct dlb2_dir_pq_pair *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 2eb39e23d..4e4b390dd 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4443,3 +4443,417 @@ int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 11/26] event/dlb2: add v2.5 create dir queue
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (9 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 10/26] event/dlb2: add v2.5 create dir port McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 12/26] event/dlb2: add v2.5 map qid McDaniel, Timothy
                       ` (15 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level HW functions responsible for
creating directed queues. These functions configure
the queue depth threshold and related queue
registers, and validate the queue creation arguments.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action for the hardware version
in use, v2.0 or v2.5.
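
For reference, a minimal sketch of the register-write pattern this series
moves to, contrasted with the old union-based style (the union, mask value,
and helper definition here are placeholders, not the driver's actual
definitions; the bit-field layout assumes the usual little-endian GCC
ordering, as the driver does):

#include <assert.h>
#include <stdint.h>

/* Old v2.0-only style: one union per register with named bit-fields. */
union example_qid_depth_thrsh {
	struct {
		uint32_t thresh : 14;
		uint32_t rsvd	: 18;
	} field;
	uint32_t val;
};

/* New combined style: a plain u32 built up with a mask-based helper. */
#define EXAMPLE_QID_DEPTH_THRSH_THRESH	0x00003FFFu
#define DLB2_BITS_SET(x, val, mask) \
	((x) |= (((val) << __builtin_ctz(mask)) & (mask)))

int main(void)
{
	union example_qid_depth_thrsh r0 = { {0} };
	uint32_t reg = 0;

	r0.field.thresh = 256;	/* old: assign through the bit-field */
	DLB2_BITS_SET(reg, 256, EXAMPLE_QID_DEPTH_THRSH_THRESH); /* new */

	/* Both encode the same CSR value; only the notation changed. */
	assert(r0.val == reg);
	return 0;
}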

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 70c52e908..362deadfe 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,219 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-				domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 4e4b390dd..d4b401250 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4857,3 +4857,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 12/26] event/dlb2: add v2.5 map qid
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (10 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 11/26] event/dlb2: add v2.5 create dir queue McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 13/26] event/dlb2: add v2.5 unmap queue McDaniel, Timothy
                       ` (14 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for
mapping queues to ports. These functions also validate
the map arguments and verify that the number of queues
linked to a load-balanced port does not exceed the
hardware limit.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.
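
As a rough standalone sketch of the slot-availability rule this
patch implements, a map request is acceptable if the port still has
an unused slot, if the queue is already mapped (making the request a
priority update), or if a slot is being vacated by an in-progress
unmap. The struct, enum, and 8-slot limit below are simplified
stand-ins invented for the sketch, not the actual dlb2_resource
types.

#include <stdbool.h>

#define MAX_QIDS_PER_LDB_CQ 8	/* mirrors the 8-queue-per-port limit */

enum slot_state { SLOT_UNUSED, SLOT_MAPPED, SLOT_UNMAP_IN_PROG };

struct port_slots {
	int num_mappings;
	enum slot_state state[MAX_QIDS_PER_LDB_CQ];
	int qid[MAX_QIDS_PER_LDB_CQ];
};

/* Return true if a map request for 'qid' can be accepted on this port. */
bool map_slot_available(const struct port_slots *p, int qid)
{
	int i;

	/* An unused slot is always acceptable. */
	if (p->num_mappings < MAX_QIDS_PER_LDB_CQ)
		return true;

	for (i = 0; i < MAX_QIDS_PER_LDB_CQ; i++) {
		/* Re-mapping an already mapped queue is a priority update. */
		if (p->state[i] == SLOT_MAPPED && p->qid[i] == qid)
			return true;
		/* A slot with an unmap in progress counts as available. */
		if (p->state[i] == SLOT_UNMAP_IN_PROG)
			return true;
	}

	return false;
}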

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 355 ---------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 418 ++++++++++++++++++
 2 files changed, 418 insertions(+), 355 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 362deadfe..d59df5e39 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,68 +1245,6 @@ dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
 }
 
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	union dlb2_lsp_cq2priov r0;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id));
-
-	r0.field.v |= 1 << slot;
-	r0.field.prio |= (args->priority & 0x7) << slot * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1355,299 +1293,6 @@ dlb2_get_domain_used_ldb_port(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	struct dlb2_ldb_queue *queue;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i, id;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state st;
-
-			if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-				DLB2_HW_ERR(hw,
-					    "[%s():%d] Internal error: port slot tracking failed\n",
-					    __func__, __LINE__);
-				return -EFAULT;
-			}
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
 			       u32 domain_id,
 			       struct dlb2_unmap_qid_args *args,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index d4b401250..5277a2643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5058,3 +5058,421 @@ int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
 	return 0;
 }
 
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 13/26] event/dlb2: add v2.5 unmap queue
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (11 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 12/26] event/dlb2: add v2.5 map qid McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 14/26] event/dlb2: add v2.5 start domain McDaniel, Timothy
                       ` (13 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for
removing the linkage between a queue and a load-balanced
port. Runtime checks are performed on the port and queue
to make sure their state is appropriate for the unmap
operation, and the unmap arguments are also validated.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.
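
The unmap decision can be sketched, very roughly, as a single state
transition: an aborted in-progress map goes straight back to
unmapped, a map pending behind an unmap simply drops to
unmap-in-progress, and a fully mapped queue starts the asynchronous
CQ drain. The enum values below loosely mirror the driver's
dlb2_qid_map_state names, but the helper itself is illustrative
only.

enum qid_map_state {
	QUEUE_UNMAPPED,
	QUEUE_MAPPED,
	QUEUE_MAP_IN_PROG,
	QUEUE_UNMAP_IN_PROG,
	QUEUE_UNMAP_IN_PROG_PENDING_MAP,
};

/* Next state for a slot when an unmap is requested for its queue. */
enum qid_map_state unmap_next_state(enum qid_map_state cur)
{
	switch (cur) {
	case QUEUE_MAP_IN_PROG:
		/* The map never completed, so the slot is simply released. */
		return QUEUE_UNMAPPED;
	case QUEUE_UNMAP_IN_PROG_PENDING_MAP:
		/* Drop the pending map; an unmap is already in flight. */
		return QUEUE_UNMAP_IN_PROG;
	case QUEUE_MAPPED:
		/* Kick off the asynchronous drain-and-unmap procedure. */
		return QUEUE_UNMAP_IN_PROG;
	default:
		/* Nothing mapped for this queue: the request is invalid. */
		return cur;
	}
}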

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 331 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 298 ++++++++++++++++
 2 files changed, 298 insertions(+), 331 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d59df5e39..ab5b080c1 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1225,26 +1225,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_domain_ldb_queue(u32 id,
 			  bool vdev_req,
@@ -1265,317 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter)
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-	}
-
-	return NULL;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (queue == NULL || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		return 0;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-}
-
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret, id;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: queue not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-			DLB2_HW_ERR(hw,
-				    "[%s():%d] Internal error: port slot tracking failed\n",
-				    __func__, __LINE__);
-			return -EFAULT;
-		}
-
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (port == NULL || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
 static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 struct dlb2_cmd_response *resp,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 5277a2643..181922fe3 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5476,3 +5476,301 @@ int dlb2_hw_map_qid(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 14/26] event/dlb2: add v2.5 start domain
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (12 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 13/26] event/dlb2: add v2.5 unmap queue McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 15/26] event/dlb2: add v2.5 credit scheme McDaniel, Timothy
                       ` (12 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for
starting a scheduling domain. Once a domain is started,
its resources can no longer be configured, except for
QID remapping and port enable/disable. The start-domain
arguments are validated, and an error is returned if
validation fails or if the domain is not configured or
has already been started.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action based on the hardware
version, v2.0 or v2.5.
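
At start time, the write-permission enable amounts to one register
write per owned queue, at an offset derived from the domain and
queue IDs. A minimal sketch of that pattern follows; the register
layout, the 32-queue constant, and the csr_wr() helper are invented
for illustration and do not match the real dlb2_regs definitions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_NUM_LDB_QUEUES 32                       /* assumed for the sketch */
#define LDB_VASQID_V(offs) (0x1000u + (offs) * 4u)  /* made-up register layout */

/* Stand-in for the driver's CSR write helper. */
void csr_wr(uint32_t reg, uint32_t val)
{
	printf("CSR write: reg=0x%" PRIx32 " val=0x%" PRIx32 "\n", reg, val);
}

/* Enable write permission for one load-balanced queue owned by a domain. */
void enable_ldb_queue_writes(uint32_t domain_phys_id, uint32_t queue_phys_id)
{
	uint32_t offs = domain_phys_id * MAX_NUM_LDB_QUEUES + queue_phys_id;

	csr_wr(LDB_VASQID_V(offs), 1u /* VASQID valid bit */);
}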

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 123 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 130 ++++++++++++++++++
 2 files changed, 130 insertions(+), 123 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index ab5b080c1..1e66ebf50 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1245,129 +1245,6 @@ dlb2_get_domain_ldb_queue(u32 id,
 	return NULL;
 }
 
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - Lock the domain configuration
- * @hw:	Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @arg: User-provided arguments (unused, here for ioctl callback template).
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *arg,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(arg);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		union dlb2_sys_ldb_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r0.val);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		union dlb2_sys_dir_vasqid_v r0 = { {0} };
-		unsigned int offs;
-
-		r0.field.vasqid_v = 1;
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
 static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
 					 u32 domain_id,
 					 u32 queue_id,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 181922fe3..e806a60ac 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5774,3 +5774,133 @@ int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @arg: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 15/26] event/dlb2: add v2.5 credit scheme
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (13 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 14/26] event/dlb2: add v2.5 start domain McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 16/26] event/dlb2: add v2.5 queue depth functions McDaniel, Timothy
                       ` (11 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

DLB v2.5 uses a different credit scheme than DLB v2.0.
Specifically, DLB v2.5 provides a single credit pool shared by
load-balanced and directed traffic, whereas DLB v2.0 maintains a
separate pool for each traffic type.
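
To illustrate the difference, below is a minimal sketch of the
per-port credit assignment following the logic in this patch. The
helper name dlb2_port_assign_credits() is hypothetical; the field,
macro, and pool names are the ones used in the patch, and the real
code additionally caps the directed watermark by the available
directed credits.

static void
dlb2_port_assign_credits(struct dlb2_eventdev *dlb2,
			 struct dlb2_port *qm_port,
			 uint16_t enqueue_depth)
{
	if (dlb2->version == DLB2_HW_V2) {
		/* v2.0: separate load-balanced and directed pools */
		qm_port->ldb_credits = enqueue_depth;
		qm_port->dir_credits = enqueue_depth;
		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
	} else {
		/* v2.5: one combined pool for all traffic types */
		qm_port->credits = enqueue_depth;
		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
	}
}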

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c | 311 ++++++++++++++++++++++++++------------
 1 file changed, 212 insertions(+), 99 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 0048f6a1b..cc6495b76 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -436,8 +436,13 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	 */
 	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
 	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
-	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;
-
+	if (dlb2->version == DLB2_HW_V2_5) {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_credits;
+	} else {
+		evdev_dlb2_default_info.max_num_events +=
+			dlb2->max_ldb_credits;
+	}
 	evdev_dlb2_default_info.max_event_queues =
 		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
 			RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -451,7 +456,8 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 
 static int
 dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
-			    const struct dlb2_hw_rsrcs *resources_asked)
+			    const struct dlb2_hw_rsrcs *resources_asked,
+			    uint8_t device_version)
 {
 	int ret = 0;
 	struct dlb2_create_sched_domain_args *cfg;
@@ -468,8 +474,10 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	/* DIR ports and queues */
 
 	cfg->num_dir_ports = resources_asked->num_dir_ports;
-
-	cfg->num_dir_credits = resources_asked->num_dir_credits;
+	if (device_version == DLB2_HW_V2_5)
+		cfg->num_credits = resources_asked->num_credits;
+	else
+		cfg->num_dir_credits = resources_asked->num_dir_credits;
 
 	/* LDB queues */
 
@@ -509,8 +517,8 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 		break;
 	}
 
-	cfg->num_ldb_credits =
-		resources_asked->num_ldb_credits;
+	if (device_version == DLB2_HW_V2)
+		cfg->num_ldb_credits = resources_asked->num_ldb_credits;
 
 	cfg->num_atomic_inflights =
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
@@ -519,14 +527,24 @@ dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
 		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
 
-	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
-		     cfg->num_ldb_queues,
-		     resources_asked->num_ldb_ports,
-		     cfg->num_dir_ports,
-		     cfg->num_atomic_inflights,
-		     cfg->num_hist_list_entries,
-		     cfg->num_ldb_credits,
-		     cfg->num_dir_credits);
+	if (device_version == DLB2_HW_V2_5) {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_credits);
+	} else {
+		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
+			     cfg->num_ldb_queues,
+			     resources_asked->num_ldb_ports,
+			     cfg->num_dir_ports,
+			     cfg->num_atomic_inflights,
+			     cfg->num_hist_list_entries,
+			     cfg->num_ldb_credits,
+			     cfg->num_dir_credits);
+	}
 
 	/* Configure the QM */
 
@@ -606,7 +624,6 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	 */
 	if (dlb2->configured) {
 		dlb2_hw_reset_sched_domain(dev, true);
-
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
 			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
@@ -665,20 +682,26 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	/* 1 dir queue per dir port */
 	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;
 
-	/* Scale down nb_events_limit by 4 for directed credits, since there
-	 * are 4x as many load-balanced credits.
-	 */
-	rsrcs->num_ldb_credits = 0;
-	rsrcs->num_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		rsrcs->num_credits = 0;
+		if (rsrcs->num_ldb_queues || rsrcs->num_dir_ports)
+			rsrcs->num_credits = config->nb_events_limit;
+	} else {
+		/* Scale down nb_events_limit by 4 for directed credits,
+		 * since there are 4x as many load-balanced credits.
+		 */
+		rsrcs->num_ldb_credits = 0;
+		rsrcs->num_dir_credits = 0;
 
-	if (rsrcs->num_ldb_queues)
-		rsrcs->num_ldb_credits = config->nb_events_limit;
-	if (rsrcs->num_dir_ports)
-		rsrcs->num_dir_credits = config->nb_events_limit / 4;
-	if (dlb2->num_dir_credits_override != -1)
-		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+		if (rsrcs->num_ldb_queues)
+			rsrcs->num_ldb_credits = config->nb_events_limit;
+		if (rsrcs->num_dir_ports)
+			rsrcs->num_dir_credits = config->nb_events_limit / 4;
+		if (dlb2->num_dir_credits_override != -1)
+			rsrcs->num_dir_credits = dlb2->num_dir_credits_override;
+	}
 
-	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
+	if (dlb2_hw_create_sched_domain(handle, rsrcs, dlb2->version) < 0) {
 		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
 		return -ENODEV;
 	}
@@ -693,10 +716,15 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
 	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
 	dlb2->num_dir_queues = dlb2->num_dir_ports;
-	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
-	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
-	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
-	dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	if (dlb2->version == DLB2_HW_V2_5) {
+		dlb2->credit_pool = rsrcs->num_credits;
+		dlb2->max_credits = rsrcs->num_credits;
+	} else {
+		dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
+		dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
+		dlb2->dir_credit_pool = rsrcs->num_dir_credits;
+		dlb2->max_dir_credits = rsrcs->num_dir_credits;
+	}
 
 	dlb2->configured = true;
 
@@ -1170,8 +1198,9 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (handle == NULL)
 		return -EINVAL;
@@ -1206,15 +1235,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* If there are no directed ports, the kernel driver will ignore this
-	 * port's directed credit settings. Don't use enqueue_depth if it would
-	 * require more directed credits than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* If there are no directed ports, the kernel driver will
+		 * ignore this port's directed credit settings. Don't use
+		 * enqueue_depth if it would require more directed credits
+		 * than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1249,8 +1281,12 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1298,17 +1334,26 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     qm_port->ldb_credits,
-		     qm_port->dir_credits);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, ldb credits=%d, dir credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->ldb_credits,
+			     qm_port->dir_credits);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created ldb port %d, depth = %d, credits=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     qm_port->credits);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -1356,8 +1401,9 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	struct dlb2_port *qm_port = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	uint32_t qm_port_id;
-	uint16_t ldb_credit_high_watermark;
-	uint16_t dir_credit_high_watermark;
+	uint16_t ldb_credit_high_watermark = 0;
+	uint16_t dir_credit_high_watermark = 0;
+	uint16_t credit_high_watermark = 0;
 
 	if (dlb2 == NULL || handle == NULL)
 		return -EINVAL;
@@ -1386,14 +1432,16 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	/* User controls the LDB high watermark via enqueue depth. The DIR high
 	 * watermark is equal, unless the directed credit pool is too small.
 	 */
-	ldb_credit_high_watermark = enqueue_depth;
-
-	/* Don't use enqueue_depth if it would require more directed credits
-	 * than are available.
-	 */
-	dir_credit_high_watermark =
-		RTE_MIN(enqueue_depth,
-			handle->cfg.num_dir_credits / dlb2->num_ports);
+	if (dlb2->version == DLB2_HW_V2) {
+		ldb_credit_high_watermark = enqueue_depth;
+		/* Don't use enqueue_depth if it would require more directed
+		 * credits than are available.
+		 */
+		dir_credit_high_watermark =
+			RTE_MIN(enqueue_depth,
+				handle->cfg.num_dir_credits / dlb2->num_ports);
+	} else
+		credit_high_watermark = enqueue_depth;
 
 	/* Per QM values */
 
@@ -1430,8 +1478,12 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 
 	qm_port->id = qm_port_id;
 
-	qm_port->cached_ldb_credits = 0;
-	qm_port->cached_dir_credits = 0;
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->cached_ldb_credits = 0;
+		qm_port->cached_dir_credits = 0;
+	} else
+		qm_port->cached_credits = 0;
+
 	/* CQs with depth < 8 use an 8-entry queue, but withhold credits so
 	 * the effective depth is smaller.
 	 */
@@ -1467,17 +1519,26 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	qm_port->state = PORT_STARTED; /* enabled at create time */
 	qm_port->config_state = DLB2_CONFIGURED;
 
-	qm_port->dir_credits = dir_credit_high_watermark;
-	qm_port->ldb_credits = ldb_credit_high_watermark;
-	qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
-	qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
-
-	DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
-		     qm_port_id,
-		     dequeue_depth,
-		     dir_credit_high_watermark,
-		     ldb_credit_high_watermark);
+	if (dlb2->version == DLB2_HW_V2) {
+		qm_port->dir_credits = dir_credit_high_watermark;
+		qm_port->ldb_credits = ldb_credit_high_watermark;
+		qm_port->credit_pool[DLB2_DIR_QUEUE] = &dlb2->dir_credit_pool;
+		qm_port->credit_pool[DLB2_LDB_QUEUE] = &dlb2->ldb_credit_pool;
+
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d,%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     dir_credit_high_watermark,
+			     ldb_credit_high_watermark);
+	} else {
+		qm_port->credits = credit_high_watermark;
+		qm_port->credit_pool[DLB2_COMBINED_POOL] = &dlb2->credit_pool;
 
+		DLB2_LOG_DBG("dlb2: created dir port %d, depth = %d cr=%d\n",
+			     qm_port_id,
+			     dequeue_depth,
+			     credit_high_watermark);
+	}
 	rte_spinlock_unlock(&handle->resource_lock);
 
 	return 0;
@@ -2297,6 +2358,24 @@ dlb2_check_enqueue_hw_dir_credits(struct dlb2_port *qm_port)
 	return 0;
 }
 
+static inline int
+dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port)
+{
+	if (unlikely(qm_port->cached_credits == 0)) {
+		qm_port->cached_credits =
+			dlb2_port_credits_get(qm_port,
+					      DLB2_COMBINED_POOL);
+		if (unlikely(qm_port->cached_credits == 0)) {
+			DLB2_INC_STAT(
+			qm_port->ev_port->stats.traffic.tx_nospc_hw_credits, 1);
+			DLB2_LOG_DBG("credits exhausted\n");
+			return 1; /* credits exhausted */
+		}
+	}
+
+	return 0;
+}
+
 static __rte_always_inline void
 dlb2_pp_write(struct dlb2_enqueue_qe *qe4,
 	      struct process_local_port_data *port_data)
@@ -2565,12 +2644,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	if (!qm_queue->is_directed) {
 		/* Load balanced destination queue */
 
-		if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_ldb_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_ldb_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_ldb_credits;
-
 		switch (ev->sched_type) {
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -2602,12 +2688,19 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	} else {
 		/* Directed destination queue */
 
-		if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
-			rte_errno = -ENOSPC;
-			return 1;
+		if (dlb2->version == DLB2_HW_V2) {
+			if (dlb2_check_enqueue_hw_dir_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_dir_credits;
+		} else {
+			if (dlb2_check_enqueue_hw_credits(qm_port)) {
+				rte_errno = -ENOSPC;
+				return 1;
+			}
+			cached_credits = &qm_port->cached_credits;
 		}
-		cached_credits = &qm_port->cached_dir_credits;
-
 		DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_DIRECTED\n");
 
 		*sched_type = DLB2_SCHED_DIRECTED;
@@ -2891,20 +2984,40 @@ dlb2_port_credits_inc(struct dlb2_port *qm_port, int num)
 
 	/* increment port credits, and return to pool if exceeds threshold */
 	if (!qm_port->is_directed) {
-		qm_port->cached_ldb_credits += num;
-		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_ldb_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_ldb_credits += num;
+			if (qm_port->cached_ldb_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_LDB_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_ldb_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	} else {
-		qm_port->cached_dir_credits += num;
-		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
-				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
-			qm_port->cached_dir_credits -= batch_size;
+		if (qm_port->dlb2->version == DLB2_HW_V2) {
+			qm_port->cached_dir_credits += num;
+			if (qm_port->cached_dir_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+					qm_port->credit_pool[DLB2_DIR_QUEUE],
+					batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_dir_credits -= batch_size;
+			}
+		} else {
+			qm_port->cached_credits += num;
+			if (qm_port->cached_credits >= 2 * batch_size) {
+				__atomic_fetch_add(
+				      qm_port->credit_pool[DLB2_COMBINED_POOL],
+				      batch_size, __ATOMIC_SEQ_CST);
+				qm_port->cached_credits -= batch_size;
+			}
 		}
 	}
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 16/26] event/dlb2: add v2.5 queue depth functions
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (14 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 15/26] event/dlb2: add v2.5 credit scheme McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 17/26] event/dlb2: add v2.5 finish map/unmap McDaniel, Timothy
                       ` (10 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for reading
the queue depth, and validate the command arguments.

The logic is very similar to what was done for v2.0, but the new
combined register map for v2.0 and v2.5 uses new register names
and bit names. Additionally, new register access macros are used
so that the code performs the correct access based on the
hardware version.
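
As a rough illustration of that pattern, the rewritten directed
queue depth read might look like the sketch below. The
version-aware register macro and bit-field macro names are
assumptions here (modeled on macros such as
DLB2_MAX_NUM_DIR_PORTS(hw->ver) and DLB2_BIT_SET() used elsewhere
in this series); only DLB2_CSR_RD() and the register name are
confirmed by the code being removed.

/* Sketch: read a directed queue's enqueue count using the combined
 * v2.0/v2.5 register map. Macro names and signatures are assumed.
 */
static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
				struct dlb2_dir_pq_pair *queue)
{
	u32 cnt;

	cnt = DLB2_CSR_RD(hw,
			  DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
						       queue->id.phys_id));

	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
}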

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 160 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 +++++++++++++++
 2 files changed, 135 insertions(+), 160 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1e66ebf50..8c1d8c782 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,17 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -108,24 +97,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1204,134 +1175,3 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 	return 0;
 }
 
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index e806a60ac..6a5af0c1e 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5904,3 +5904,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 17/26] event/dlb2: add v2.5 finish map/unmap
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (15 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 16/26] event/dlb2: add v2.5 queue depth functions McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 18/26] event/dlb2: add v2.5 sparse cq mode McDaniel, Timothy
                       ` (9 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level hardware functions responsible for finishing
the queue map/unmap operation, which completes asynchronously.

The logic is very similar to what was done for v2.0, but the new
combined register map for v2.0 and v2.5 uses new register names
and bit names. Additionally, new register access macros are used
so that the code performs the correct access based on the
hardware version.
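
Because completion is asynchronous, the exported
dlb2_finish_map_qid_procedures() and
dlb2_finish_unmap_qid_procedures() helpers added here are intended
to be called repeatedly from the thread responsible for finishing
map/unmap procedures until nothing remains outstanding. A minimal
sketch of such a poll (the wrapper name is hypothetical) is:

/* Sketch: drive outstanding map/unmap procedures forward. The
 * wrapper name is hypothetical; the two dlb2_finish_*() helpers
 * are the ones added by this patch.
 */
static void dlb2_poll_qid_procedures(struct dlb2_hw *hw)
{
	unsigned int pending;

	/* Each call makes forward progress where possible and returns
	 * the number of procedures still outstanding.
	 */
	pending = dlb2_finish_unmap_qid_procedures(hw);
	pending += dlb2_finish_map_qid_procedures(hw);

	if (pending != 0)
		DLB2_HW_DBG(hw, "%u QID procedures still pending\n",
			    pending);
}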

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 1054 -----------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    |   50 +
 2 files changed, 50 insertions(+), 1054 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 8c1d8c782..f05f750f5 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -54,1060 +54,6 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
 }
 
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	reg.field.disabled = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_dsbl reg;
-
-	reg.field.disabled = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
-			if (queue->id.virt_id == id)
-				return queue;
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
-		if (queue->id.virt_id == id)
-			return queue;
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration)
-		if (domain->id.virt_id == id)
-			return domain;
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 0;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r0 = { {0} };
-
-	r0.field.cq = port->id.phys_id;
-	r0.field.qidix = slot;
-	r0.field.value = 1;
-	r0.field.inflight_ok_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r0.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_lsp_cq2qid0 r1;
-	union dlb2_atm_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix_00 r3;
-	union dlb2_lsp_qid2cqidix2_00 r4;
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id));
-
-	r0.field.v |= 1 << i;
-	r0.field.prio |= (priority & 0x7) << i * 3;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(p->id.phys_id));
-	else
-		r1.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		r1.field.qid_p0 = q->id.phys_id;
-	if (i == 1 || i == 5)
-		r1.field.qid_p1 = q->id.phys_id;
-	if (i == 2 || i == 6)
-		r1.field.qid_p2 = q->id.phys_id;
-	if (i == 3 || i == 7)
-		r1.field.qid_p3 = q->id.phys_id;
-
-	if (i < 4)
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID0(p->id.phys_id), r1.val);
-	else
-		DLB2_CSR_WR(hw, DLB2_LSP_CQ2QID1(p->id.phys_id), r1.val);
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(q->id.phys_id,
-						 p->id.phys_id / 4));
-
-	r4.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		r2.field.cq_p0 |= 1 << i;
-		r3.field.cq_p0 |= 1 << i;
-		r4.field.cq_p0 |= 1 << i;
-		break;
-
-	case 1:
-		r2.field.cq_p1 |= 1 << i;
-		r3.field.cq_p1 |= 1 << i;
-		r4.field.cq_p1 |= 1 << i;
-		break;
-
-	case 2:
-		r2.field.cq_p2 |= 1 << i;
-		r3.field.cq_p2 |= 1 << i;
-		r4.field.cq_p2 |= 1 << i;
-		break;
-
-	case 3:
-		r2.field.cq_p3 |= 1 << i;
-		r3.field.cq_p3 |= 1 << i;
-		r4.field.cq_p3 |= 1 << i;
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    r3.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(q->id.phys_id, p->id.phys_id / 4),
-		    r4.val);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r1;
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	/* Set the atomic scheduling haswork bit */
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.rlist_haswork_v = r0.field.count > 0;
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 1;
-	r2.field.nalb_haswork_v = (r1.field.count > 0);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	union dlb2_lsp_ldb_sched_ctrl r2 = { {0} };
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.rlist_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	memset(&r2, 0, sizeof(r2));
-
-	r2.field.cq = port->id.phys_id;
-	r2.field.qidix = slot;
-	r2.field.value = 0;
-	r2.field.nalb_haswork_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL, r2.val);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_ldb_infl_lim r0 = { {0} };
-
-	r0.field.limit = queue->num_qid_inflights;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	union dlb2_lsp_qid_ldb_infl_cnt r0;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Static map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules to cause the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	union dlb2_lsp_qid_ldb_infl_cnt r0 = { {0} };
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	if (slot >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
-
-	if (r0.field.count) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		union dlb2_lsp_qid_ldb_infl_cnt r0;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count)
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		r0.val = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_INFL_CNT(qid));
-
-		if (r0.field.count) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	union dlb2_lsp_cq2priov r0;
-	union dlb2_atm_qid2cqidix_00 r1;
-	union dlb2_lsp_qid2cqidix_00 r2;
-	union dlb2_lsp_qid2cqidix2_00 r3;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	if (i >= DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: port slot tracking failed\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(port_id));
-
-	r0.field.v &= ~(1 << i);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(port_id), r0.val);
-
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4));
-
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4));
-
-	r3.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		r1.field.cq_p0 &= ~(1 << i);
-		r2.field.cq_p0 &= ~(1 << i);
-		r3.field.cq_p0 &= ~(1 << i);
-		break;
-
-	case 1:
-		r1.field.cq_p1 &= ~(1 << i);
-		r2.field.cq_p1 &= ~(1 << i);
-		r3.field.cq_p1 &= ~(1 << i);
-		break;
-
-	case 2:
-		r1.field.cq_p2 &= ~(1 << i);
-		r2.field.cq_p2 &= ~(1 << i);
-		r3.field.cq_p2 &= ~(1 << i);
-		break;
-
-	case 3:
-		r1.field.cq_p3 &= ~(1 << i);
-		r2.field.cq_p3 &= ~(1 << i);
-		r3.field.cq_p3 &= ~(1 << i);
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4),
-		    r1.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(queue_id, port_id / 4),
-		    r2.val);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(queue_id, port_id / 4),
-		    r3.val);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it wasn't manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	union dlb2_lsp_cq_ldb_infl_cnt r0;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	r0.val = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
-	if (r0.field.count > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 6a5af0c1e..8cd1762cf 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6039,3 +6039,53 @@ int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 18/26] event/dlb2: add v2.5 sparse cq mode
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (16 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 17/26] event/dlb2: add v2.5 finish map/unmap McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 19/26] event/dlb2: add v2.5 sequence number management McDaniel, Timothy
                       ` (8 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level HW functions responsible for
configuring sparse CQ mode, where each cache line
contains just one QE instead of 4.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action for the hardware version
in use.
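
As a rough illustration (not part of the patch; the struct and field
names below are hypothetical), the two CQ modes differ only in how
many QEs share a 64-byte cache line:

    #include <stdint.h>

    struct qe {                    /* 16-byte queue entry (QE) */
            uint64_t data;
            uint64_t meta;
    };

    /* Default mode: four QEs packed into one 64B cache line. */
    struct cq_line_packed {
            struct qe qe[4];
    };

    /* Sparse mode: a single QE per 64B cache line, padded out, so
     * each dequeue lands on a fresh cache line.
     */
    struct cq_line_sparse {
            struct qe qe;
            uint8_t pad[48];
    };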

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 19/26] event/dlb2: add v2.5 sequence number management
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (17 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 18/26] event/dlb2: add v2.5 sparse cq mode McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 20/26] event/dlb2: use new implementation of resource header McDaniel, Timothy
                       ` (7 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the low-level HW functions that perform sequence number
management. These include getting a group's number of sequence
numbers per queue, managing in-use slots, getting the current
occupancy, and setting the number of sequence numbers for a group.

The logic is very similar to what was done for v2.0,
but the new combined register map for v2.0 and v2.5
uses new register names and bit names. Additionally,
new register access macros are used so that the code
performs the correct action for the hardware version
in use.
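
For reference, a minimal usage sketch of the reworked interfaces
(not part of the patch; the wrapper function and its error handling
are hypothetical):

    /* Configure SN group 0 for 256 sequence numbers per queue. The
     * value must be one of 64, 128, 256, 512 or 1024, and the call
     * must be made before the first ordered load-balanced queue in
     * the group is configured.
     */
    static int example_configure_sn_group(struct dlb2_hw *hw)
    {
            int ret;

            ret = dlb2_set_group_sequence_numbers(hw, 0, 256);
            if (ret)
                    return ret; /* -EINVAL or -EPERM */

            /* Now reports 256 sequence numbers per queue. */
            return dlb2_get_group_sequence_numbers(hw, 0);
    }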

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    |  67 -----------
 drivers/event/dlb2/pf/base/dlb2_resource.h    |   4 +-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 105 ++++++++++++++++++
 3 files changed, 107 insertions(+), 69 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index d53cce643..e8a9d52f6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,70 +32,3 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 2e13193bb..00a0b6b57 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -792,8 +792,8 @@ int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
  * ordered queue is configured.
  */
 int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val);
+				    u32 group_id,
+				    u32 val);
 
 /**
  * dlb2_reset_domain() - reset a scheduling domain
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 0f18bfeff..927b65568 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6128,3 +6128,108 @@ void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
 	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
 }
 
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's number of in-use
+ * slots.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 20/26] event/dlb2: use new implementation of resource header
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (18 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 19/26] event/dlb2: add v2.5 sequence number management McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 21/26] event/dlb2: use new implementation of resource file McDaniel, Timothy
                       ` (6 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

A temporary version of dlb2_resource.h (dlb2_resource_new.h) was used
by the previous commits in this patch series. Merge the two files
now that DLB v2.5 support has been fully added to dlb2_resource.c.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |  2 -
 drivers/event/dlb2/pf/base/dlb2_resource.h    | 36 +++++++++
 .../event/dlb2/pf/base/dlb2_resource_new.c    |  2 +-
 .../event/dlb2/pf/base/dlb2_resource_new.h    | 73 -------------------
 drivers/event/dlb2/pf/dlb2_main.c             |  2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |  2 +-
 6 files changed, 39 insertions(+), 78 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
index 3b0ca84ba..cffe22f3c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_osdep.h
+++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
@@ -17,8 +17,6 @@
 #include <rte_spinlock.h>
 #include "../dlb2_main.h"
 
-/* TEMPORARY inclusion of both headers for merge */
-#include "dlb2_resource_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_log.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
index 00a0b6b57..684049cd6 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.h
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
@@ -8,6 +8,42 @@
 #include "dlb2_user.h"
 #include "dlb2_osdep_types.h"
 
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw);
+
 /**
  * dlb2_resource_reset() - reset in-use resources to their initial state
  * @hw: dlb2_hw handle for a particular device.
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 927b65568..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -11,7 +11,7 @@
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
 #include "dlb2_regs_new.h"
-#include "dlb2_resource_new.h" /* TEMP FOR UPSTREAMPATCHES */
+#include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
 #include "../../dlb2_inline_fns.h"
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.h b/drivers/event/dlb2/pf/base/dlb2_resource_new.h
deleted file mode 100644
index 51f31543c..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_RESOURCE_NEW_H
-#define __DLB2_RESOURCE_NEW_H
-
-#include "dlb2_user.h"
-#include "dlb2_osdep_types.h"
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver);
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw);
-
-#endif /* __DLB2_RESOURCE_NEW_H */
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 5c0640b3c..bac07f097 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -17,7 +17,7 @@
 
 #include "base/dlb2_regs_new.h"
 #include "base/dlb2_hw_types_new.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
 #include "../dlb2_user.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 1e815f20d..880964a29 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -40,7 +40,7 @@
 #include "dlb2_main.h"
 #include "base/dlb2_hw_types_new.h"
 #include "base/dlb2_osdep.h"
-#include "base/dlb2_resource_new.h"
+#include "base/dlb2_resource.h"
 
 static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 21/26] event/dlb2: use new implementation of resource file
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (19 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 20/26] event/dlb2: use new implementation of resource header McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 22/26] event/dlb2: use new implementation of HW types header McDaniel, Timothy
                       ` (5 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, and the
previous commits in this series removed the old implementation from
dlb2_resource.c. Move the new implementation from dlb2_resource_new.c
into dlb2_resource.c, delete dlb2_resource_new.c, and update the
meson build file so that only the merged file is built.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build                |    1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 6205 +++++++++++++++-
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 6235 -----------------
 3 files changed, 6203 insertions(+), 6238 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index 0c848161e..f963589fd 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -15,7 +15,6 @@ sources = files(
         'pf/dlb2_main.c',
         'pf/dlb2_pf.c',
         'pf/base/dlb2_resource.c',
-        'pf/base/dlb2_resource_new.c',
         'rte_pmd_dlb2.c',
         'dlb2_selftest.c',
 )
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index e8a9d52f6..2f66b2c71 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,13 +2,15 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
+
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types.h"
+#include "dlb2_hw_types_new.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
+#include "dlb2_regs_new.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
@@ -32,3 +34,6202 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
+/*
+ * The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function only need be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb2_flush_csr(struct dlb2_hw *hw)
+{
+	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
+}
+
+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	dlb2_list_init_head(&domain->used_ldb_queues);
+	dlb2_list_init_head(&domain->used_dir_pq_pairs);
+	dlb2_list_init_head(&domain->avail_ldb_queues);
+	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->used_ldb_ports[i]);
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
+}
+
+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
+{
+	int i;
+	dlb2_list_init_head(&rsrc->avail_domains);
+	dlb2_list_init_head(&rsrc->used_domains);
+	dlb2_list_init_head(&rsrc->avail_ldb_queues);
+	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
+}
+
+/**
+ * dlb2_resource_free() - free device state memory
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb2_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb2_resource_free(struct dlb2_hw *hw)
+{
+	int i;
+
+	if (hw->pf.avail_hist_list_entries)
+		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		if (hw->vdev[i].avail_hist_list_entries)
+			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
+	}
+}
+
+/**
+ * dlb2_resource_init() - initialize the device
+ * @hw: pointer to struct dlb2_hw.
+ * @ver: device version.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization, and the dlb2_hw structure should
+ * be zero-initialized before calling the function.
+ *
+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
+ * device is reset.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	struct dlb2_list_entry *list;
+	unsigned int i;
+	int ret;
+
+	/*
+	 * For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. The port->QID mapping is
+	 * application dependent, but the driver interleaves port IDs as much
+	 * as possible to reduce the likelihood of sequential ports mapping to
+	 * the same QID(s). This initial allocation of port IDs maximizes the
+	 * average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
+		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
+		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
+		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
+		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
+	};
+
+	hw->ver = ver;
+
+	dlb2_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
+		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
+
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->pf.num_avail_ldb_ports[i] =
+			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
+		struct dlb2_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
+			      &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	if (hw->ver == DLB2_HW_V2) {
+		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
+		hw->pf.num_avail_dqed_entries =
+			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
+	} else {
+		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
+	}
+
+	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
+				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+	if (ret)
+		goto unwind;
+
+	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
+	if (ret)
+		goto unwind;
+
+	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
+		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
+					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
+		if (ret)
+			goto unwind;
+
+		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
+		if (ret)
+			goto unwind;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
+	}
+
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 64 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
+
+	return 0;
+
+unwind:
+	dlb2_resource_free(hw);
+
+	return ret;
+}
+
+/**
+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
+ * @hw: dlb2_hw handle for a particular device.
+ * @ver: device version.
+ *
+ * Clearing the PMCSR must be done at initialization to make the device fully
+ * operational.
+ */
+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
+{
+	u32 pmcsr_dis;
+
+	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
+
+	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
+
+	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
+}
+
+/**
+ * dlb2_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb2_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
+ * invalid.
+ */
+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
+			      struct dlb2_get_num_resources_args *arg,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_bitmap *map;
+	int i;
+
+	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
+		return -EINVAL;
+
+	if (vdev_req)
+		rsrcs = &hw->vdev[vdev_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
+		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
+	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
+	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
+	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb2_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb2_bitmap_longest_set_range(map);
+
+	if (hw->ver == DLB2_HW_V2) {
+		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
+		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
+	} else {
+		arg->num_credits = rsrcs->num_avail_entries;
+	}
+	return 0;
+}
+
+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->num_ldb_credits,
+		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->num_dir_credits,
+		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
+}
+
+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	if (hw->ver == DLB2_HW_V2)
+		dlb2_configure_domain_credits_v2(hw, domain);
+	else
+		dlb2_configure_domain_credits_v2_5(hw, domain);
+}
+
+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
+			       struct dlb2_hw_domain *domain,
+			       u32 num_credits,
+			       struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_entries < num_credits) {
+		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_entries -= num_credits;
+	domain->num_credits += num_credits;
+	return 0;
+}
+
+static struct dlb2_ldb_port *
+dlb2_get_next_ldb_port(struct dlb2_hw *hw,
+		       struct dlb2_function_resources *rsrcs,
+		       u32 domain_id,
+		       u32 cos_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	RTE_SET_USED(iter);
+
+	/*
+	 * To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/*
+	 * Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
+				   typeof(*port));
+}
+
+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				   struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_ports,
+				   u32 cos_id,
+				   struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_ldb_port *port;
+
+		port = dlb2_get_next_ldb_port(hw, rsrcs,
+					      domain->id.phys_id, cos_id);
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
+			      &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
+			      &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
+
+	return 0;
+}
+
+
+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_create_sched_domain_args *args,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i, j;
+	int ret;
+
+	if (args->cos_strict) {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			u32 num = args->num_cos_ldb_ports[i];
+
+			/* Allocate ports from specific classes-of-service */
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      num,
+						      i,
+						      resp);
+			if (ret)
+				return ret;
+		}
+	} else {
+		unsigned int k;
+		u32 cos_id;
+
+		/*
+		 * Attempt to allocate from specific class-of-service, but
+		 * fallback to the other classes if that fails.
+		 */
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
+				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
+					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
+
+					ret = __dlb2_attach_ldb_ports(hw,
+								      rsrcs,
+								      domain,
+								      1,
+								      cos_id,
+								      resp);
+					if (ret == 0)
+						break;
+				}
+
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	/* Allocate num_ldb_ports from any class-of-service */
+	for (i = 0; i < args->num_ldb_ports; i++) {
+		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
+			ret = __dlb2_attach_ldb_ports(hw,
+						      rsrcs,
+						      domain,
+						      1,
+						      j,
+						      resp);
+			if (ret == 0)
+				break;
+		}
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
+				 struct dlb2_function_resources *rsrcs,
+				 struct dlb2_hw_domain *domain,
+				 u32 num_ports,
+				 struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb2_dir_pq_pair *port;
+
+		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					   typeof(*port));
+		if (port == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+}
+
+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_qed_entries < num_credits) {
+		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_qed_entries -= num_credits;
+	domain->num_ldb_credits += num_credits;
+	return 0;
+}
+
+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
+				   struct dlb2_hw_domain *domain,
+				   u32 num_credits,
+				   struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_dqed_entries < num_credits) {
+		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_dqed_entries -= num_credits;
+	domain->num_dir_credits += num_credits;
+	return 0;
+}
+
+
+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
+					struct dlb2_hw_domain *domain,
+					u32 num_atomic_inflights,
+					struct dlb2_cmd_response *resp)
+{
+	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
+	domain->num_avail_aqed_entries += num_atomic_inflights;
+	return 0;
+}
+
+static int
+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
+				     struct dlb2_hw_domain *domain,
+				     u32 num_hist_list_entries,
+				     struct dlb2_cmd_response *resp)
+{
+	struct dlb2_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb2_bitmap_find_set_bit_range(bitmap,
+						      num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+	return 0;
+
+error:
+	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
+				  struct dlb2_function_resources *rsrcs,
+				  struct dlb2_hw_domain *domain,
+				  u32 num_queues,
+				  struct dlb2_cmd_response *resp)
+{
+	unsigned int i;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb2_ldb_queue *queue;
+
+		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					    typeof(*queue));
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: domain validation failed\n",
+				    __func__);
+			return -EFAULT;
+		}
+
+		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+}
+
+static int
+dlb2_domain_attach_resources(struct dlb2_hw *hw,
+			     struct dlb2_function_resources *rsrcs,
+			     struct dlb2_hw_domain *domain,
+			     struct dlb2_create_sched_domain_args *args,
+			     struct dlb2_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb2_attach_ldb_queues(hw,
+				     rsrcs,
+				     domain,
+				     args->num_ldb_queues,
+				     resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_ldb_ports(hw,
+				    rsrcs,
+				    domain,
+				    args,
+				    resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_dir_ports(hw,
+				    rsrcs,
+				    domain,
+				    args->num_dir_ports,
+				    resp);
+	if (ret)
+		return ret;
+
+	if (hw->ver == DLB2_HW_V2) {
+		ret = dlb2_attach_ldb_credits(rsrcs,
+					      domain,
+					      args->num_ldb_credits,
+					      resp);
+		if (ret)
+			return ret;
+
+		ret = dlb2_attach_dir_credits(rsrcs,
+					      domain,
+					      args->num_dir_credits,
+					      resp);
+		if (ret)
+			return ret;
+	} else {  /* DLB 2.5 */
+		ret = dlb2_attach_credits(rsrcs,
+					  domain,
+					  args->num_credits,
+					  resp);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
+						   domain,
+						   args->num_hist_list_entries,
+						   resp);
+	if (ret)
+		return ret;
+
+	ret = dlb2_attach_atomic_inflights(rsrcs,
+					   domain,
+					   args->num_atomic_inflights,
+					   resp);
+	if (ret)
+		return ret;
+
+	dlb2_configure_domain_credits(hw, domain);
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
+				  struct dlb2_create_sched_domain_args *args,
+				  struct dlb2_cmd_response *resp,
+				  struct dlb2_hw *hw,
+				  struct dlb2_hw_domain **out_domain)
+{
+	u32 num_avail_ldb_ports, req_ldb_ports;
+	struct dlb2_bitmap *avail_hl_entries;
+	unsigned int max_contig_hl_range;
+	struct dlb2_hw_domain *domain;
+	int i;
+
+	avail_hl_entries = rsrcs->avail_hist_list_entries;
+
+	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
+
+	num_avail_ldb_ports = 0;
+	req_ldb_ports = 0;
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
+
+		req_ldb_ports += args->num_cos_ldb_ports[i];
+	}
+
+	req_ldb_ports += args->num_ldb_ports;
+
+	if (rsrcs->num_avail_domains < 1) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+	if (domain == NULL) {
+		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
+		return -EFAULT;
+	}
+
+	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (req_ldb_ports > num_avail_ldb_ports) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
+		if (args->num_cos_ldb_ports[i] >
+		    rsrcs->num_avail_ldb_ports[i]) {
+			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
+		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+		return -EINVAL;
+	}
+
+	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
+		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+	if (hw->ver == DLB2_HW_V2_5) {
+		if (rsrcs->num_avail_entries < args->num_credits) {
+			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	} else {
+		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
+			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
+			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (max_contig_hl_range < args->num_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
+				  struct dlb2_create_sched_domain_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
+		    args->num_ldb_queues);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
+		    args->num_ldb_ports);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
+		    args->num_cos_ldb_ports[0]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
+		    args->num_cos_ldb_ports[1]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
+		    args->num_cos_ldb_ports[2]);
+	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
+		    args->num_cos_ldb_ports[3]);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
+		    args->cos_strict);
+	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
+		    args->num_dir_ports);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
+		    args->num_atomic_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
+		    args->num_hist_list_entries);
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
+			    args->num_ldb_credits);
+		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
+			    args->num_dir_credits);
+	} else {
+		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
+			    args->num_credits);
+	}
+}
+
+/**
+ * dlb2_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credits) can be configured
+ * after creating a scheduling domain.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
+				struct dlb2_create_sched_domain_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
+	if (ret)
+		return ret;
+
+	dlb2_init_domain_rsrc_lists(domain);
+
+	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to verify args.\n",
+			    __func__);
+
+		return ret;
+	}
+
+	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
+
+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_dir_pq_pair *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used in order to
+	 * provide a CQ with depth less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
+	       port->init_tkn_cnt;
+}
+
+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
+			      struct dlb2_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb2_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a batch token return and
+		 * the rest as NOOPS
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/*
+		 * Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb2_dir_port_cq_disable(hw, port);
+
+		dlb2_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb2_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_dir_pq_pair *queue)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
+}
+
+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_dir_pq_pair *queue)
+{
+	return dlb2_dir_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb2_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to go empty, they would
+	 * have scheduled one or more QEs.
+	 */
+	dlb2_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
+				    struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	/*
+	 * Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and when the
+	 * removal completes the port will be enabled.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
+				     struct dlb2_ldb_port *port)
+{
+	u32 reg = 0;
+
+	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
+
+	dlb2_flush_csr(hw);
+}
+
+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
+				      struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
+}
+
+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port)
+{
+	u32 cnt;
+
+	cnt = DLB2_CSR_RD(hw,
+			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
+
+	/*
+	 * Account for the initial token count, which is used to support a
+	 * CQ depth of less than 8.
+	 */
+
+	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
+		port->init_tkn_cnt;
+}
+
+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
+	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb2_hcw hcw_mem[8], *hcw;
+		void __iomem *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
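+		/*
+		 * hcw_mem is oversized so that a 64B-aligned block of four
+		 * HCWs (one movdir64b write) always fits within it.
+		 */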
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/*
+		 * Program the first HCW for a completion and token return,
+		 * and the remaining HCWs as NOOPs.
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		dlb2_movdir64b(pp_addr, hcw);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			dlb2_movdir64b(pp_addr, hcw);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      bool toggle_port)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if (toggle_port)
+				dlb2_ldb_port_cq_disable(hw, port);
+
+			dlb2_drain_ldb_cq(hw, port);
+
+			if (toggle_port)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
+				struct dlb2_ldb_queue *queue)
+{
+	u32 aqed, ldb, atm;
+
+	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+						       queue->id.phys_id));
+	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						      queue->id.phys_id));
+	atm = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
+
+	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
+	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
+	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
+}
+
+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
+				    struct dlb2_ldb_queue *queue)
+{
+	return dlb2_ldb_queue_depth(hw, queue) == 0;
+}
+
+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb2_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
+					   struct dlb2_hw_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to unmap domain queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+		if (dlb2_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: failed to empty queues\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * Drain the CQs one more time. For the queues to have gone empty,
+	 * they must have scheduled one or more QEs to the CQs.
+	 */
+	dlb2_domain_drain_ldb_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = true;
+
+			dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
+			   u32 id,
+			   bool vdev_req,
+			   unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
+
+	if (!vdev_req)
+		return &hw->rsrcs.ldb_queues[id];
+
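+	/* For vdev requests, look up the queue by its virtual ID */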
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
+			if (queue->id.virt_id == id)
+				return queue;
+		}
+	}
+
+	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
+		if (queue->id.virt_id == id)
+			return queue;
+	}
+
+	return NULL;
+}
+
+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
+						      u32 id,
+						      bool vdev_req,
+						      unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iteration;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_hw_domain *domain;
+	RTE_SET_USED(iteration);
+
+	if (id >= DLB2_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vdev_req)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vdev[vdev_id];
+
+	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
+		if (domain->id.virt_id == id)
+			return domain;
+	}
+
+	return NULL;
+}
+
+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot,
+					   enum dlb2_qid_map_state new_state)
+{
+	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb2_hw_domain *domain;
+	int domain_id;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, domain_id);
+		return -EINVAL;
+	}
+
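+	/*
+	 * Update the queue, port, and domain bookkeeping counts for this
+	 * slot's state transition; invalid transitions jump to the error
+	 * label below.
+	 */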
+	switch (curr_state) {
+	case DLB2_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB2_QUEUE_MAP_IN_PROG:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_MAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB2_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
+		switch (new_state) {
+		case DLB2_QUEUE_UNMAP_IN_PROG:
+			/* Nothing to update */
+			break;
+		case DLB2_QUEUE_UNMAPPED:
+			/*
+			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROG.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB2_HW_DBG(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return 0;
+
+error:
+	DLB2_HW_ERR(hw,
+		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id,
+		    curr_state, new_state);
+	return -EFAULT;
+}
+
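+/*
+ * Find the first slot in the port's QID map that is in the given state.
+ * Writes the slot index to *slot and returns true if such a slot exists.
+ */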
+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
+				enum dlb2_qid_map_state state,
+				int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
+				      enum dlb2_qid_map_state state,
+				      struct dlb2_ldb_queue *queue,
+				      int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+/*
+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
+ * their function names imply, and should only be called by the dynamic CQ
+ * mapping code.
+ */
+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
+					      struct dlb2_hw_domain *domain,
+					      struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain,
+					     struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int slot, i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
+
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+		}
+	}
+}
+
+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
+						struct dlb2_ldb_port *port,
+						int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
+					struct dlb2_ldb_port *p,
+					struct dlb2_ldb_queue *q,
+					u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 cq2qid;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
+	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
+	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
+
+	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
+		    & DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
+
+	/* Read-modify-write the QID map register */
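+	/* CQ2QID0 holds the QIDs for slots 0-3, CQ2QID1 for slots 4-7 */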
+	if (i < 4)
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
+							  p->id.phys_id));
+	else
+		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
+							  p->id.phys_id));
+
+	if (i == 0 || i == 4)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
+	if (i == 1 || i == 5)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
+	if (i == 2 || i == 6)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
+	if (i == 3 || i == 7)
+		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
+
+	if (i < 4)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
+
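+	/*
+	 * Each QID2CQIDIX{,2} and ATM_QID2CQIDIX register covers four CQs;
+	 * the CQ ID mod 4 selects this CQ's slot bits within the register.
+	 */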
+	atm_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
+						p->id.phys_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
+						  p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		DLB2_BIT_SET(atm_qid2cq,
+			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq,
+			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		DLB2_BIT_SET(lsp_qid2cq2,
+			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
+		    atm_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX(hw->ver,
+					q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID2CQIDIX2(hw->ver,
+					 q->id.phys_id, p->id.phys_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB2_QUEUE_MAPPED;
+
+	return dlb2_port_slot_state_transition(hw, p, q, i, state);
+}
+
+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
+					   struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int slot)
+{
+	u32 ctrl = 0;
+	u32 active;
+	u32 enq;
+
+	/* Set the atomic scheduling haswork bit */
+	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
+							 queue->id.phys_id));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(active,
+				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	/* Set the non-atomic scheduling haswork bit */
+	enq = DLB2_CSR_RD(hw,
+			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
+						       queue->id.phys_id));
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
+	DLB2_BITS_SET(ctrl,
+		      DLB2_BITS_GET(enq,
+				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
+		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      u8 slot)
+{
+	u32 ctrl = 0;
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	memset(&ctrl, 0, sizeof(ctrl));
+
+	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
+	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
+	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
+
+	dlb2_flush_csr(hw);
+}
+
+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
+					      struct dlb2_ldb_queue *queue)
+{
+	u32 infl_lim = 0;
+
+	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
+		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    infl_lim);
+}
+
+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
+						struct dlb2_ldb_queue *queue)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
+		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
+						struct dlb2_hw_domain *domain,
+						struct dlb2_ldb_port *port,
+						struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_list_entry *iter;
+	enum dlb2_qid_map_state state;
+	int slot, ret, i;
+	u32 infl_cnt;
+	u8 prio;
+	RTE_SET_USED(iter);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: non-zero QID inflight count\n",
+			    __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Statically map the queue to the port and set the corresponding
+	 * has_work bits.
+	 */
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	prio = port->qid_map[slot].priority;
+
+	/*
+	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules from increasing the queue's inflight
+	 * count.
+	 */
+	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			state = DLB2_QUEUE_MAPPED;
+			if (!dlb2_port_find_slot_queue(port, state,
+						       queue, &slot))
+				continue;
+
+			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+		}
+	}
+
+	dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb2_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
+					 struct dlb2_ldb_port *port,
+					 struct dlb2_ldb_queue *queue,
+					 u8 priority)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	int domain_id, slot, ret;
+	u32 infl_cnt;
+
+	domain_id = port->domain_id.phys_id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
+	if (domain == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: unable to find domain %d\n",
+			    __func__, port->domain_id.phys_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), 0);
+
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
+		DLB2_HW_ERR(hw,
+			    "Internal error: No available unmapped slots\n");
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/*
+	 * Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	infl_cnt = DLB2_CSR_RD(hw,
+			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
+						    queue->id.phys_id));
+
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+		if (port->enabled)
+			dlb2_ldb_port_cq_enable(hw, port);
+
+		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/*
+		 * The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain,
+					struct dlb2_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		u32 infl_cnt;
+		struct dlb2_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (queue == NULL) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: unable to find queue %d\n",
+				    __func__, qid);
+			continue;
+		}
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
+			continue;
+
+		/*
+		 * Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb2_ldb_port_cq_disable(hw, port);
+
+		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		infl_cnt = DLB2_CSR_RD(hw,
+				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
+
+		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
+			if (port->enabled)
+				dlb2_ldb_port_cq_enable(hw, port);
+
+			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_map_port(hw, domain, port);
+	}
+
+	return domain->num_pending_additions;
+}
+
+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
+				   struct dlb2_ldb_port *port,
+				   struct dlb2_ldb_queue *queue)
+{
+	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
+	u32 lsp_qid2cq2;
+	u32 lsp_qid2cq;
+	u32 atm_qid2cq;
+	u32 cq2priov;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB2_QUEUE_MAPPED;
+	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
+	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: QID %d isn't mapped\n",
+			    __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
+
+	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
+
+	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
+							 port_id / 4));
+
+	lsp_qid2cq = DLB2_CSR_RD(hw,
+				 DLB2_LSP_QID2CQIDIX(hw->ver,
+						queue_id, port_id / 4));
+
+	lsp_qid2cq2 = DLB2_CSR_RD(hw,
+				  DLB2_LSP_QID2CQIDIX2(hw->ver,
+						  queue_id, port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
+		break;
+
+	case 1:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
+		break;
+
+	case 2:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
+		break;
+
+	case 3:
+		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
+		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq);
+
+	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
+		    lsp_qid2cq2);
+
+	dlb2_flush_csr(hw);
+
+	unmapped = DLB2_QUEUE_UNMAPPED;
+
+	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
+				 struct dlb2_hw_domain *domain,
+				 struct dlb2_ldb_port *port,
+				 struct dlb2_ldb_queue *queue,
+				 u8 prio)
+{
+	if (domain->started)
+		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static void
+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   int slot)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb2_ldb_port_unmap_qid(hw, port, queue);
+
+	/*
+	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits
+	 */
+	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it was not manually disabled by the user */
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	/*
+	 * If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
+		struct dlb2_ldb_port_qid_map *map;
+		struct dlb2_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain,
+					  struct dlb2_ldb_port *port)
+{
+	u32 infl_cnt;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/*
+	 * The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
+						       port->id.phys_id));
+	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
+		return false;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
+		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
+			continue;
+
+		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			dlb2_domain_finish_unmap_port(hw, domain, port);
+	}
+
+	return domain->num_pending_removals;
+}
+
+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			port->enabled = false;
+
+			dlb2_ldb_port_cq_disable(hw, port);
+		}
+	}
+}
+
+static void dlb2_log_reset_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 vpp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+		u32 virt_id;
+
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
+	}
+}
+
+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
+					 struct dlb2_hw_domain *domain,
+					 unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 vpp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			unsigned int offs;
+			u32 virt_id;
+
+			if (hw->virt_mode == DLB2_VIRT_SRIOV)
+				virt_id = port->id.virt_id;
+			else
+				virt_id = port->id.phys_id;
+
+			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
+						       port->id.phys_id),
+				    int_en);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
+						      port->id.phys_id),
+				    wd_en);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 int_en = 0;
+	u32 wd_en = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+			    int_en);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
+			    wd_en);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
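+	/*
+	 * Clearing the VAS/QID valid bit revokes the domain's permission to
+	 * enqueue to the queue.
+	 */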
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
+				    0);
+
+			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
+				queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void
+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
+					  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	unsigned long max_ports;
+	int domain_offset;
+	RTE_SET_USED(iter);
+
+	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
+
+	domain_offset = domain->id.phys_id * max_ports;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
+
+		if (queue->id.vdev_owned) {
+			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
+
+			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
+		}
+	}
+}
+
+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
+					       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 chk_en = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
+							 port->id.phys_id),
+				    chk_en);
+		}
+	}
+}
+
+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			int j;
+
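+			/* Poll until the CQ's inflights are complete */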
+			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
+				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
+					break;
+			}
+
+			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+					    __func__, port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb2_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void
+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	u32 pp_v = 0;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+			    pp_v);
+	}
+}
+
+static void
+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	u32 pp_v = 0;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			DLB2_CSR_WR(hw,
+				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+				    pp_v);
+		}
+	}
+}
+
+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_ldb_queue *queue;
+	int i;
+	RTE_SET_USED(iter);
+
+	/*
+	 * Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty ldb queue %d\n",
+				    __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
+			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
+			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
+				DLB2_HW_ERR(hw,
+					    "[%s()] Internal error: failed to empty ldb port %d\n",
+					    __func__, ldb_port->id.phys_id);
+				return -EFAULT;
+			}
+		}
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir queue %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb2_dir_cq_token_count(hw, dir_port)) {
+			DLB2_HW_ERR(hw,
+				    "[%s()] Internal error: failed to empty dir port %d\n",
+				    __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						   struct dlb2_ldb_port *port)
+{
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_LDB_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP2PP(offs),
+			    DLB2_SYS_VF_LDB_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_LDB_VPP_V(offs),
+			    DLB2_SYS_VF_LDB_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
+		    DLB2_SYS_LDB_PP_V_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_DSBL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_DEPTH_RST);
+
+	if (hw->ver != DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
+			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_LIM_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_BASE_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_LDB_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_LDB_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID0_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2QID1_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ2PRIOV_RST);
+}
+
+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
+			__dlb2_domain_reset_ldb_port_registers(hw, port);
+	}
+}
+
+static void
+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+				       struct dlb2_dir_pq_pair *port)
+{
+	u32 reg = 0;
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_DSBL_RST);
+
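+	/*
+	 * V2 clears the CQ state via the global DIR_CQ_OPT_CLR register;
+	 * V2.5 uses the per-CQ WB_DIR_CQ_STATE register.
+	 */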
+	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
+	else
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_DEPTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ISR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_AT_RST);
+
+	if (hw->ver == DLB2_HW_V2)
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
+			    DLB2_SYS_DIR_CQ_AT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_PASID_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ_FMT_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
+		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
+		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ2VAS_RST);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
+		    DLB2_SYS_DIR_PP2VDEV_RST);
+
+	if (port->id.vdev_owned) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			virt_id;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP2PP(offs),
+			    DLB2_SYS_VF_DIR_VPP2PP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_VF_DIR_VPP_V(offs),
+			    DLB2_SYS_VF_DIR_VPP_V_RST);
+	}
+
+	DLB2_CSR_WR(hw,
+		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
+		    DLB2_SYS_DIR_PP_V_RST);
+}
+
+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
+						 struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb2_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		unsigned int queue_id = queue->id.phys_id;
+		int i;
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
+			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
+			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_ITS(queue_id),
+			    DLB2_SYS_LDB_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
+			    DLB2_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_V(queue_id),
+			    DLB2_SYS_LDB_QID_V_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
+			    DLB2_SYS_LDB_QID_CFG_V_RST);
+
+		if (queue->sn_cfg_valid) {
+			u32 offs[2];
+
+			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
+							 queue->sn_slot);
+
+			DLB2_CSR_WR(hw,
+				    offs[queue->sn_group],
+				    DLB2_RO_GRP_0_SLT_SHFT_RST);
+		}
+
+		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
+				    DLB2_LSP_QID2CQIDIX2_00_RST);
+
+			DLB2_CSR_WR(hw,
+				    DLB2_ATM_QID2CQIDIX(queue_id, i),
+				    DLB2_ATM_QID2CQIDIX_00_RST);
+		}
+	}
+}
+
+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
+						  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *queue;
+	RTE_SET_USED(iter);
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
+						       queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
+							  queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
+							 queue->id.phys_id),
+			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_ITS_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
+			    DLB2_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
+					struct dlb2_hw_domain *domain)
+{
+	dlb2_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb2_domain_reset_dir_port_registers(hw, domain);
+
+	dlb2_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb2_domain_reset_dir_queue_registers(hw, domain);
+
+	if (hw->ver == DLB2_HW_V2) {
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
+
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
+	} else
+		DLB2_CSR_WR(hw,
+			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
+			    DLB2_CHP_CFG_VAS_CRD_RST);
+}
+
+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_dir_pq_pair *tmp_dir_port;
+	struct dlb2_ldb_queue *tmp_ldb_queue;
+	struct dlb2_ldb_port *tmp_ldb_port;
+	struct dlb2_list_entry *iter1;
+	struct dlb2_list_entry *iter2;
+	struct dlb2_function_resources *rsrcs;
+	struct dlb2_dir_pq_pair *dir_port;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_ldb_port *ldb_port;
+	struct dlb2_list_head *list;
+	int ret, i;
+	RTE_SET_USED(tmp_dir_port);
+	RTE_SET_USED(tmp_ldb_queue);
+	RTE_SET_USED(tmp_ldb_port);
+	RTE_SET_USED(iter1);
+	RTE_SET_USED(iter2);
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb2_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb2_list_del(&domain->used_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb2_list_del(&domain->avail_ldb_queues,
+			      &ldb_queue->domain_list);
+		dlb2_list_add(&rsrcs->avail_ldb_queues,
+			      &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		list = &domain->used_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			int j;
+
+			ldb_port->owned = false;
+			ldb_port->configured = false;
+			ldb_port->num_pending_removals = 0;
+			ldb_port->num_mappings = 0;
+			ldb_port->init_tkn_cnt = 0;
+			ldb_port->cq_depth = 0;
+			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
+				ldb_port->qid_map[j].state =
+					DLB2_QUEUE_UNMAPPED;
+
+			dlb2_list_del(&domain->used_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+
+		list = &domain->avail_ldb_ports[i];
+		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
+				       iter1, iter2) {
+			ldb_port->owned = false;
+
+			dlb2_list_del(&domain->avail_ldb_ports[i],
+				      &ldb_port->domain_list);
+			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
+				      &ldb_port->func_list);
+			rsrcs->num_avail_ldb_ports[i]++;
+		}
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+		dir_port->init_tkn_cnt = 0;
+
+		dlb2_list_del(&domain->used_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &dir_port->domain_list);
+
+		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
+			      &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				    domain->hist_list_entry_base,
+				    domain->total_hist_list_entries);
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
+			    __func__);
+		return ret;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
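+	/*
+	 * Return the domain's credits to the function. DLB v2.5 uses a single
+	 * combined credit pool; DLB v2.0 uses separate load-balanced and
+	 * directed pools.
+	 */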
+	if (hw->ver == DLB2_HW_V2_5) {
+		rsrcs->num_avail_entries += domain->num_credits;
+		domain->num_credits = 0;
+	} else {
+		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
+		domain->num_ldb_credits = 0;
+
+		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
+		domain->num_dir_credits = 0;
+	}
+	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
+	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
+	domain->num_avail_aqed_entries = 0;
+	domain->num_used_aqed_entries = 0;
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/*
+	 * Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
+					    struct dlb2_hw_domain *domain,
+					    struct dlb2_ldb_queue *queue)
+{
+	struct dlb2_ldb_port *port = NULL;
+	int ret, i;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
+					  typeof(*port));
+		if (port)
+			break;
+	}
+
+	if (port == NULL) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: No configured LDB ports\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb2_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
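+	/* Map the queue to a CQ so its QEs can be drained via that CQ */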
+	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb2_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
+					     struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+	RTE_SET_USED(iter);
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	/*
+	 * Pre-condition: the unattached queue must not have any outstanding
+	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
+	 * prior to this in dlb2_domain_drain_mapped_queues().
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
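+		/* Skip queues that are mapped or have nothing to drain */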
+		if (queue->num_mappings != 0 ||
+		    dlb2_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * dlb2_reset_domain() - reset a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function resets and frees a DLB scheduling domain and its associated
+ * resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the pre-condition is not
+ *	    met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb2_reset_domain(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (domain == NULL || !domain->configured)
+		return -EINVAL;
+
+	/* Disable VPPs */
+	if (vdev_req) {
+		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
+
+		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
+	}
+
+	/* Disable CQ interrupts */
+	dlb2_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
+
+	/*
+	 * For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Turn off completion tracking on all the domain's PPs. */
+	dlb2_domain_disable_ldb_seq_checks(hw, domain);
+
+	/*
+	 * Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_ldb_cqs(hw, domain, false);
+
+	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb2_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb2_domain_drain_mapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb2_domain_disable_ldb_cqs(hw, domain);
+
+	dlb2_domain_drain_dir_queues(hw, domain);
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb2_domain_disable_dir_cqs(hw, domain);
+
+	/* Disable PPs */
+	dlb2_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb2_domain_disable_ldb_producer_ports(hw, domain);
+
+	ret = dlb2_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	/* Reset the QID and port state. */
+	dlb2_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	return dlb2_domain_reset_software_state(hw, domain);
+}
+
+static void
+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_ldb_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+static int
+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
+				  struct dlb2_ldb_queue *queue,
+				  struct dlb2_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
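+	/*
+	 * Find a sequence number group configured for this queue's number of
+	 * sequence numbers per queue that still has a free slot.
+	 */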
+	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb2_sn_group_full(group)) {
+			slot = dlb2_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: no sequence number slots available\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_ldb_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int i;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+	if (!queue) {
+		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb2_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
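+	/* QID inflight allocations are capped at 4096 */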
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -EINVAL;
+	}
+
+	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
+		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	if (args->num_atomic_inflights &&
+	    args->lock_id_comp_level != 0 &&
+	    args->lock_id_comp_level != 64 &&
+	    args->lock_id_comp_level != 128 &&
+	    args->lock_id_comp_level != 256 &&
+	    args->lock_id_comp_level != 512 &&
+	    args->lock_id_comp_level != 1024 &&
+	    args->lock_id_comp_level != 2048 &&
+	    args->lock_id_comp_level != 4096 &&
+	    args->lock_id_comp_level != 65536) {
+		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+
+	return 0;
+}
+
+static int
+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
+				struct dlb2_hw_domain *domain,
+				struct dlb2_ldb_queue *queue,
+				struct dlb2_create_ldb_queue_args *args)
+{
+	int ret;
+	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_limit = args->num_atomic_inflights;
+
+	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
+	domain->num_used_aqed_entries += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_ldb_queue *queue,
+				     struct dlb2_create_ldb_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	struct dlb2_sn_group *sn_group;
+	unsigned int offs;
+	u32 reg = 0;
+	u32 alimit;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
+
+	/*
+	 * Unordered QIDs get 4K inflights; ordered QIDs get as many as their
+	 * number of sequence numbers.
+	 */
+	DLB2_BITS_SET(reg, args->num_qid_inflights,
+		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
+						  queue->id.phys_id), reg);
+
+	alimit = queue->aqed_limit;
+
+	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
+		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
+
+	reg = 0;
+	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
+						 queue->id.phys_id), reg);
+
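+	/*
+	 * The lock ID compression code is log2(level) - 5 (64 -> 1, ...,
+	 * 4096 -> 7); 0 and 65536 select the default, uncompressed width.
+	 */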
+	reg = 0;
+	switch (args->lock_id_comp_level) {
+	case 64:
+		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 128:
+		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 256:
+		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 512:
+		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 1024:
+		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 2048:
+		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	case 4096:
+		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
+		break;
+	default:
+		/* No compression by default */
+		break;
+	}
+
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
+
+	reg = 0;
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
+
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
+						 queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue does not use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
+
+	/* Configure SNs */
+	reg = 0;
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
+	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
+	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
+
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
+	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
+		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_LDB_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.virt_id,
+			      DLB2_SYS_LDB_QID2VQID_VQID);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
+}
+
+/**
+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_ldb_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	int ret;
+
+	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
+
+	if (ret) {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			    __func__, __LINE__);
+		return ret;
+	}
+
+	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_ldb_port *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_ldb_port *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_ldb_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 hl_base = 0;
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg,
+		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
+
+	port->cq_depth = args->cq_depth;
+
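+	/*
+	 * Map the CQ depth to the hardware token depth select encoding:
+	 * depths of 8 or fewer use code 1, and each doubling beyond 8
+	 * increments the code (16 -> 2, ..., 1024 -> 8).
+	 */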
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg,
+			      port->init_tkn_cnt,
+			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_LDB_CQ_WPTR_RST);
+
+	reg = 0;
+	DLB2_BITS_SET(reg,
+		      port->hist_list_entry_limit - 1,
+		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
+
+	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
+		      DLB2_CHP_HIST_LIST_BASE_BASE);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
+		    hl_base);
+
+	/*
+	 * The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, args->cq_history_list_size,
+		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
+		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
+	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		reg = 0;
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_LDB_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	/* Disable the port's QID mappings */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
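+/* A valid CQ depth is a power of two between 1 and 1024, inclusive */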
+static bool
+dlb2_cq_depth_is_valid(u32 depth)
+{
+	if (depth != 1 && depth != 2 &&
+	    depth != 4 && depth != 8 &&
+	    depth != 16 && depth != 32 &&
+	    depth != 64 && depth != 128 &&
+	    depth != 256 && depth != 512 &&
+	    depth != 1024)
+		return false;
+
+	return true;
+}
+
+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_ldb_port *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_ldb_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret, i;
+
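+	/* Carve the port's history list entries out of the domain's pool */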
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	ret = dlb2_ldb_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+	if (ret)
+		return ret;
+
+	dlb2_ldb_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_ldb_port_cq_enable(hw, port);
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	port->configured = true;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_ldb_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
+	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
+		    args->cos_strict);
+}
+
+static int
+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_ldb_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_ldb_port **out_port,
+				 int *out_cos_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int i, id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
+		resp->status = DLB2_ST_INVALID_COS_ID;
+		return -EINVAL;
+	}
+
+	if (args->cos_strict) {
+		id = args->cos_id;
+		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+					  typeof(*port));
+	} else {
+		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
+
+			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
+						  typeof(*port));
+			if (port)
+				break;
+		}
+	}
+
+	if (!port) {
+		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
+		return -EINVAL;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = port;
+	*out_cos_id = id;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_ldb_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+	int ret, cos_id;
+
+	dlb2_log_create_ldb_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_ldb_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port,
+					       &cos_id);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_ldb_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
+
+	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
+
+static void
+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
+			      u32 domain_id,
+			      uintptr_t cq_dma_base,
+			      struct dlb2_create_dir_port_args *args,
+			      bool vdev_req,
+			      unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
+		    cq_dma_base);
+}
+
+static struct dlb2_dir_pq_pair *
+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
+			    u32 id,
+			    bool vdev_req,
+			    struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *port;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		if ((!vdev_req && port->id.phys_id == id) ||
+		    (vdev_req && port->id.virt_id == id))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int
+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
+				 u32 domain_id,
+				 uintptr_t cq_dma_base,
+				 struct dlb2_create_dir_port_args *args,
+				 struct dlb2_cmd_response *resp,
+				 bool vdev_req,
+				 unsigned int vdev_id,
+				 struct dlb2_hw_domain **out_domain,
+				 struct dlb2_dir_pq_pair **out_port)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	if (args->queue_id != -1) {
+		/*
+		 * If the user claims the queue is already configured, validate
+		 * the queue ID, its domain, and whether the queue is
+		 * configured.
+		 */
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->queue_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->queue_configured) {
+			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the port's queue is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
+		return -EINVAL;
+	}
+
+	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
+		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_port = pq;
+
+	return 0;
+}
+
+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
+				       struct dlb2_hw_domain *domain,
+				       struct dlb2_dir_pq_pair *port,
+				       bool vdev_req,
+				       unsigned int vdev_id)
+{
+	u32 reg = 0;
+
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
+
+	if (vdev_req) {
+		unsigned int offs;
+		u32 virt_id;
+
+		/*
+		 * DLB uses producer port address bits 17:12 to determine the
+		 * producer port ID. In Scalable IOV mode, PP accesses come
+		 * through the PF MMIO window for the physical producer port,
+		 * so for translation purposes the virtual and physical port
+		 * IDs are equal.
+		 */
+		if (hw->virt_mode == DLB2_VIRT_SRIOV)
+			virt_id = port->id.virt_id;
+		else
+			virt_id = port->id.phys_id;
+
+		reg = 0;
+		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
+		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
+}
+
+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
+				      struct dlb2_hw_domain *domain,
+				      struct dlb2_dir_pq_pair *port,
+				      uintptr_t cq_dma_base,
+				      struct dlb2_create_dir_port_args *args,
+				      bool vdev_req,
+				      unsigned int vdev_id)
+{
+	u32 reg = 0;
+	u32 ds = 0;
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
+
+	reg = cq_dma_base >> 32;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
+
+	/*
+	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
+	 * cache lines out-of-order (but QEs within a cache line are always
+	 * updated in-order).
+	 */
+	reg = 0;
+	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
+	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
+		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
+
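+	/*
+	 * Token depth select encoding: depths of 8 or fewer use code 1, and
+	 * each doubling beyond 8 increments the code (16 -> 2, ..., 1024 -> 8).
+	 */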
+	if (args->cq_depth <= 8) {
+		ds = 1;
+	} else if (args->cq_depth == 16) {
+		ds = 2;
+	} else if (args->cq_depth == 32) {
+		ds = 3;
+	} else if (args->cq_depth == 64) {
+		ds = 4;
+	} else if (args->cq_depth == 128) {
+		ds = 5;
+	} else if (args->cq_depth == 256) {
+		ds = 6;
+	} else if (args->cq_depth == 512) {
+		ds = 7;
+	} else if (args->cq_depth == 1024) {
+		ds = 8;
+	} else {
+		DLB2_HW_ERR(hw,
+			    "[%s():%d] Internal error: invalid CQ depth\n",
+			    __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
+		    reg);
+
+	/*
+	 * To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		reg = 0;
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		DLB2_BITS_SET(reg, port->init_tkn_cnt,
+			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    reg);
+	} else {
+		DLB2_CSR_WR(hw,
+			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
+			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
+	}
+
+	reg = 0;
+	DLB2_BITS_SET(reg, ds,
+		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
+						      port->id.phys_id),
+		    reg);
+
+	/* Reset the CQ write pointer */
+	DLB2_CSR_WR(hw,
+		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
+		    DLB2_CHP_DIR_CQ_WPTR_RST);
+
+	/* Virtualize the PPID */
+	reg = 0;
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
+
+	/*
+	 * Address translation (AT) settings: 0: untranslated, 2: translated
+	 * (see ATS spec regarding Address Type field for more details)
+	 */
+	if (hw->ver == DLB2_HW_V2) {
+		reg = 0;
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
+	}
+
+	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
+		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
+			      DLB2_SYS_DIR_CQ_PASID_PASID);
+		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
+	}
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
+	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
+
+	return 0;
+}
+
+static int dlb2_configure_dir_port(struct dlb2_hw *hw,
+				   struct dlb2_hw_domain *domain,
+				   struct dlb2_dir_pq_pair *port,
+				   uintptr_t cq_dma_base,
+				   struct dlb2_create_dir_port_args *args,
+				   bool vdev_req,
+				   unsigned int vdev_id)
+{
+	int ret;
+
+	ret = dlb2_dir_port_configure_cq(hw,
+					 domain,
+					 port,
+					 cq_dma_base,
+					 args,
+					 vdev_req,
+					 vdev_id);
+
+	if (ret)
+		return ret;
+
+	dlb2_dir_port_configure_pp(hw,
+				   domain,
+				   port,
+				   vdev_req,
+				   vdev_id);
+
+	dlb2_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_port() - create a directed port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed port.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pointer address is not properly aligned, the domain is not
+ *	    configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
+			    u32 domain_id,
+			    struct dlb2_create_dir_port_args *args,
+			    uintptr_t cq_dma_base,
+			    struct dlb2_cmd_response *resp,
+			    bool vdev_req,
+			    unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *port;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_port_args(hw,
+				      domain_id,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_port_args(hw,
+					       domain_id,
+					       cq_dma_base,
+					       args,
+					       resp,
+					       vdev_req,
+					       vdev_id,
+					       &domain,
+					       &port);
+	if (ret)
+		return ret;
+
+	ret = dlb2_configure_dir_port(hw,
+				      domain,
+				      port,
+				      cq_dma_base,
+				      args,
+				      vdev_req,
+				      vdev_id);
+	if (ret)
+		return ret;
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
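+/*
+ * Returns true and sets *slot if the port has a slot whose unmap is in
+ * progress with a pending map to the given queue.
+ */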
+static bool
+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
+					   struct dlb2_ldb_queue *queue,
+					   int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
+					      struct dlb2_ldb_queue *queue,
+					      struct dlb2_cmd_response *resp)
+{
+	enum dlb2_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/*
+	 * If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/*
+	 * If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB2_QUEUE_UNMAPPED;
+	if (dlb2_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static struct dlb2_ldb_queue *
+dlb2_get_domain_ldb_queue(u32 id,
+			  bool vdev_req,
+			  struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_queue *queue;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if ((!vdev_req && queue->id.phys_id == id) ||
+		    (vdev_req && queue->id.virt_id == id))
+			return queue;
+	}
+
+	return NULL;
+}
+
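+/* Search both the domain's used and avail port lists for the given port ID */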
+static struct dlb2_ldb_port *
+dlb2_get_domain_used_ldb_port(u32 id,
+			      bool vdev_req,
+			      struct dlb2_hw_domain *domain)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_ldb_port *port;
+	int i;
+	RTE_SET_USED(iter);
+
+	if (id >= DLB2_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
+		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+
+		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
+			if ((!vdev_req && port->id.phys_id == id) ||
+			    (vdev_req && port->id.virt_id == id))
+				return port;
+		}
+	}
+
+	return NULL;
+}
+
+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
+					      struct dlb2_ldb_port *port,
+					      int slot,
+					      struct dlb2_map_qid_args *args)
+{
+	u32 cq2priov;
+
+	/* Read-modify-write the priority and valid bit register */
+	cq2priov = DLB2_CSR_RD(hw,
+			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
+
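+	/* Set the slot's valid bit and its 3-bit priority field */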
+	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
+		    DLB2_LSP_CQ2PRIOV_V;
+	cq2priov |= ((args->priority & 0x7) << slot * 3) &
+		    DLB2_LSP_CQ2PRIOV_PRIO;
+
+	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
+
+	dlb2_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
+				    u32 domain_id,
+				    struct dlb2_map_qid_args *args,
+				    struct dlb2_cmd_response *resp,
+				    bool vdev_req,
+				    unsigned int vdev_id,
+				    struct dlb2_hw_domain **out_domain,
+				    struct dlb2_ldb_port **out_port,
+				    struct dlb2_ldb_queue **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (args->priority >= DLB2_QID_PRIORITIES) {
+		resp->status = DLB2_ST_INVALID_PRIORITY;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+	*out_queue = queue;
+	*out_port = port;
+
+	return 0;
+}
+
+static void dlb2_log_map_qid(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_map_qid_args *args,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
+
+/**
+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue
+ * to the specified port. Each load-balanced port can be mapped to up to 8
+ * queues; each load-balanced queue can potentially map to all the
+ * load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_map_qid(struct dlb2_hw *hw,
+		    u32 domain_id,
+		    struct dlb2_map_qid_args *args,
+		    struct dlb2_cmd_response *resp,
+		    bool vdev_req,
+		    unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	int ret, i;
+	u8 prio;
+
+	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_map_qid_args(hw,
+				       domain_id,
+				       args,
+				       resp,
+				       vdev_req,
+				       vdev_id,
+				       &domain,
+				       &port,
+				       &queue);
+	if (ret)
+		return ret;
+
+	prio = args->priority;
+
+	/*
+	 * If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb2_ldb_port_cq_disable(hw, port);
+
+	/*
+	 * If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure
+	 */
+	st = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		if (prio != port->qid_map[i].priority) {
+			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
+		}
+
+		st = DLB2_QUEUE_MAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		port->qid_map[i].priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If this is a priority change on a pending mapping, update the
+	 * pending priority
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		port->qid_map[i].pending_priority = prio;
+
+		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/*
+	 * If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
+		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
+			enum dlb2_qid_map_state new_st;
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
+
+			ret = dlb2_port_slot_state_transition(hw, port, queue,
+							      i, new_st);
+			if (ret)
+				return ret;
+
+			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/*
+	 * If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb2_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
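
A successful dlb2_hw_map_qid() call only records the request; the CQ update itself may be finished later from the pending-operations list described above. A minimal caller-side sketch, assuming the dlb2_map_qid_args layout from dlb2_user.h (port_id, qid, priority) and a PF-originated request:

/* Request that load-balanced queue 'qid' be scheduled to port 'port_id'. */
static int example_map_queue(struct dlb2_hw *hw, u32 domain_id,
			     u32 port_id, u32 qid, u32 priority)
{
	struct dlb2_map_qid_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.port_id = port_id;
	args.qid = qid;
	args.priority = priority;

	/* PF request: vdev_req is false and vdev_id is ignored. */
	ret = dlb2_hw_map_qid(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		return ret; /* resp.status holds a dlb2_error code on -EINVAL */

	/*
	 * ret == 0 means the request was accepted. If it could not be applied
	 * immediately, it sits on the port's pending list and is retired later
	 * by dlb2_finish_map_qid_procedures().
	 */
	return 0;
}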
+
+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_unmap_qid_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
+		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
+				      u32 domain_id,
+				      struct dlb2_unmap_qid_args *args,
+				      struct dlb2_cmd_response *resp,
+				      bool vdev_req,
+				      unsigned int vdev_id,
+				      struct dlb2_hw_domain **out_domain,
+				      struct dlb2_ldb_port **out_port,
+				      struct dlb2_ldb_queue **out_queue)
+{
+	enum dlb2_qid_map_state state;
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	struct dlb2_ldb_port *port;
+	int slot;
+	int id;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	id = args->port_id;
+
+	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
+
+	if (!queue || !queue->configured) {
+		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			    __func__, args->qid);
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	/*
+	 * Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB2_QUEUE_MAPPED;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	state = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
+		goto done;
+
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		goto done;
+
+	resp->status = DLB2_ST_INVALID_QID;
+	return -EINVAL;
+
+done:
+	*out_domain = domain;
+	*out_port = port;
+	*out_queue = queue;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb2_hw_map_qid() for more details.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
+		      u32 domain_id,
+		      struct dlb2_unmap_qid_args *args,
+		      struct dlb2_cmd_response *resp,
+		      bool vdev_req,
+		      unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+	enum dlb2_qid_map_state st;
+	struct dlb2_ldb_port *port;
+	bool unmap_complete;
+	int i, ret;
+
+	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_unmap_qid_args(hw,
+					 domain_id,
+					 args,
+					 resp,
+					 vdev_req,
+					 vdev_id,
+					 &domain,
+					 &port,
+					 &queue);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	st = DLB2_QUEUE_MAP_IN_PROG;
+	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		/*
+		 * Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb2_ldb_queue_set_inflight_limit(hw, queue);
+
+		st = DLB2_QUEUE_UNMAPPED;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/*
+	 * If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		st = DLB2_QUEUE_UNMAP_IN_PROG;
+		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	st = DLB2_QUEUE_MAPPED;
+	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
+		DLB2_HW_ERR(hw,
+			    "[%s()] Internal error: no available CQ slots\n",
+			    __func__);
+		return -EFAULT;
+	}
+
+	/*
+	 * QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB2 from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb2_ldb_port_cq_disable(hw, port);
+
+	st = DLB2_QUEUE_UNMAP_IN_PROG;
+	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
+	if (ret)
+		return ret;
+
+	/*
+	 * Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
+
+	/*
+	 * If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
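
Unmap completion is likewise asynchronous. A hedged sketch of issuing an unmap and polling dlb2_hw_pending_port_unmaps() until the removal has drained (the retry bound and return code are illustrative):

/* Unmap 'qid' from 'port_id' and wait, bounded, for the removal to finish. */
static int example_unmap_queue(struct dlb2_hw *hw, u32 domain_id,
			       u32 port_id, u32 qid)
{
	struct dlb2_pending_port_unmaps_args poll_args = {0};
	struct dlb2_unmap_qid_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int i, ret;

	args.port_id = port_id;
	args.qid = qid;

	ret = dlb2_hw_unmap_qid(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		return ret;

	poll_args.port_id = port_id;

	/* resp.id reports the number of removals still in progress. */
	for (i = 0; i < 1000; i++) {
		ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &poll_args,
						  &resp, false, 0);
		if (ret)
			return ret;

		if (resp.id == 0)
			return 0;
	}

	return -ETIMEDOUT;
}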
+
+static void
+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
+				  struct dlb2_pending_port_unmaps_args *args,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+/**
+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress.
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_pending_port_unmaps_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_port *port;
+
+	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB2_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
+
+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 struct dlb2_cmd_response *resp,
+					 bool vdev_req,
+					 unsigned int vdev_id,
+					 struct dlb2_hw_domain **out_domain)
+{
+	struct dlb2_hw_domain *domain;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	*out_domain = domain;
+
+	return 0;
+}
+
+static void dlb2_log_start_domain(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  bool vdev_req,
+				  unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb2_hw_start_domain() - start a scheduling domain
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: start domain arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int
+dlb2_hw_start_domain(struct dlb2_hw *hw,
+		     u32 domain_id,
+		     struct dlb2_start_domain_args *args,
+		     struct dlb2_cmd_response *resp,
+		     bool vdev_req,
+		     unsigned int vdev_id)
+{
+	struct dlb2_list_entry *iter;
+	struct dlb2_dir_pq_pair *dir_queue;
+	struct dlb2_ldb_queue *ldb_queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+	RTE_SET_USED(args);
+	RTE_SET_USED(iter);
+
+	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
+
+	ret = dlb2_verify_start_domain_args(hw,
+					    domain_id,
+					    resp,
+					    vdev_req,
+					    vdev_id,
+					    &domain);
+	if (ret)
+		return ret;
+
+	/*
+	 * Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB2 will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
+	}
+
+	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		u32 vasqid_v = 0;
+		unsigned int offs;
+
+		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
+
+		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
+			dir_queue->id.phys_id;
+
+		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
+	}
+
+	dlb2_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
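
Because starting a domain locks its configuration, dlb2_hw_start_domain() belongs at the very end of the setup flow. A minimal sketch (PF-originated request; the args structure is unused by the implementation above):

/* Start a fully configured domain; later resource configuration is rejected. */
static int example_start_domain(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_start_domain_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	/* All queues and ports for this domain must already be created. */
	ret = dlb2_hw_start_domain(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		return ret; /* e.g. DLB2_ST_DOMAIN_NOT_CONFIGURED or _STARTED */

	/*
	 * From this point on only QID map/unmap and port enable/disable are
	 * allowed, and enqueues to the domain's queues are accepted.
	 */
	return 0;
}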
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}
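
These depth queries are typically used to confirm that a queue has drained before tearing it down. A hedged sketch for the load-balanced case (the loop bound and return code are illustrative; the directed variant is identical apart from the args/function names):

/* Return 0 once the given load-balanced queue reports a depth of zero. */
static int example_wait_ldb_queue_empty(struct dlb2_hw *hw, u32 domain_id,
					u32 queue_id)
{
	struct dlb2_get_ldb_queue_depth_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int i, ret;

	args.queue_id = queue_id;

	for (i = 0; i < 1000; i++) {
		ret = dlb2_hw_get_ldb_queue_depth(hw, domain_id, &args,
						  &resp, false, 0);
		if (ret)
			return ret;

		if (resp.id == 0) /* resp.id holds the current depth */
			return 0;
	}

	return -ETIMEDOUT;
}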
+
+/**
+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * It should be called by the kernel thread responsible for finishing
+ * map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+/**
+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * It should be called by the kernel thread responsible for finishing
+ * map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
+		struct dlb2_hw_domain *domain = &hw->domains[i];
+
+		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
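
Both finish helpers are meant to be driven from the deferred-work context launched via os_schedule_work(). A sketch of such a worker body (the ordering follows the comment in dlb2_hw_map_qid(): finishing unmaps can free the slots that pending maps are waiting for):

/* Deferred-work body: retire pending map/unmap operations for all domains. */
static unsigned int example_map_unmap_worker(struct dlb2_hw *hw)
{
	unsigned int remaining;

	/* Unmaps first: completing them can free slots needed by pending maps. */
	remaining = dlb2_finish_unmap_qid_procedures(hw);
	remaining += dlb2_finish_map_qid_procedures(hw);

	/*
	 * A non-zero count means some CQs still have inflights or tokens
	 * outstanding; the worker should run again until this reaches zero.
	 */
	return remaining;
}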
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
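
Both sparse-mode writes touch a device-global CHP CSR, so they must land before any scheduling domain is configured, i.e. at probe time. A minimal sketch:

/* Probe-time hook: opt all CQs into the sparse (64-byte QE) mode. */
static void example_enable_sparse_cq_mode(struct dlb2_hw *hw)
{
	/* Must run before the first scheduling domain is configured. */
	dlb2_hw_enable_sparse_ldb_cq_mode(hw);
	dlb2_hw_enable_sparse_dir_cq_mode(hw);
}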
+
+/**
+ * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+/**
+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's number of in-use
+ * slots.
+ */
+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
+{
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
+						u32 group_id,
+						u32 val)
+{
+	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
+	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
+	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
+}
+
+/**
+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 64 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
+				    u32 group_id,
+				    u32 val)
+{
+	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
+	struct dlb2_sn_group *group;
+	u32 sn_mode = 0;
+	int mode;
+
+	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/*
+	 * Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
+	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
+		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
+
+	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
+
+	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
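Tying the sequence-number helpers together, a hedged sketch of reconfiguring a group before any ordered queue uses it (512 is one of the allocations accepted by dlb2_set_group_sequence_numbers() above):

/* Grow group 0 to 512 sequence numbers per queue, if nothing uses it yet. */
static int example_config_sn_group(struct dlb2_hw *hw)
{
	int ret;

	/* Refuse to touch a group that already backs an ordered queue. */
	if (dlb2_get_group_sequence_number_occupancy(hw, 0) > 0)
		return -EPERM;

	ret = dlb2_set_group_sequence_numbers(hw, 0, 512);
	if (ret)
		return ret;

	/* Read back the new per-queue allocation; expected to be 512. */
	return dlb2_get_group_sequence_numbers(hw, 0);
}
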
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
deleted file mode 100644
index 2f66b2c71..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ /dev/null
@@ -1,6235 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types_new.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
-}
-
-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	dlb2_list_init_head(&domain->used_ldb_queues);
-	dlb2_list_init_head(&domain->used_dir_pq_pairs);
-	dlb2_list_init_head(&domain->avail_ldb_queues);
-	dlb2_list_init_head(&domain->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->used_ldb_ports[i]);
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&domain->avail_ldb_ports[i]);
-}
-
-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
-{
-	int i;
-	dlb2_list_init_head(&rsrc->avail_domains);
-	dlb2_list_init_head(&rsrc->used_domains);
-	dlb2_list_init_head(&rsrc->avail_ldb_queues);
-	dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
-}
-
-/**
- * dlb2_resource_free() - free device state memory
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function frees software state pointed to by dlb2_hw. This function
- * should be called when resetting the device or unloading the driver.
- */
-void dlb2_resource_free(struct dlb2_hw *hw)
-{
-	int i;
-
-	if (hw->pf.avail_hist_list_entries)
-		dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		if (hw->vdev[i].avail_hist_list_entries)
-			dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
-	}
-}
-
-/**
- * dlb2_resource_init() - initialize the device
- * @hw: pointer to struct dlb2_hw.
- * @ver: device version.
- *
- * This function initializes the device's software state (pointed to by the hw
- * argument) and programs global scheduling QoS registers. This function should
- * be called during driver initialization, and the dlb2_hw structure should
- * be zero-initialized before calling the function.
- *
- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
- * device is reset.
- *
- * Return:
- * Returns 0 upon success, <0 otherwise.
- */
-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	struct dlb2_list_entry *list;
-	unsigned int i;
-	int ret;
-
-	/*
-	 * For optimal load-balancing, ports that map to one or more QIDs in
-	 * common should not be in numerical sequence. The port->QID mapping is
-	 * application dependent, but the driver interleaves port IDs as much
-	 * as possible to reduce the likelihood of sequential ports mapping to
-	 * the same QID(s). This initial allocation of port IDs maximizes the
-	 * average distance between an ID and its immediate neighbors (i.e.
-	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
-	 * 3, etc.).
-	 */
-	const u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
-		0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
-		16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
-		32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
-		48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
-	};
-
-	hw->ver = ver;
-
-	dlb2_init_fn_rsrc_lists(&hw->pf);
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)
-		dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
-
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		dlb2_init_domain_rsrc_lists(&hw->domains[i]);
-		hw->domains[i].parent_func = &hw->pf;
-	}
-
-	/* Give all resources to the PF driver */
-	hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
-	for (i = 0; i < hw->pf.num_avail_domains; i++) {
-		list = &hw->domains[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_domains, list);
-	}
-
-	hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
-	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
-		list = &hw->rsrcs.ldb_queues[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_ldb_queues, list);
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->pf.num_avail_ldb_ports[i] =
-			DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		int cos_id = i >> DLB2_NUM_COS_DOMAINS;
-		struct dlb2_ldb_port *port;
-
-		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
-
-		dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
-			      &port->func_list);
-	}
-
-	hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
-		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
-
-		dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
-	}
-
-	if (hw->ver == DLB2_HW_V2) {
-		hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
-		hw->pf.num_avail_dqed_entries =
-			DLB2_MAX_NUM_DIR_CREDITS(hw->ver);
-	} else {
-		hw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);
-	}
-
-	hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
-				DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-	if (ret)
-		goto unwind;
-
-	ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
-	if (ret)
-		goto unwind;
-
-	for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
-		ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
-					DLB2_MAX_NUM_HIST_LIST_ENTRIES);
-		if (ret)
-			goto unwind;
-
-		ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
-		if (ret)
-			goto unwind;
-	}
-
-	/* Initialize the hardware resource IDs */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		hw->domains[i].id.phys_id = i;
-		hw->domains[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
-		hw->rsrcs.ldb_queues[i].id.phys_id = i;
-		hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
-		hw->rsrcs.ldb_ports[i].id.phys_id = i;
-		hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {
-		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
-		hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
-	}
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		hw->rsrcs.sn_groups[i].id = i;
-		/* Default mode (0) is 64 sequence numbers per queue */
-		hw->rsrcs.sn_groups[i].mode = 0;
-		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
-		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
-	}
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
-
-	return 0;
-
-unwind:
-	dlb2_resource_free(hw);
-
-	return ret;
-}
-
-/**
- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
- * @hw: dlb2_hw handle for a particular device.
- * @ver: device version.
- *
- * Clearing the PMCSR must be done at initialization to make the device fully
- * operational.
- */
-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)
-{
-	u32 pmcsr_dis;
-
-	pmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));
-
-	DLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);
-
-	DLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);
-}
-
-/**
- * dlb2_hw_get_num_resources() - query the PCI function's available resources
- * @hw: dlb2_hw handle for a particular device.
- * @arg: pointer to resource counts.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the number of available resources for the PF or for a
- * VF.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is
- * invalid.
- */
-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
-			      struct dlb2_get_num_resources_args *arg,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_bitmap *map;
-	int i;
-
-	if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
-		return -EINVAL;
-
-	if (vdev_req)
-		rsrcs = &hw->vdev[vdev_id];
-	else
-		rsrcs = &hw->pf;
-
-	arg->num_sched_domains = rsrcs->num_avail_domains;
-
-	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
-
-	arg->num_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
-		arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-	arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
-	arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
-	arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
-	arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
-
-	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
-
-	arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
-
-	map = rsrcs->avail_hist_list_entries;
-
-	arg->num_hist_list_entries = dlb2_bitmap_count(map);
-
-	arg->max_contiguous_hist_list_entries =
-		dlb2_bitmap_longest_set_range(map);
-
-	if (hw->ver == DLB2_HW_V2) {
-		arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
-		arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
-	} else {
-		arg->num_credits = rsrcs->num_avail_entries;
-	}
-	return 0;
-}
-
-static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->num_ldb_credits,
-		      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->num_dir_credits,
-		      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);
-}
-
-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	if (hw->ver == DLB2_HW_V2)
-		dlb2_configure_domain_credits_v2(hw, domain);
-	else
-		dlb2_configure_domain_credits_v2_5(hw, domain);
-}
-
-static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,
-			       struct dlb2_hw_domain *domain,
-			       u32 num_credits,
-			       struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_entries < num_credits) {
-		resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_entries -= num_credits;
-	domain->num_credits += num_credits;
-	return 0;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_next_ldb_port(struct dlb2_hw *hw,
-		       struct dlb2_function_resources *rsrcs,
-		       u32 domain_id,
-		       u32 cos_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	RTE_SET_USED(iter);
-
-	/*
-	 * To reduce the odds of consecutive load-balanced ports mapping to the
-	 * same queue(s), the driver attempts to allocate ports whose neighbors
-	 * are owned by a different domain.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[next].owned ||
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
-			continue;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned ||
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
-			continue;
-
-		return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with one neighbor owned by
-	 * a different domain and the other unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
-			return port;
-
-		if (!hw->rsrcs.ldb_ports[next].owned &&
-		    hw->rsrcs.ldb_ports[prev].owned &&
-		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
-			return port;
-	}
-
-	/*
-	 * Failing that, the driver looks for a port with both neighbors
-	 * unallocated.
-	 */
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {
-		u32 next, prev;
-		u32 phys_id;
-
-		phys_id = port->id.phys_id;
-		next = phys_id + 1;
-		prev = phys_id - 1;
-
-		if (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)
-			next = 0;
-		if (phys_id == 0)
-			prev = DLB2_MAX_NUM_LDB_PORTS - 1;
-
-		if (!hw->rsrcs.ldb_ports[prev].owned &&
-		    !hw->rsrcs.ldb_ports[next].owned)
-			return port;
-	}
-
-	/* If all else fails, the driver returns the next available port. */
-	return DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],
-				   typeof(*port));
-}
-
-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				   struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_ports,
-				   u32 cos_id,
-				   struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_ldb_port *port;
-
-		port = dlb2_get_next_ldb_port(hw, rsrcs,
-					      domain->id.phys_id, cos_id);
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],
-			      &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_ports[cos_id],
-			      &port->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_ports[cos_id] -= num_ports;
-
-	return 0;
-}
-
-
-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_create_sched_domain_args *args,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i, j;
-	int ret;
-
-	if (args->cos_strict) {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			u32 num = args->num_cos_ldb_ports[i];
-
-			/* Allocate ports from specific classes-of-service */
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      num,
-						      i,
-						      resp);
-			if (ret)
-				return ret;
-		}
-	} else {
-		unsigned int k;
-		u32 cos_id;
-
-		/*
-		 * Attempt to allocate from specific class-of-service, but
-		 * fallback to the other classes if that fails.
-		 */
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			for (j = 0; j < args->num_cos_ldb_ports[i]; j++) {
-				for (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {
-					cos_id = (i + k) % DLB2_NUM_COS_DOMAINS;
-
-					ret = __dlb2_attach_ldb_ports(hw,
-								      rsrcs,
-								      domain,
-								      1,
-								      cos_id,
-								      resp);
-					if (ret == 0)
-						break;
-				}
-
-				if (ret)
-					return ret;
-			}
-		}
-	}
-
-	/* Allocate num_ldb_ports from any class-of-service */
-	for (i = 0; i < args->num_ldb_ports; i++) {
-		for (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {
-			ret = __dlb2_attach_ldb_ports(hw,
-						      rsrcs,
-						      domain,
-						      1,
-						      j,
-						      resp);
-			if (ret == 0)
-				break;
-		}
-
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,
-				 struct dlb2_function_resources *rsrcs,
-				 struct dlb2_hw_domain *domain,
-				 u32 num_ports,
-				 struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_ports; i++) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
-					   typeof(*port));
-		if (port == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
-
-		port->domain_id = domain->id;
-		port->owned = true;
-
-		dlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
-	}
-
-	rsrcs->num_avail_dir_pq_pairs -= num_ports;
-
-	return 0;
-}
-
-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_qed_entries < num_credits) {
-		resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_qed_entries -= num_credits;
-	domain->num_ldb_credits += num_credits;
-	return 0;
-}
-
-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,
-				   struct dlb2_hw_domain *domain,
-				   u32 num_credits,
-				   struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_dqed_entries < num_credits) {
-		resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_dqed_entries -= num_credits;
-	domain->num_dir_credits += num_credits;
-	return 0;
-}
-
-
-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,
-					struct dlb2_hw_domain *domain,
-					u32 num_atomic_inflights,
-					struct dlb2_cmd_response *resp)
-{
-	if (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	rsrcs->num_avail_aqed_entries -= num_atomic_inflights;
-	domain->num_avail_aqed_entries += num_atomic_inflights;
-	return 0;
-}
-
-static int
-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,
-				     struct dlb2_hw_domain *domain,
-				     u32 num_hist_list_entries,
-				     struct dlb2_cmd_response *resp)
-{
-	struct dlb2_bitmap *bitmap;
-	int base;
-
-	if (num_hist_list_entries) {
-		bitmap = rsrcs->avail_hist_list_entries;
-
-		base = dlb2_bitmap_find_set_bit_range(bitmap,
-						      num_hist_list_entries);
-		if (base < 0)
-			goto error;
-
-		domain->total_hist_list_entries = num_hist_list_entries;
-		domain->avail_hist_list_entries = num_hist_list_entries;
-		domain->hist_list_entry_base = base;
-		domain->hist_list_entry_offset = 0;
-
-		dlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);
-	}
-	return 0;
-
-error:
-	resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-	return -EINVAL;
-}
-
-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,
-				  struct dlb2_function_resources *rsrcs,
-				  struct dlb2_hw_domain *domain,
-				  u32 num_queues,
-				  struct dlb2_cmd_response *resp)
-{
-	unsigned int i;
-
-	if (rsrcs->num_avail_ldb_queues < num_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_queues; i++) {
-		struct dlb2_ldb_queue *queue;
-
-		queue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
-					    typeof(*queue));
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: domain validation failed\n",
-				    __func__);
-			return -EFAULT;
-		}
-
-		dlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
-
-		queue->domain_id = domain->id;
-		queue->owned = true;
-
-		dlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);
-	}
-
-	rsrcs->num_avail_ldb_queues -= num_queues;
-
-	return 0;
-}
-
-static int
-dlb2_domain_attach_resources(struct dlb2_hw *hw,
-			     struct dlb2_function_resources *rsrcs,
-			     struct dlb2_hw_domain *domain,
-			     struct dlb2_create_sched_domain_args *args,
-			     struct dlb2_cmd_response *resp)
-{
-	int ret;
-
-	ret = dlb2_attach_ldb_queues(hw,
-				     rsrcs,
-				     domain,
-				     args->num_ldb_queues,
-				     resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_ldb_ports(hw,
-				    rsrcs,
-				    domain,
-				    args,
-				    resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_dir_ports(hw,
-				    rsrcs,
-				    domain,
-				    args->num_dir_ports,
-				    resp);
-	if (ret)
-		return ret;
-
-	if (hw->ver == DLB2_HW_V2) {
-		ret = dlb2_attach_ldb_credits(rsrcs,
-					      domain,
-					      args->num_ldb_credits,
-					      resp);
-		if (ret)
-			return ret;
-
-		ret = dlb2_attach_dir_credits(rsrcs,
-					      domain,
-					      args->num_dir_credits,
-					      resp);
-		if (ret)
-			return ret;
-	} else {  /* DLB 2.5 */
-		ret = dlb2_attach_credits(rsrcs,
-					  domain,
-					  args->num_credits,
-					  resp);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_attach_domain_hist_list_entries(rsrcs,
-						   domain,
-						   args->num_hist_list_entries,
-						   resp);
-	if (ret)
-		return ret;
-
-	ret = dlb2_attach_atomic_inflights(rsrcs,
-					   domain,
-					   args->num_atomic_inflights,
-					   resp);
-	if (ret)
-		return ret;
-
-	dlb2_configure_domain_credits(hw, domain);
-
-	domain->configured = true;
-
-	domain->started = false;
-
-	rsrcs->num_avail_domains--;
-
-	return 0;
-}
-
-static int
-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,
-				  struct dlb2_create_sched_domain_args *args,
-				  struct dlb2_cmd_response *resp,
-				  struct dlb2_hw *hw,
-				  struct dlb2_hw_domain **out_domain)
-{
-	u32 num_avail_ldb_ports, req_ldb_ports;
-	struct dlb2_bitmap *avail_hl_entries;
-	unsigned int max_contig_hl_range;
-	struct dlb2_hw_domain *domain;
-	int i;
-
-	avail_hl_entries = rsrcs->avail_hist_list_entries;
-
-	max_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);
-
-	num_avail_ldb_ports = 0;
-	req_ldb_ports = 0;
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		num_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];
-
-		req_ldb_ports += args->num_cos_ldb_ports[i];
-	}
-
-	req_ldb_ports += args->num_ldb_ports;
-
-	if (rsrcs->num_avail_domains < 1) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	domain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
-	if (domain == NULL) {
-		resp->status = DLB2_ST_DOMAIN_UNAVAILABLE;
-		return -EFAULT;
-	}
-
-	if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (req_ldb_ports > num_avail_ldb_ports) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	for (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {
-		if (args->num_cos_ldb_ports[i] >
-		    rsrcs->num_avail_ldb_ports[i]) {
-			resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_ldb_queues > 0 && req_ldb_ports == 0) {
-		resp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
-		return -EINVAL;
-	}
-
-	if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {
-		resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-	if (hw->ver == DLB2_HW_V2_5) {
-		if (rsrcs->num_avail_entries < args->num_credits) {
-			resp->status = DLB2_ST_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	} else {
-		if (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {
-			resp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-		if (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {
-			resp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (max_contig_hl_range < args->num_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,
-				  struct dlb2_create_sched_domain_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create sched domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tNumber of LDB queues:          %d\n",
-		    args->num_ldb_queues);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (any CoS): %d\n",
-		    args->num_ldb_ports);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 0):   %d\n",
-		    args->num_cos_ldb_ports[0]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 1):   %d\n",
-		    args->num_cos_ldb_ports[1]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 2):   %d\n",
-		    args->num_cos_ldb_ports[2]);
-	DLB2_HW_DBG(hw, "\tNumber of LDB ports (CoS 3):   %d\n",
-		    args->num_cos_ldb_ports[3]);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:         %d\n",
-		    args->cos_strict);
-	DLB2_HW_DBG(hw, "\tNumber of DIR ports:           %d\n",
-		    args->num_dir_ports);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:       %d\n",
-		    args->num_atomic_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of hist list entries:   %d\n",
-		    args->num_hist_list_entries);
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_HW_DBG(hw, "\tNumber of LDB credits:         %d\n",
-			    args->num_ldb_credits);
-		DLB2_HW_DBG(hw, "\tNumber of DIR credits:         %d\n",
-			    args->num_dir_credits);
-	} else {
-		DLB2_HW_DBG(hw, "\tNumber of credits:         %d\n",
-			    args->num_credits);
-	}
-}
-
-/**
- * dlb2_hw_create_sched_domain() - create a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @args: scheduling domain creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a scheduling domain containing the resources specified
- * in args. The individual resources (queues, ports, credits) can be configured
- * after creating a scheduling domain.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the domain ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, or the requested domain name
- *	    is already in use.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
-				struct dlb2_create_sched_domain_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	dlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);
-	if (ret)
-		return ret;
-
-	dlb2_init_domain_rsrc_lists(domain);
-
-	ret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to verify args.\n",
-			    __func__);
-
-		return ret;
-	}
-
-	dlb2_list_del(&rsrcs->avail_domains, &domain->func_list);
-
-	dlb2_list_add(&rsrcs->used_domains, &domain->func_list);
-
-	resp->id = (vdev_req) ? domain->id.virt_id : domain->id.phys_id;
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_dir_pq_pair *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -
-	       port->init_tkn_cnt;
-}
-
-static void dlb2_drain_dir_cq(struct dlb2_hw *hw,
-			      struct dlb2_dir_pq_pair *port)
-{
-	unsigned int port_id = port->id.phys_id;
-	u32 cnt;
-
-	/* Return any outstanding tokens */
-	cnt = dlb2_dir_cq_token_count(hw, port);
-
-	if (cnt != 0) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port_id, false);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a batch token return and
-		 * the rest as NOOPS
-		 */
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->cq_token = 1;
-		hcw->lock_id = cnt - 1;
-
-		dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
-
-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		/*
-		 * Can't drain a port if it's not configured, and there's
-		 * nothing to drain if its queue is unconfigured.
-		 */
-		if (!port->port_configured || !port->queue_configured)
-			continue;
-
-		if (toggle_port)
-			dlb2_dir_port_cq_disable(hw, port);
-
-		dlb2_drain_dir_cq(hw, port);
-
-		if (toggle_port)
-			dlb2_dir_port_cq_enable(hw, port);
-	}
-
-	return 0;
-}
-
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);
-}
-
-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_dir_pq_pair *queue)
-{
-	return dlb2_dir_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-		if (dlb2_domain_dir_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to go empty, they would
-	 * have scheduled one or more QEs.
-	 */
-	dlb2_domain_drain_dir_cqs(hw, domain, true);
-
-	return 0;
-}
-
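/*
 * Illustrative sketch only -- not driver code and not part of this patch.
 * It shows the bounded drain/re-check pattern used by the routine above:
 * drain the CQs, test whether the queues emptied, give up after a fixed
 * number of attempts, and drain once more at the end because emptying the
 * queues necessarily scheduled QEs into the CQs.  The loop bound and both
 * callbacks are placeholders.
 */
#include <stdbool.h>

#define MAX_EMPTY_CHECK_LOOPS 1000	/* stand-in for the driver's limit */

static int drain_until_empty(bool (*queues_empty)(void *ctx),
			     void (*drain_cqs)(void *ctx), void *ctx)
{
	int i;

	for (i = 0; i < MAX_EMPTY_CHECK_LOOPS; i++) {
		drain_cqs(ctx);

		if (queues_empty(ctx))
			break;
	}

	if (i == MAX_EMPTY_CHECK_LOOPS)
		return -1;	/* caller logs an internal error */

	/* The final drain returns the QEs the queues just scheduled. */
	drain_cqs(ctx);

	return 0;
}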
-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
-				    struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	/*
-	 * Don't re-enable the port if a removal is pending. The caller should
-	 * mark this port as enabled (if it isn't already), and when the
-	 * removal completes the port will be enabled.
-	 */
-	if (port->num_pending_removals)
-		return;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
-				     struct dlb2_ldb_port *port)
-{
-	u32 reg = 0;
-
-	DLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);
-
-	dlb2_flush_csr(hw);
-}
-
-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,
-				      struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);
-}
-
-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port)
-{
-	u32 cnt;
-
-	cnt = DLB2_CSR_RD(hw,
-			  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));
-
-	/*
-	 * Account for the initial token count, which is used in order to
-	 * provide a CQ with depth less than 8.
-	 */
-
-	return DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -
-		port->init_tkn_cnt;
-}
-
-static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt, tkn_cnt;
-	unsigned int i;
-
-	infl_cnt = dlb2_ldb_cq_inflight_count(hw, port);
-	tkn_cnt = dlb2_ldb_cq_token_count(hw, port);
-
-	if (infl_cnt || tkn_cnt) {
-		struct dlb2_hcw hcw_mem[8], *hcw;
-		void __iomem *pp_addr;
-
-		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
-
-		/* Point hcw to a 64B-aligned location */
-		hcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
-
-		/*
-		 * Program the first HCW for a completion and token return and
-		 * the other HCWs as NOOPS
-		 */
-
-		memset(hcw, 0, 4 * sizeof(*hcw));
-		hcw->qe_comp = (infl_cnt > 0);
-		hcw->cq_token = (tkn_cnt > 0);
-		hcw->lock_id = tkn_cnt - 1;
-
-		/* Return tokens in the first HCW */
-		dlb2_movdir64b(pp_addr, hcw);
-
-		hcw->cq_token = 0;
-
-		/* Issue remaining completions (if any) */
-		for (i = 1; i < infl_cnt; i++)
-			dlb2_movdir64b(pp_addr, hcw);
-
-		os_fence_hcw(hw, pp_addr);
-
-		os_unmap_producer_port(hw, pp_addr);
-	}
-}
-
-static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      bool toggle_port)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if (toggle_port)
-				dlb2_ldb_port_cq_disable(hw, port);
-
-			dlb2_drain_ldb_cq(hw, port);
-
-			if (toggle_port)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	u32 aqed, ldb, atm;
-
-	aqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-						       queue->id.phys_id));
-	ldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						      queue->id.phys_id));
-	atm = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));
-
-	return DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)
-	       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)
-	       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);
-}
-
-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,
-				    struct dlb2_ldb_queue *queue)
-{
-	return dlb2_ldb_queue_depth(hw, queue) == 0;
-}
-
-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings == 0)
-			continue;
-
-		if (!dlb2_ldb_queue_is_empty(hw, queue))
-			return false;
-	}
-
-	return true;
-}
-
-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,
-					   struct dlb2_hw_domain *domain)
-{
-	int i;
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	if (domain->num_pending_removals > 0) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to unmap domain queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	for (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
-		dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-		if (dlb2_domain_mapped_queues_empty(hw, domain))
-			break;
-	}
-
-	if (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: failed to empty queues\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * Drain the CQs one more time. For the queues to have gone empty,
-	 * they must have scheduled one or more QEs into the CQs, so drain
-	 * those as well.
-	 */
-	dlb2_domain_drain_ldb_cqs(hw, domain, true);
-
-	return 0;
-}
-
-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = true;
-
-			dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
-			   u32 id,
-			   bool vdev_req,
-			   unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	rsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;
-
-	if (!vdev_req)
-		return &hw->rsrcs.ldb_queues[id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {
-			if (queue->id.virt_id == id)
-				return queue;
-		}
-	}
-
-	DLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {
-		if (queue->id.virt_id == id)
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,
-						      u32 id,
-						      bool vdev_req,
-						      unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iteration;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_hw_domain *domain;
-	RTE_SET_USED(iteration);
-
-	if (id >= DLB2_MAX_NUM_DOMAINS)
-		return NULL;
-
-	if (!vdev_req)
-		return &hw->domains[id];
-
-	rsrcs = &hw->vdev[vdev_id];
-
-	DLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {
-		if (domain->id.virt_id == id)
-			return domain;
-	}
-
-	return NULL;
-}
-
-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot,
-					   enum dlb2_qid_map_state new_state)
-{
-	enum dlb2_qid_map_state curr_state = port->qid_map[slot].state;
-	struct dlb2_hw_domain *domain;
-	int domain_id;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, domain_id);
-		return -EINVAL;
-	}
-
-	switch (curr_state) {
-	case DLB2_QUEUE_UNMAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			break;
-		case DLB2_QUEUE_MAP_IN_PROG:
-			queue->num_pending_additions++;
-			domain->num_pending_additions++;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAPPED:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			port->num_pending_removals++;
-			domain->num_pending_removals++;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			/* Priority change, nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_MAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			queue->num_mappings++;
-			port->num_mappings++;
-			queue->num_pending_additions--;
-			domain->num_pending_additions--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			queue->num_mappings--;
-			port->num_mappings--;
-			break;
-		case DLB2_QUEUE_MAPPED:
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-			/* Nothing to update */
-			break;
-		default:
-			goto error;
-		}
-		break;
-	case DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:
-		switch (new_state) {
-		case DLB2_QUEUE_UNMAP_IN_PROG:
-			/* Nothing to update */
-			break;
-		case DLB2_QUEUE_UNMAPPED:
-			/*
-			 * An UNMAP_IN_PROG_PENDING_MAP slot briefly
-			 * becomes UNMAPPED before it transitions to
-			 * MAP_IN_PROG.
-			 */
-			queue->num_mappings--;
-			port->num_mappings--;
-			port->num_pending_removals--;
-			domain->num_pending_removals--;
-			break;
-		default:
-			goto error;
-		}
-		break;
-	default:
-		goto error;
-	}
-
-	port->qid_map[slot].state = new_state;
-
-	DLB2_HW_DBG(hw,
-		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return 0;
-
-error:
-	DLB2_HW_ERR(hw,
-		    "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
-		    __func__, queue->id.phys_id, port->id.phys_id,
-		    curr_state, new_state);
-	return -EFAULT;
-}
-
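/*
 * Illustrative sketch only -- not driver code and not part of this patch.
 * The nested switches above implement a small state machine for each
 * {CQ, slot} QID mapping; the table below captures just the legal
 * transitions (the counter bookkeeping stays in the driver).  The enum
 * mirrors the dlb2_qid_map_state values by name only.
 */
#include <stdbool.h>

enum map_state {
	ST_UNMAPPED,
	ST_MAPPED,
	ST_MAP_IN_PROG,
	ST_UNMAP_IN_PROG,
	ST_UNMAP_IN_PROG_PENDING_MAP,
	ST_NUM_STATES
};

static const bool legal_transition[ST_NUM_STATES][ST_NUM_STATES] = {
	[ST_UNMAPPED] = {
		[ST_MAPPED] = true, [ST_MAP_IN_PROG] = true,
	},
	[ST_MAPPED] = {
		[ST_UNMAPPED] = true, [ST_UNMAP_IN_PROG] = true,
		[ST_MAPPED] = true,	/* priority change only */
	},
	[ST_MAP_IN_PROG] = {
		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
	},
	[ST_UNMAP_IN_PROG] = {
		[ST_UNMAPPED] = true, [ST_MAPPED] = true,
		[ST_UNMAP_IN_PROG_PENDING_MAP] = true,
	},
	[ST_UNMAP_IN_PROG_PENDING_MAP] = {
		[ST_UNMAP_IN_PROG] = true, [ST_UNMAPPED] = true,
	},
};

static bool slot_transition_is_legal(enum map_state cur, enum map_state next)
{
	return legal_transition[cur][next];
}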
-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,
-				enum dlb2_qid_map_state state,
-				int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,
-				      enum dlb2_qid_map_state state,
-				      struct dlb2_ldb_queue *queue,
-				      int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		if (port->qid_map[i].state == state &&
-		    port->qid_map[i].qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-/*
- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as
- * their function names imply, and should only be called by the dynamic CQ
- * mapping code.
- */
-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,
-					      struct dlb2_hw_domain *domain,
-					      struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain,
-					     struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int slot, i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			enum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;
-
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-		}
-	}
-}
-
-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,
-						struct dlb2_ldb_port *port,
-						int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,
-					struct dlb2_ldb_port *p,
-					struct dlb2_ldb_queue *q,
-					u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 cq2qid;
-	int i;
-
-	/* Look for a pending or already mapped slot, else an unused slot */
-	if (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&
-	    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&
-	    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));
-
-	cq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)
-		    & DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);
-
-	/* Read-modify-write the QID map register */
-	if (i < 4)
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,
-							  p->id.phys_id));
-	else
-		cq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,
-							  p->id.phys_id));
-
-	if (i == 0 || i == 4)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);
-	if (i == 1 || i == 5)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);
-	if (i == 2 || i == 6)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P2);
-	if (i == 3 || i == 7)
-		DLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);
-
-	if (i < 4)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);
-
-	atm_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_ATM_QID2CQIDIX(q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,
-						p->id.phys_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,
-						  p->id.phys_id / 4));
-
-	switch (p->id.phys_id % 4) {
-	case 0:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		DLB2_BIT_SET(atm_qid2cq,
-			     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq,
-			     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		DLB2_BIT_SET(lsp_qid2cq2,
-			     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),
-		    atm_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX(hw->ver,
-					q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID2CQIDIX2(hw->ver,
-					 q->id.phys_id, p->id.phys_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	p->qid_map[i].qid = q->id.phys_id;
-	p->qid_map[i].priority = priority;
-
-	state = DLB2_QUEUE_MAPPED;
-
-	return dlb2_port_slot_state_transition(hw, p, q, i, state);
-}
-
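/*
 * Illustrative sketch only -- not driver code and not part of this patch.
 * The CQ2QID read-modify-write above packs four queue IDs into one 32-bit
 * register (slots 0-3 live in CQ2QID0, slots 4-7 in CQ2QID1).  The field
 * width below is made up for illustration; the real width and offsets come
 * from the register map.
 */
#include <stdint.h>

#define QID_FIELD_WIDTH 8u	/* placeholder width, not the real one */
#define QID_FIELD_MASK ((1u << QID_FIELD_WIDTH) - 1)

static uint32_t cq2qid_set_slot(uint32_t reg, unsigned int slot, uint32_t qid)
{
	/* slot / 4 picks the register; slot % 4 picks the field within it */
	unsigned int shift = (slot % 4) * QID_FIELD_WIDTH;

	reg &= ~(QID_FIELD_MASK << shift);		/* clear the field */
	reg |= (qid & QID_FIELD_MASK) << shift;		/* insert the QID  */

	return reg;
}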
-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,
-					   struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int slot)
-{
-	u32 ctrl = 0;
-	u32 active;
-	u32 enq;
-
-	/* Set the atomic scheduling haswork bit */
-	active = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,
-							 queue->id.phys_id));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(active,
-				    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,
-				    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	/* Set the non-atomic scheduling haswork bit */
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	enq = DLB2_CSR_RD(hw,
-			  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,
-						       queue->id.phys_id));
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);
-	DLB2_BITS_SET(ctrl,
-		      DLB2_BITS_GET(enq,
-				    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,
-		      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-
-	return 0;
-}
-
-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      u8 slot)
-{
-	u32 ctrl = 0;
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	memset(&ctrl, 0, sizeof(ctrl));
-
-	DLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);
-	DLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);
-	DLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);
-
-	dlb2_flush_csr(hw);
-}
-
-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,
-					      struct dlb2_ldb_queue *queue)
-{
-	u32 infl_lim = 0;
-
-	DLB2_BITS_SET(infl_lim, queue->num_qid_inflights,
-		 DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    infl_lim);
-}
-
-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,
-						struct dlb2_ldb_queue *queue)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),
-		    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-}
-
-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,
-						struct dlb2_hw_domain *domain,
-						struct dlb2_ldb_port *port,
-						struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_list_entry *iter;
-	enum dlb2_qid_map_state state;
-	int slot, ret, i;
-	u32 infl_cnt;
-	u8 prio;
-	RTE_SET_USED(iter);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: non-zero QID inflight count\n",
-			    __func__);
-		return -EINVAL;
-	}
-
-	/*
-	 * Statically map the port and set its corresponding has_work bits.
-	 */
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (!dlb2_port_find_slot_queue(port, state, queue, &slot))
-		return -EINVAL;
-
-	prio = port->qid_map[slot].priority;
-
-	/*
-	 * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
-	 * the port's qid_map state.
-	 */
-	ret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);
-	if (ret)
-		return ret;
-
-	/*
-	 * Ensure IF_status(cq,qid) is 0 before enabling the port to
-	 * prevent spurious schedules from causing the queue's inflight
-	 * count to increase.
-	 */
-	dlb2_ldb_port_clear_queue_if_status(hw, port, slot);
-
-	/* Reset the queue's inflight status */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			state = DLB2_QUEUE_MAPPED;
-			if (!dlb2_port_find_slot_queue(port, state,
-						       queue, &slot))
-				continue;
-
-			dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-		}
-	}
-
-	dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-	/* Re-enable CQs mapped to this queue */
-	dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-	/* If this queue has other mappings pending, clear its inflight limit */
-	if (queue->num_pending_additions > 0)
-		dlb2_ldb_queue_clear_inflight_limit(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
- * @hw: dlb2_hw handle for a particular device.
- * @port: load-balanced port
- * @queue: load-balanced queue
- * @priority: queue servicing priority
- *
- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
- * at a later point, and <0 if an error occurred.
- */
-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,
-					 struct dlb2_ldb_port *port,
-					 struct dlb2_ldb_queue *queue,
-					 u8 priority)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	int domain_id, slot, ret;
-	u32 infl_cnt;
-
-	domain_id = port->domain_id.phys_id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, false, 0);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: unable to find domain %d\n",
-			    __func__, port->domain_id.phys_id);
-		return -EINVAL;
-	}
-
-	/*
-	 * Set the QID inflight limit to 0 to prevent further scheduling of the
-	 * queue.
-	 */
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), 0);
-
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {
-		DLB2_HW_ERR(hw,
-			    "Internal error: No available unmapped slots\n");
-		return -EFAULT;
-	}
-
-	port->qid_map[slot].qid = queue->id.phys_id;
-	port->qid_map[slot].priority = priority;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);
-	if (ret)
-		return ret;
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	/*
-	 * Disable the affected CQ, and the CQs already mapped to the QID,
-	 * before reading the QID's inflight count a second time. There is an
-	 * unlikely race in which the QID may schedule one more QE after we
-	 * read an inflight count of 0, and disabling the CQs guarantees that
-	 * the race will not occur after a re-read of the inflight count
-	 * register.
-	 */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-	infl_cnt = DLB2_CSR_RD(hw,
-			       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,
-						    queue->id.phys_id));
-
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-		if (port->enabled)
-			dlb2_ldb_port_cq_enable(hw, port);
-
-		dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-		/*
-		 * The queue is owed completions so it's not safe to map it
-		 * yet. Schedule a kernel thread to complete the mapping later,
-		 * once software has completed all the queue's inflight events.
-		 */
-		if (!os_worker_active(hw))
-			os_schedule_work(hw);
-
-		return 1;
-	}
-
-	return dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-}
-
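/*
 * Illustrative sketch only -- not driver code and not part of this patch.
 * It distills the check / quiesce / re-check pattern above: a first
 * inflight-count read of zero is only advisory, because the QID can still
 * schedule a QE afterwards; only a second zero read taken while the CQs
 * are disabled proves the queue is idle.  All callbacks are placeholders.
 */
#include <stdbool.h>

struct quiesce_ops {
	unsigned int (*read_inflight)(void *ctx);
	void (*disable_cqs)(void *ctx);
	void (*enable_cqs)(void *ctx);
	void (*defer_to_worker)(void *ctx);
};

/* Returns 0 when it is safe to finish the map, 1 when it was deferred. */
static int try_quiesce_then_map(const struct quiesce_ops *ops, void *ctx)
{
	if (ops->read_inflight(ctx) != 0) {
		ops->defer_to_worker(ctx);
		return 1;
	}

	ops->disable_cqs(ctx);

	if (ops->read_inflight(ctx) != 0) {
		/* Lost the race: undo the disable and defer. */
		ops->enable_cqs(ctx);
		ops->defer_to_worker(ctx);
		return 1;
	}

	return 0;	/* CQs stay disabled; the caller completes the map */
}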
-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain,
-					struct dlb2_ldb_port *port)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		u32 infl_cnt;
-		struct dlb2_ldb_queue *queue;
-		int qid;
-
-		if (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)
-			continue;
-
-		qid = port->qid_map[i].qid;
-
-		queue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);
-
-		if (queue == NULL) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: unable to find queue %d\n",
-				    __func__, qid);
-			continue;
-		}
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))
-			continue;
-
-		/*
-		 * Disable the affected CQ, and the CQs already mapped to the
-		 * QID, before reading the QID's inflight count a second time.
-		 * There is an unlikely race in which the QID may schedule one
-		 * more QE after we read an inflight count of 0, and disabling
-		 * the CQs guarantees that the race will not occur after a
-		 * re-read of the inflight count register.
-		 */
-		if (port->enabled)
-			dlb2_ldb_port_cq_disable(hw, port);
-
-		dlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);
-
-		infl_cnt = DLB2_CSR_RD(hw,
-				       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));
-
-		if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {
-			if (port->enabled)
-				dlb2_ldb_port_cq_enable(hw, port);
-
-			dlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);
-
-			continue;
-		}
-
-		dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
-	}
-}
-
-static unsigned int
-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_additions == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_map_port(hw, domain, port);
-	}
-
-	return domain->num_pending_additions;
-}
-
-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,
-				   struct dlb2_ldb_port *port,
-				   struct dlb2_ldb_queue *queue)
-{
-	enum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;
-	u32 lsp_qid2cq2;
-	u32 lsp_qid2cq;
-	u32 atm_qid2cq;
-	u32 cq2priov;
-	u32 queue_id;
-	u32 port_id;
-	int i;
-
-	/* Find the queue's slot */
-	mapped = DLB2_QUEUE_MAPPED;
-	in_progress = DLB2_QUEUE_UNMAP_IN_PROG;
-	pending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-	if (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&
-	    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: QID %d isn't mapped\n",
-			    __func__, __LINE__, queue->id.phys_id);
-		return -EFAULT;
-	}
-
-	port_id = port->id.phys_id;
-	queue_id = queue->id.phys_id;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));
-
-	cq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);
-
-	atm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,
-							 port_id / 4));
-
-	lsp_qid2cq = DLB2_CSR_RD(hw,
-				 DLB2_LSP_QID2CQIDIX(hw->ver,
-						queue_id, port_id / 4));
-
-	lsp_qid2cq2 = DLB2_CSR_RD(hw,
-				  DLB2_LSP_QID2CQIDIX2(hw->ver,
-						  queue_id, port_id / 4));
-
-	switch (port_id % 4) {
-	case 0:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));
-		break;
-
-	case 1:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));
-		break;
-
-	case 2:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));
-		break;
-
-	case 3:
-		atm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));
-		lsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq);
-
-	DLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),
-		    lsp_qid2cq2);
-
-	dlb2_flush_csr(hw);
-
-	unmapped = DLB2_QUEUE_UNMAPPED;
-
-	return dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);
-}
-
-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,
-				 struct dlb2_hw_domain *domain,
-				 struct dlb2_ldb_port *port,
-				 struct dlb2_ldb_queue *queue,
-				 u8 prio)
-{
-	if (domain->started)
-		return dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);
-	else
-		return dlb2_ldb_port_map_qid_static(hw, port, queue, prio);
-}
-
-static void
-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   int slot)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_ldb_queue *queue;
-
-	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
-
-	state = port->qid_map[slot].state;
-
-	/* Update the QID2CQIDX and CQ2QID vectors */
-	dlb2_ldb_port_unmap_qid(hw, port, queue);
-
-	/*
-	 * Ensure the QID will not be serviced by this {CQ, slot} by clearing
-	 * the has_work bits
-	 */
-	dlb2_ldb_port_clear_has_work_bits(hw, port, slot);
-
-	/* Reset the {CQ, slot} to its default state */
-	dlb2_ldb_port_set_queue_if_status(hw, port, slot);
-
-	/* Re-enable the CQ if it was not manually disabled by the user */
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	/*
-	 * If there is a mapping that is pending this slot's removal, perform
-	 * the mapping now.
-	 */
-	if (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {
-		struct dlb2_ldb_port_qid_map *map;
-		struct dlb2_ldb_queue *map_queue;
-		u8 prio;
-
-		map = &port->qid_map[slot];
-
-		map->qid = map->pending_qid;
-		map->priority = map->pending_priority;
-
-		map_queue = &hw->rsrcs.ldb_queues[map->qid];
-		prio = map->priority;
-
-		dlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);
-	}
-}
-
-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain,
-					  struct dlb2_ldb_port *port)
-{
-	u32 infl_cnt;
-	int i;
-
-	if (port->num_pending_removals == 0)
-		return false;
-
-	/*
-	 * The unmap requires all the CQ's outstanding inflights to be
-	 * completed.
-	 */
-	infl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,
-						       port->id.phys_id));
-	if (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)
-		return false;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map;
-
-		map = &port->qid_map[i];
-
-		if (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&
-		    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)
-			continue;
-
-		dlb2_domain_finish_unmap_port_slot(hw, domain, port, i);
-	}
-
-	return true;
-}
-
-static unsigned int
-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (!domain->configured || domain->num_pending_removals == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			dlb2_domain_finish_unmap_port(hw, domain, port);
-	}
-
-	return domain->num_pending_removals;
-}
-
-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			port->enabled = false;
-
-			dlb2_ldb_port_cq_disable(hw, port);
-		}
-	}
-}
-
-static void dlb2_log_reset_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 reset domain:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 vpp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		unsigned int offs;
-		u32 virt_id;
-
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);
-	}
-}
-
-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,
-					 struct dlb2_hw_domain *domain,
-					 unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 vpp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			unsigned int offs;
-			u32 virt_id;
-
-			if (hw->virt_mode == DLB2_VIRT_SRIOV)
-				virt_id = port->id.virt_id;
-			else
-				virt_id = port->id.phys_id;
-
-			offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,
-						       port->id.phys_id),
-				    int_en);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,
-						      port->id.phys_id),
-				    wd_en);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 int_en = 0;
-	u32 wd_en = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-			    int_en);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),
-			    wd_en);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	int domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),
-				    0);
-
-			idx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +
-				queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void
-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,
-					  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	unsigned long max_ports;
-	int domain_offset;
-	RTE_SET_USED(iter);
-
-	max_ports = DLB2_MAX_NUM_DIR_PORTS(hw->ver);
-
-	domain_offset = domain->id.phys_id * max_ports;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		int idx = domain_offset + queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);
-
-		if (queue->id.vdev_owned) {
-			idx = queue->id.vdev_id * max_ports + queue->id.virt_id;
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);
-
-			DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);
-		}
-	}
-}
-
-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,
-					       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 chk_en = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_CHP_SN_CHK_ENBL(hw->ver,
-							 port->id.phys_id),
-				    chk_en);
-		}
-	}
-}
-
-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			int j;
-
-			for (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {
-				if (dlb2_ldb_cq_inflight_count(hw, port) == 0)
-					break;
-			}
-
-			if (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
-					    __func__, port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		port->enabled = false;
-
-		dlb2_dir_port_cq_disable(hw, port);
-	}
-}
-
-static void
-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	u32 pp_v = 0;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-			    pp_v);
-	}
-}
-
-static void
-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	u32 pp_v = 0;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			DLB2_CSR_WR(hw,
-				    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-				    pp_v);
-		}
-	}
-}
-
-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_ldb_queue *queue;
-	int i;
-	RTE_SET_USED(iter);
-
-	/*
-	 * Confirm that all the domain's queues' inflight counts and AQED
-	 * active counts are 0.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (!dlb2_ldb_queue_is_empty(hw, queue)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty ldb queue %d\n",
-				    __func__, queue->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {
-			if (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||
-			    dlb2_ldb_cq_token_count(hw, ldb_port)) {
-				DLB2_HW_ERR(hw,
-					    "[%s()] Internal error: failed to empty ldb port %d\n",
-					    __func__, ldb_port->id.phys_id);
-				return -EFAULT;
-			}
-		}
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
-		if (!dlb2_dir_queue_is_empty(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir queue %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-
-		if (dlb2_dir_cq_token_count(hw, dir_port)) {
-			DLB2_HW_ERR(hw,
-				    "[%s()] Internal error: failed to empty dir port %d\n",
-				    __func__, dir_port->id.phys_id);
-			return -EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						   struct dlb2_ldb_port *port)
-{
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_LDB_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP2PP(offs),
-			    DLB2_SYS_VF_LDB_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_LDB_VPP_V(offs),
-			    DLB2_SYS_VF_LDB_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_PP_V(port->id.phys_id),
-		    DLB2_SYS_LDB_PP_V_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_DSBL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_DEPTH_RST);
-
-	if (hw->ver != DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),
-			    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_INFL_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_LIM_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_BASE_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_POP_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_ADDR_U_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_LDB_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_LDB_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID0_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2QID1_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ2PRIOV_RST);
-}
-
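/*
 * Illustrative sketch only -- not driver code and not part of this patch.
 * It shows how the per-vdev virtual producer port register index used
 * above is derived: SR-IOV uses the port's virtual ID, while Scalable IOV
 * uses the physical ID because PP accesses go through the PF MMIO window.
 * MAX_LDB_PORTS and the function name are stand-ins.
 */
#define MAX_LDB_PORTS 64	/* stand-in for DLB2_MAX_NUM_LDB_PORTS */

static unsigned int ldb_vpp_reg_index(unsigned int vdev_id, int is_sriov,
				      unsigned int virt_id,
				      unsigned int phys_id)
{
	unsigned int id = is_sriov ? virt_id : phys_id;

	return vdev_id * MAX_LDB_PORTS + id;
}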
-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)
-			__dlb2_domain_reset_ldb_port_registers(hw, port);
-	}
-}
-
-static void
-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-				       struct dlb2_dir_pq_pair *port)
-{
-	u32 reg = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_DSBL_RST);
-
-	DLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);
-	else
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_DEPTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_INT_ENB_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ISR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_L_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_ADDR_U_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_AT_RST);
-
-	if (hw->ver == DLB2_HW_V2)
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),
-			    DLB2_SYS_DIR_CQ_AT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_PASID_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ_FMT_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),
-		    DLB2_SYS_DIR_CQ2VF_PF_RO_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),
-		    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ2VAS_RST);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),
-		    DLB2_SYS_DIR_PP2VDEV_RST);
-
-	if (port->id.vdev_owned) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		offs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			virt_id;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP2PP(offs),
-			    DLB2_SYS_VF_DIR_VPP2PP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_VF_DIR_VPP_V(offs),
-			    DLB2_SYS_VF_DIR_VPP_V_RST);
-	}
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_PP_V(port->id.phys_id),
-		    DLB2_SYS_DIR_PP_V_RST);
-}
-
-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,
-						 struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		__dlb2_domain_reset_dir_port_registers(hw, port);
-}
-
-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		unsigned int queue_id = queue->id.phys_id;
-		int i;
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_LDB_INFL_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),
-			    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),
-			    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_ITS(queue_id),
-			    DLB2_SYS_LDB_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),
-			    DLB2_CHP_ORD_QID_SN_MAP_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_V(queue_id),
-			    DLB2_SYS_LDB_QID_V_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_LDB_QID_CFG_V(queue_id),
-			    DLB2_SYS_LDB_QID_CFG_V_RST);
-
-		if (queue->sn_cfg_valid) {
-			u32 offs[2];
-
-			offs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-			offs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,
-							 queue->sn_slot);
-
-			DLB2_CSR_WR(hw,
-				    offs[queue->sn_group],
-				    DLB2_RO_GRP_0_SLT_SHFT_RST);
-		}
-
-		for (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),
-				    DLB2_LSP_QID2CQIDIX2_00_RST);
-
-			DLB2_CSR_WR(hw,
-				    DLB2_ATM_QID2CQIDIX(queue_id, i),
-				    DLB2_ATM_QID2CQIDIX_00_RST);
-		}
-	}
-}
-
-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,
-						  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *queue;
-	RTE_SET_USED(iter);
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,
-						       queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,
-							  queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,
-							 queue->id.phys_id),
-			    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_ITS_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_SYS_DIR_QID_V(queue->id.phys_id),
-			    DLB2_SYS_DIR_QID_V_RST);
-	}
-}
-
-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,
-					struct dlb2_hw_domain *domain)
-{
-	dlb2_domain_reset_ldb_port_registers(hw, domain);
-
-	dlb2_domain_reset_dir_port_registers(hw, domain);
-
-	dlb2_domain_reset_ldb_queue_registers(hw, domain);
-
-	dlb2_domain_reset_dir_queue_registers(hw, domain);
-
-	if (hw->ver == DLB2_HW_V2) {
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_LDB_VAS_CRD_RST);
-
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_DIR_VAS_CRD_RST);
-	} else
-		DLB2_CSR_WR(hw,
-			    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),
-			    DLB2_CHP_CFG_VAS_CRD_RST);
-}
-
-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_dir_pq_pair *tmp_dir_port;
-	struct dlb2_ldb_queue *tmp_ldb_queue;
-	struct dlb2_ldb_port *tmp_ldb_port;
-	struct dlb2_list_entry *iter1;
-	struct dlb2_list_entry *iter2;
-	struct dlb2_function_resources *rsrcs;
-	struct dlb2_dir_pq_pair *dir_port;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_ldb_port *ldb_port;
-	struct dlb2_list_head *list;
-	int ret, i;
-	RTE_SET_USED(tmp_dir_port);
-	RTE_SET_USED(tmp_ldb_queue);
-	RTE_SET_USED(tmp_ldb_port);
-	RTE_SET_USED(iter1);
-	RTE_SET_USED(iter2);
-
-	rsrcs = domain->parent_func;
-
-	/* Move the domain's ldb queues to the function's avail list */
-	list = &domain->used_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		if (ldb_queue->sn_cfg_valid) {
-			struct dlb2_sn_group *grp;
-
-			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
-
-			dlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);
-			ldb_queue->sn_cfg_valid = false;
-		}
-
-		ldb_queue->owned = false;
-		ldb_queue->num_mappings = 0;
-		ldb_queue->num_pending_additions = 0;
-
-		dlb2_list_del(&domain->used_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	list = &domain->avail_ldb_queues;
-	DLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
-		ldb_queue->owned = false;
-
-		dlb2_list_del(&domain->avail_ldb_queues,
-			      &ldb_queue->domain_list);
-		dlb2_list_add(&rsrcs->avail_ldb_queues,
-			      &ldb_queue->func_list);
-		rsrcs->num_avail_ldb_queues++;
-	}
-
-	/* Move the domain's ldb ports to the function's avail list */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		list = &domain->used_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			int j;
-
-			ldb_port->owned = false;
-			ldb_port->configured = false;
-			ldb_port->num_pending_removals = 0;
-			ldb_port->num_mappings = 0;
-			ldb_port->init_tkn_cnt = 0;
-			ldb_port->cq_depth = 0;
-			for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
-				ldb_port->qid_map[j].state =
-					DLB2_QUEUE_UNMAPPED;
-
-			dlb2_list_del(&domain->used_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-
-		list = &domain->avail_ldb_ports[i];
-		DLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,
-				       iter1, iter2) {
-			ldb_port->owned = false;
-
-			dlb2_list_del(&domain->avail_ldb_ports[i],
-				      &ldb_port->domain_list);
-			dlb2_list_add(&rsrcs->avail_ldb_ports[i],
-				      &ldb_port->func_list);
-			rsrcs->num_avail_ldb_ports[i]++;
-		}
-	}
-
-	/* Move the domain's dir ports to the function's avail list */
-	list = &domain->used_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-		dir_port->port_configured = false;
-		dir_port->init_tkn_cnt = 0;
-
-		dlb2_list_del(&domain->used_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	list = &domain->avail_dir_pq_pairs;
-	DLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
-		dir_port->owned = false;
-
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &dir_port->domain_list);
-
-		dlb2_list_add(&rsrcs->avail_dir_pq_pairs,
-			      &dir_port->func_list);
-		rsrcs->num_avail_dir_pq_pairs++;
-	}
-
-	/* Return hist list entries to the function */
-	ret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,
-				    domain->hist_list_entry_base,
-				    domain->total_hist_list_entries);
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: domain hist list base does not match the function's bitmap.\n",
-			    __func__);
-		return ret;
-	}
-
-	domain->total_hist_list_entries = 0;
-	domain->avail_hist_list_entries = 0;
-	domain->hist_list_entry_base = 0;
-	domain->hist_list_entry_offset = 0;
-
-	if (hw->ver == DLB2_HW_V2_5) {
-		rsrcs->num_avail_entries += domain->num_credits;
-		domain->num_credits = 0;
-	} else {
-		rsrcs->num_avail_qed_entries += domain->num_ldb_credits;
-		domain->num_ldb_credits = 0;
-
-		rsrcs->num_avail_dqed_entries += domain->num_dir_credits;
-		domain->num_dir_credits = 0;
-	}
-	rsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;
-	rsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;
-	domain->num_avail_aqed_entries = 0;
-	domain->num_used_aqed_entries = 0;
-
-	domain->num_pending_removals = 0;
-	domain->num_pending_additions = 0;
-	domain->configured = false;
-	domain->started = false;
-
-	/*
-	 * Move the domain out of the used_domains list and back to the
-	 * function's avail_domains list.
-	 */
-	dlb2_list_del(&rsrcs->used_domains, &domain->func_list);
-	dlb2_list_add(&rsrcs->avail_domains, &domain->func_list);
-	rsrcs->num_avail_domains++;
-
-	return 0;
-}
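
The credit return above is the main v2.0/v2.5 split in the software-state reset: v2.5 hands a single combined pool back to the parent function, while v2.0 returns the load-balanced (QED) and directed (DQED) pools separately. A minimal standalone sketch of that rule, using simplified stand-in structs rather than the driver's types:

#include <stdbool.h>

struct func_pool { unsigned combined, qed, dqed; };
struct dom_pool  { unsigned combined, ldb, dir; };

/* Return a domain's credits to its parent function on reset. */
static void return_credits(bool is_v2_5, struct func_pool *f, struct dom_pool *d)
{
	if (is_v2_5) {
		f->combined += d->combined;   /* single combined pool */
		d->combined = 0;
	} else {
		f->qed  += d->ldb;            /* load-balanced credits */
		f->dqed += d->dir;            /* directed credits */
		d->ldb = d->dir = 0;
	}
}
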
-
-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,
-					    struct dlb2_hw_domain *domain,
-					    struct dlb2_ldb_queue *queue)
-{
-	struct dlb2_ldb_port *port = NULL;
-	int ret, i;
-
-	/* If a domain has LDB queues, it must have LDB ports */
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		port = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],
-					  typeof(*port));
-		if (port)
-			break;
-	}
-
-	if (port == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: No configured LDB ports\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/* If necessary, free up a QID slot in this CQ */
-	if (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {
-		struct dlb2_ldb_queue *mapped_queue;
-
-		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
-
-		ret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);
-		if (ret)
-			return ret;
-	}
-
-	ret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);
-	if (ret)
-		return ret;
-
-	return dlb2_domain_drain_mapped_queues(hw, domain);
-}
-
-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,
-					     struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-	RTE_SET_USED(iter);
-
-	/* If the domain hasn't been started, there's no traffic to drain */
-	if (!domain->started)
-		return 0;
-
-	/*
-	 * Pre-condition: the unattached queue must not have any outstanding
-	 * completions. This is ensured by calling dlb2_domain_drain_ldb_cqs()
-	 * prior to this in dlb2_domain_drain_mapped_queues().
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if (queue->num_mappings != 0 ||
-		    dlb2_ldb_queue_is_empty(hw, queue))
-			continue;
-
-		ret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_reset_domain() - reset a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function resets and frees a DLB 2.0 scheduling domain and its associated
- * resources.
- *
- * Pre-condition: the driver must ensure software has stopped sending QEs
- * through this domain's producer ports before invoking this function, or
- * undefined behavior will result.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise.
- *
- * EINVAL - Invalid domain ID, or the domain is not configured.
- * EFAULT - Internal error. (Possibly caused if the pre-condition above is
- *	    not met.)
- * ETIMEDOUT - Hardware component didn't reset in the expected time.
- */
-int dlb2_reset_domain(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL || !domain->configured)
-		return -EINVAL;
-
-	/* Disable VPPs */
-	if (vdev_req) {
-		dlb2_domain_disable_dir_vpps(hw, domain, vdev_id);
-
-		dlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);
-	}
-
-	/* Disable CQ interrupts */
-	dlb2_domain_disable_dir_port_interrupts(hw, domain);
-
-	dlb2_domain_disable_ldb_port_interrupts(hw, domain);
-
-	/*
-	 * For each queue owned by this domain, disable its write permissions to
-	 * cause any traffic sent to it to be dropped. Well-behaved software
-	 * should not be sending QEs at this point.
-	 */
-	dlb2_domain_disable_dir_queue_write_perms(hw, domain);
-
-	dlb2_domain_disable_ldb_queue_write_perms(hw, domain);
-
-	/* Turn off completion tracking on all the domain's PPs. */
-	dlb2_domain_disable_ldb_seq_checks(hw, domain);
-
-	/*
-	 * Disable the LDB CQs and drain them in order to complete the map and
-	 * unmap procedures, which require zero CQ inflights and zero QID
-	 * inflights respectively.
-	 */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_ldb_cqs(hw, domain, false);
-
-	ret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_finish_map_qid_procedures(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Re-enable the CQs in order to drain the mapped queues. */
-	dlb2_domain_enable_ldb_cqs(hw, domain);
-
-	ret = dlb2_domain_drain_mapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	ret = dlb2_domain_drain_unmapped_queues(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Done draining LDB QEs, so disable the CQs. */
-	dlb2_domain_disable_ldb_cqs(hw, domain);
-
-	dlb2_domain_drain_dir_queues(hw, domain);
-
-	/* Done draining DIR QEs, so disable the CQs. */
-	dlb2_domain_disable_dir_cqs(hw, domain);
-
-	/* Disable PPs */
-	dlb2_domain_disable_dir_producer_ports(hw, domain);
-
-	dlb2_domain_disable_ldb_producer_ports(hw, domain);
-
-	ret = dlb2_domain_verify_reset_success(hw, domain);
-	if (ret)
-		return ret;
-
-	/* Reset the QID and port state. */
-	dlb2_domain_reset_registers(hw, domain);
-
-	/* Hardware reset complete. Reset the domain's software state */
-	return dlb2_domain_reset_software_state(hw, domain);
-}
-
-static void
-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_ldb_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                  %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n",
-		    args->num_sequence_numbers);
-	DLB2_HW_DBG(hw, "\tNumber of QID inflights:    %d\n",
-		    args->num_qid_inflights);
-	DLB2_HW_DBG(hw, "\tNumber of ATM inflights:    %d\n",
-		    args->num_atomic_inflights);
-}
-
-static int
-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,
-				  struct dlb2_ldb_queue *queue,
-				  struct dlb2_create_ldb_queue_args *args)
-{
-	int slot = -1;
-	int i;
-
-	queue->sn_cfg_valid = false;
-
-	if (args->num_sequence_numbers == 0)
-		return 0;
-
-	for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-		struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-		if (group->sequence_numbers_per_queue ==
-		    args->num_sequence_numbers &&
-		    !dlb2_sn_group_full(group)) {
-			slot = dlb2_sn_group_alloc_slot(group);
-			if (slot >= 0)
-				break;
-		}
-	}
-
-	if (slot == -1) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no sequence number slots available\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	queue->sn_cfg_valid = true;
-	queue->sn_group = i;
-	queue->sn_slot = slot;
-	return 0;
-}
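
The slot search above walks the sequence-number groups looking for one whose per-queue SN allocation matches the request and that still has a free slot. A self-contained sketch of the same search over a simplified group table (hypothetical types, not the driver's):

/* Hypothetical, simplified SN group: 'used' is a bitmask of taken slots. */
struct sn_grp { unsigned sn_per_queue; unsigned num_slots; unsigned long used; };

/* Return the first free slot in a matching, non-full group, or -1. */
static int find_sn_slot(struct sn_grp *grps, int ngrps, unsigned want, int *grp_out)
{
	int g, s;

	for (g = 0; g < ngrps; g++) {
		if (grps[g].sn_per_queue != want)
			continue;
		for (s = 0; s < (int)grps[g].num_slots; s++) {
			if (!(grps[g].used & (1UL << s))) {
				grps[g].used |= 1UL << s;
				*grp_out = g;
				return s;
			}
		}
	}
	return -1;
}
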
-
-static int
-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_ldb_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int i;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
-	if (!queue) {
-		resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_sequence_numbers) {
-		for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
-			struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];
-
-			if (group->sequence_numbers_per_queue ==
-			    args->num_sequence_numbers &&
-			    !dlb2_sn_group_full(group))
-				break;
-		}
-
-		if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
-			resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	if (args->num_qid_inflights > 4096) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	/* Inflights must be <= number of sequence numbers if ordered */
-	if (args->num_sequence_numbers != 0 &&
-	    args->num_qid_inflights > args->num_sequence_numbers) {
-		resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;
-		return -EINVAL;
-	}
-
-	if (domain->num_avail_aqed_entries < args->num_atomic_inflights) {
-		resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	if (args->num_atomic_inflights &&
-	    args->lock_id_comp_level != 0 &&
-	    args->lock_id_comp_level != 64 &&
-	    args->lock_id_comp_level != 128 &&
-	    args->lock_id_comp_level != 256 &&
-	    args->lock_id_comp_level != 512 &&
-	    args->lock_id_comp_level != 1024 &&
-	    args->lock_id_comp_level != 2048 &&
-	    args->lock_id_comp_level != 4096 &&
-	    args->lock_id_comp_level != 65536) {
-		resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-
-	return 0;
-}
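
The QID-inflight checks above boil down to: at most 4096 inflights, and, for ordered queues (num_sequence_numbers != 0), no more inflights than sequence numbers. A quick standalone restatement, for reference only:

#include <stdbool.h>

/* Mirror of the two inflight checks in the verify function above. */
static bool qid_inflights_ok(unsigned num_sn, unsigned num_inflights)
{
	if (num_inflights > 4096)
		return false;
	if (num_sn != 0 && num_inflights > num_sn)
		return false;
	return true;
}
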
-
-static int
-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,
-				struct dlb2_hw_domain *domain,
-				struct dlb2_ldb_queue *queue,
-				struct dlb2_create_ldb_queue_args *args)
-{
-	int ret;
-	ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);
-	if (ret)
-		return ret;
-
-	/* Attach QID inflights */
-	queue->num_qid_inflights = args->num_qid_inflights;
-
-	/* Attach atomic inflights */
-	queue->aqed_limit = args->num_atomic_inflights;
-
-	domain->num_avail_aqed_entries -= args->num_atomic_inflights;
-	domain->num_used_aqed_entries += args->num_atomic_inflights;
-
-	return 0;
-}
-
-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_ldb_queue *queue,
-				     struct dlb2_create_ldb_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	struct dlb2_sn_group *sn_group;
-	unsigned int offs;
-	u32 reg = 0;
-	u32 alimit;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);
-
-	/*
-	 * Unordered QIDs get 4K inflights, ordered get as many as the number
-	 * of sequence numbers.
-	 */
-	DLB2_BITS_SET(reg, args->num_qid_inflights,
-		      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,
-						  queue->id.phys_id), reg);
-
-	alimit = queue->aqed_limit;
-
-	if (alimit > DLB2_MAX_NUM_AQED_ENTRIES)
-		alimit = DLB2_MAX_NUM_AQED_ENTRIES;
-
-	reg = 0;
-	DLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	switch (args->lock_id_comp_level) {
-	case 64:
-		DLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 128:
-		DLB2_BITS_SET(reg, 2, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 256:
-		DLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 512:
-		DLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 1024:
-		DLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 2048:
-		DLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	case 4096:
-		DLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);
-		break;
-	default:
-		/* No compression by default */
-		break;
-	}
-
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);
-
-	reg = 0;
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);
-
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,
-						 queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	/*
-	 * This register limits the number of inflight flows a queue can have
-	 * at one time.  It has an upper bound of 2048, but can be
-	 * over-subscribed. 512 is chosen so that a single queue does not use
-	 * the entire atomic storage, but can use a substantial portion if
-	 * needed.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);
-
-	/* Configure SNs */
-	reg = 0;
-	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
-	DLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);
-	DLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);
-	DLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);
-
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);
-	DLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),
-		 DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_LDB_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.virt_id,
-			      DLB2_SYS_LDB_QID2VQID_VQID);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);
-}
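
The lock_id_comp_level switch above maps the power-of-two levels 64..4096 onto compression codes 1..7 (i.e. log2(level) - 5) and leaves everything else, including 0 and 65536, at the default of 0 (no compression). A standalone equivalent, purely illustrative:

/* Equivalent of the switch above: 64 -> 1, 128 -> 2, ..., 4096 -> 7, else 0. */
static unsigned lock_id_comp_code(unsigned level)
{
	unsigned code = 0;

	if (level >= 64 && level <= 4096 && (level & (level - 1)) == 0) {
		while ((64u << code) < level)
			code++;
		code += 1;
	}
	return code;
}
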
-
-/**
- * dlb2_hw_create_ldb_queue() - create a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    the domain has already been started, or the requested queue name is
- *	    already in use.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_ldb_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	int ret;
-
-	dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);
-
-	if (ret) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
-			    __func__, __LINE__);
-		return ret;
-	}
-
-	dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	queue->num_mappings = 0;
-
-	queue->configured = true;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_ldb_port *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);
-}
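
The comment above notes that the producer-port ID is carried in bits 17:12 of the producer-port MMIO address. Purely as an illustration of that statement (not driver code):

#include <stdint.h>

/* Producer port ID lives in address bits 17:12 (6 bits -> 0..63). */
static unsigned pp_id_from_addr(uint64_t pp_mmio_addr)
{
	return (unsigned)((pp_mmio_addr >> 12) & 0x3F);
}
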
-
-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_ldb_port *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_ldb_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 hl_base = 0;
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg,
-		 !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	port->cq_depth = args->cq_depth;
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg,
-			      port->init_tkn_cnt,
-			      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_LDB_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_LDB_CQ_WPTR_RST);
-
-	reg = 0;
-	DLB2_BITS_SET(reg,
-		      port->hist_list_entry_limit - 1,
-		      DLB2_CHP_HIST_LIST_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);
-
-	DLB2_BITS_SET(hl_base, port->hist_list_entry_base,
-		      DLB2_CHP_HIST_LIST_BASE_BASE);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),
-		    hl_base);
-
-	/*
-	 * The inflight limit sets a cap on the number of QEs for which this CQ
-	 * can owe completions at one time.
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, args->cq_history_list_size,
-		      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),
-		      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);
-	DLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		reg = 0;
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_LDB_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	/* Disable the port's QID mappings */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
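
The CQ-depth handling above reduces to two small formulas: the token-depth select is log2(cq_depth) - 2, floored at 1 for depths of 8 or less, and CQs shallower than 8 entries start with 8 - cq_depth tokens pre-counted. A standalone sketch of both, assuming cq_depth has already passed the validity check below:

/* Token depth select: <=8 -> 1, 16 -> 2, ..., 1024 -> 8 (log2(depth) - 2). */
static unsigned cq_token_depth_select(unsigned cq_depth)
{
	unsigned ds = 1;

	while ((8u << (ds - 1)) < cq_depth)
		ds++;
	return ds;
}

/* CQs shallower than 8 entries start with a non-zero token count. */
static unsigned cq_init_token_count(unsigned cq_depth)
{
	return cq_depth < 8 ? 8 - cq_depth : 0;
}
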
-
-static bool
-dlb2_cq_depth_is_valid(u32 depth)
-{
-	if (depth != 1 && depth != 2 &&
-	    depth != 4 && depth != 8 &&
-	    depth != 16 && depth != 32 &&
-	    depth != 64 && depth != 128 &&
-	    depth != 256 && depth != 512 &&
-	    depth != 1024)
-		return false;
-
-	return true;
-}
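
The explicit depth list above is just "power of two between 1 and 1024"; the same check can be written with the usual bit trick, shown here only as an equivalent form:

#include <stdbool.h>
#include <stdint.h>

/* Equivalent to the explicit list above: power of two in [1, 1024]. */
static bool cq_depth_is_valid_alt(uint32_t depth)
{
	return depth >= 1 && depth <= 1024 && (depth & (depth - 1)) == 0;
}
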
-
-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_ldb_port *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_ldb_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret, i;
-
-	port->hist_list_entry_base = domain->hist_list_entry_base +
-				     domain->hist_list_entry_offset;
-	port->hist_list_entry_limit = port->hist_list_entry_base +
-				      args->cq_history_list_size;
-
-	domain->hist_list_entry_offset += args->cq_history_list_size;
-	domain->avail_hist_list_entries -= args->cq_history_list_size;
-
-	ret = dlb2_ldb_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-	if (ret)
-		return ret;
-
-	dlb2_ldb_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_ldb_port_cq_enable(hw, port);
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)
-		port->qid_map[i].state = DLB2_QUEUE_UNMAPPED;
-	port->num_mappings = 0;
-
-	port->enabled = true;
-
-	port->configured = true;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_ldb_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create load-balanced port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ hist list size:         %d\n",
-		    args->cq_history_list_size);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-	DLB2_HW_DBG(hw, "\tCoS ID:                    %u\n", args->cos_id);
-	DLB2_HW_DBG(hw, "\tStrict CoS allocation:     %u\n",
-		    args->cos_strict);
-}
-
-static int
-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_ldb_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_ldb_port **out_port,
-				 int *out_cos_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int i, id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->cos_id >= DLB2_NUM_COS_DOMAINS) {
-		resp->status = DLB2_ST_INVALID_COS_ID;
-		return -EINVAL;
-	}
-
-	if (args->cos_strict) {
-		id = args->cos_id;
-		port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-					  typeof(*port));
-	} else {
-		for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-			id = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;
-
-			port = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],
-						  typeof(*port));
-			if (port)
-				break;
-		}
-	}
-
-	if (!port) {
-		resp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	/* The history list size must be >= 1 */
-	if (!args->cq_history_list_size) {
-		resp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;
-		return -EINVAL;
-	}
-
-	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
-		resp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = port;
-	*out_cos_id = id;
-
-	return 0;
-}
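
When cos_strict is not set, the port search above probes the classes of service round-robin starting from the requested one: (cos_id + i) % DLB2_NUM_COS_DOMAINS. A standalone sketch of that probing order over a simplified availability array (hypothetical helper; 4 classes assumed here in place of DLB2_NUM_COS_DOMAINS):

#include <stdbool.h>

#define NUM_COS 4 /* stands in for DLB2_NUM_COS_DOMAINS */

/*
 * Return the first class of service, starting at 'cos_id' and wrapping,
 * that still has a free port; -1 if none. With 'strict', only 'cos_id'
 * itself is considered.
 */
static int pick_cos(const bool has_free_port[NUM_COS], unsigned cos_id, bool strict)
{
	unsigned i;

	if (strict)
		return has_free_port[cos_id] ? (int)cos_id : -1;

	for (i = 0; i < NUM_COS; i++) {
		unsigned id = (cos_id + i) % NUM_COS;

		if (has_free_port[id])
			return (int)id;
	}
	return -1;
}
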
-
-/**
- * dlb2_hw_create_ldb_port() - create a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a load-balanced port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_ldb_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-	int ret, cos_id;
-
-	dlb2_log_create_ldb_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_ldb_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port,
-					       &cos_id);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_ldb_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list.
-	 */
-	dlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);
-
-	dlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
-static void
-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,
-			      u32 domain_id,
-			      uintptr_t cq_dma_base,
-			      struct dlb2_create_dir_port_args *args,
-			      bool vdev_req,
-			      unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed port arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID:                 %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tCQ depth:                  %d\n",
-		    args->cq_depth);
-	DLB2_HW_DBG(hw, "\tCQ base address:           0x%lx\n",
-		    cq_dma_base);
-}
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-	}
-
-	return NULL;
-}
-
-static int
-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,
-				 u32 domain_id,
-				 uintptr_t cq_dma_base,
-				 struct dlb2_create_dir_port_args *args,
-				 struct dlb2_cmd_response *resp,
-				 bool vdev_req,
-				 unsigned int vdev_id,
-				 struct dlb2_hw_domain **out_domain,
-				 struct dlb2_dir_pq_pair **out_port)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	if (args->queue_id != -1) {
-		/*
-		 * If the user claims the queue is already configured, validate
-		 * the queue ID, its domain, and whether the queue is
-		 * configured.
-		 */
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->queue_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->queue_configured) {
-			resp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the port's queue is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	/* Check cache-line alignment */
-	if ((cq_dma_base & 0x3F) != 0) {
-		resp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;
-		return -EINVAL;
-	}
-
-	if (!dlb2_cq_depth_is_valid(args->cq_depth)) {
-		resp->status = DLB2_ST_INVALID_CQ_DEPTH;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_port = pq;
-
-	return 0;
-}
-
-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,
-				       struct dlb2_hw_domain *domain,
-				       struct dlb2_dir_pq_pair *port,
-				       bool vdev_req,
-				       unsigned int vdev_id)
-{
-	u32 reg = 0;
-
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);
-
-	if (vdev_req) {
-		unsigned int offs;
-		u32 virt_id;
-
-		/*
-		 * DLB uses producer port address bits 17:12 to determine the
-		 * producer port ID. In Scalable IOV mode, PP accesses come
-		 * through the PF MMIO window for the physical producer port,
-		 * so for translation purposes the virtual and physical port
-		 * IDs are equal.
-		 */
-		if (hw->virt_mode == DLB2_VIRT_SRIOV)
-			virt_id = port->id.virt_id;
-		else
-			virt_id = port->id.phys_id;
-
-		reg = 0;
-		DLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);
-		offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);
-}
-
-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,
-				      struct dlb2_hw_domain *domain,
-				      struct dlb2_dir_pq_pair *port,
-				      uintptr_t cq_dma_base,
-				      struct dlb2_create_dir_port_args *args,
-				      bool vdev_req,
-				      unsigned int vdev_id)
-{
-	u32 reg = 0;
-	u32 ds = 0;
-
-	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
-	DLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);
-
-	reg = cq_dma_base >> 32;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);
-
-	/*
-	 * 'ro' == relaxed ordering. This setting allows DLB2 to write
-	 * cache lines out-of-order (but QEs within a cache line are always
-	 * updated in-order).
-	 */
-	reg = 0;
-	DLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);
-	DLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),
-		 DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);
-
-	if (args->cq_depth <= 8) {
-		ds = 1;
-	} else if (args->cq_depth == 16) {
-		ds = 2;
-	} else if (args->cq_depth == 32) {
-		ds = 3;
-	} else if (args->cq_depth == 64) {
-		ds = 4;
-	} else if (args->cq_depth == 128) {
-		ds = 5;
-	} else if (args->cq_depth == 256) {
-		ds = 6;
-	} else if (args->cq_depth == 512) {
-		ds = 7;
-	} else if (args->cq_depth == 1024) {
-		ds = 8;
-	} else {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: invalid CQ depth\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),
-		    reg);
-
-	/*
-	 * To support CQs with depth less than 8, program the token count
-	 * register with a non-zero initial value. Operations such as domain
-	 * reset must take this initial value into account when quiescing the
-	 * CQ.
-	 */
-	port->init_tkn_cnt = 0;
-
-	if (args->cq_depth < 8) {
-		reg = 0;
-		port->init_tkn_cnt = 8 - args->cq_depth;
-
-		DLB2_BITS_SET(reg, port->init_tkn_cnt,
-			      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    reg);
-	} else {
-		DLB2_CSR_WR(hw,
-			    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),
-			    DLB2_LSP_CQ_DIR_TKN_CNT_RST);
-	}
-
-	reg = 0;
-	DLB2_BITS_SET(reg, ds,
-		      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,
-						      port->id.phys_id),
-		    reg);
-
-	/* Reset the CQ write pointer */
-	DLB2_CSR_WR(hw,
-		    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),
-		    DLB2_CHP_DIR_CQ_WPTR_RST);
-
-	/* Virtualize the PPID */
-	reg = 0;
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);
-
-	/*
-	 * Address translation (AT) settings: 0: untranslated, 2: translated
-	 * (see ATS spec regarding Address Type field for more details)
-	 */
-	if (hw->ver == DLB2_HW_V2) {
-		reg = 0;
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);
-	}
-
-	if (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {
-		DLB2_BITS_SET(reg, hw->pasid[vdev_id],
-			      DLB2_SYS_DIR_CQ_PASID_PASID);
-		DLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);
-	}
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);
-	DLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);
-
-	return 0;
-}
-
-static int dlb2_configure_dir_port(struct dlb2_hw *hw,
-				   struct dlb2_hw_domain *domain,
-				   struct dlb2_dir_pq_pair *port,
-				   uintptr_t cq_dma_base,
-				   struct dlb2_create_dir_port_args *args,
-				   bool vdev_req,
-				   unsigned int vdev_id)
-{
-	int ret;
-
-	ret = dlb2_dir_port_configure_cq(hw,
-					 domain,
-					 port,
-					 cq_dma_base,
-					 args,
-					 vdev_req,
-					 vdev_id);
-
-	if (ret)
-		return ret;
-
-	dlb2_dir_port_configure_pp(hw,
-				   domain,
-				   port,
-				   vdev_req,
-				   vdev_id);
-
-	dlb2_dir_port_cq_enable(hw, port);
-
-	port->enabled = true;
-
-	port->port_configured = true;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_port() - create a directed port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: port creation arguments.
- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed port.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the port ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
- *	    pointer address is not properly aligned, the domain is not
- *	    configured, or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
-			    u32 domain_id,
-			    struct dlb2_create_dir_port_args *args,
-			    uintptr_t cq_dma_base,
-			    struct dlb2_cmd_response *resp,
-			    bool vdev_req,
-			    unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *port;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_port_args(hw,
-				      domain_id,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_port_args(hw,
-					       domain_id,
-					       cq_dma_base,
-					       args,
-					       resp,
-					       vdev_req,
-					       vdev_id,
-					       &domain,
-					       &port);
-	if (ret)
-		return ret;
-
-	ret = dlb2_configure_dir_port(hw,
-				      domain,
-				      port,
-				      cq_dma_base,
-				      args,
-				      vdev_req,
-				      vdev_id);
-	if (ret)
-		return ret;
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->queue_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
-	}
-
-	resp->status = 0;
-	resp->id = (vdev_req) ? port->id.virt_id : port->id.phys_id;
-
-	return 0;
-}
-
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	unsigned int offs;
-	u32 reg = 0;
-
-	/* QID write permissions are turned on when the domain is started */
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
-
-	/* Don't timestamp QEs that pass through this queue */
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
-
-	reg = 0;
-	DLB2_BITS_SET(reg, args->depth_threshold,
-		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
-		    reg);
-
-	if (vdev_req) {
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-			queue->id.virt_id;
-
-		reg = 0;
-		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
-
-		reg = 0;
-		DLB2_BITS_SET(reg, queue->id.phys_id,
-			      DLB2_SYS_VF_DIR_VQID2QID_QID);
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
-	}
-
-	reg = 0;
-	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id,
-				  struct dlb2_hw_domain **out_domain,
-				  struct dlb2_dir_pq_pair **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_dir_pq_pair *pq;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		pq = dlb2_get_domain_used_dir_pq(hw,
-						 args->port_id,
-						 vdev_req,
-						 domain);
-
-		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
-		    !pq->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	} else {
-		/*
-		 * If the queue's port is not configured, validate that a free
-		 * port-queue pair is available.
-		 */
-		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					typeof(*pq));
-		if (!pq) {
-			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-			return -EINVAL;
-		}
-	}
-
-	*out_domain = domain;
-	*out_queue = pq;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - create a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue creation arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function creates a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the queue ID.
- *
- * resp->id contains a virtual ID if vdev_req is true.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, the domain is not configured,
- *	    or the domain has already been started.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id,
-						&domain,
-						&queue);
-	if (ret)
-		return ret;
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
-static bool
-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
-					   struct dlb2_ldb_queue *queue,
-					   int *slot)
-{
-	int i;
-
-	for (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
-		struct dlb2_ldb_port_qid_map *map = &port->qid_map[i];
-
-		if (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&
-		    map->pending_qid == queue->id.phys_id)
-			break;
-	}
-
-	*slot = i;
-
-	return (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);
-}
-
-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,
-					      struct dlb2_ldb_queue *queue,
-					      struct dlb2_cmd_response *resp)
-{
-	enum dlb2_qid_map_state state;
-	int i;
-
-	/* Unused slot available? */
-	if (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)
-		return 0;
-
-	/*
-	 * If the queue is already mapped (from the application's perspective),
-	 * this is simply a priority update.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &i))
-		return 0;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))
-		return 0;
-
-	/*
-	 * If the slot contains an unmap in progress, it's considered
-	 * available.
-	 */
-	state = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	state = DLB2_QUEUE_UNMAPPED;
-	if (dlb2_port_find_slot(port, state, &i))
-		return 0;
-
-	resp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;
-	return -EINVAL;
-}
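
The availability check above accepts the request if the port still has an unused slot, if the queue is already mapped or mapping (so the request is just a priority update), or if some slot is being unmapped and can be reused. A compressed restatement over a simplified slot-state array (hypothetical types, not the driver's):

#include <stdbool.h>

#define SLOTS_PER_CQ 8 /* stands in for DLB2_MAX_NUM_QIDS_PER_LDB_CQ */

enum slot_state { UNMAPPED, MAPPED, MAP_IN_PROG, UNMAP_IN_PROG, UNMAP_PENDING_MAP };

struct slot { enum slot_state state; unsigned qid; unsigned pending_qid; };

static bool map_slot_available(const struct slot s[SLOTS_PER_CQ],
			       unsigned num_mappings, unsigned qid)
{
	int i;

	if (num_mappings < SLOTS_PER_CQ)
		return true;

	for (i = 0; i < SLOTS_PER_CQ; i++) {
		/* Already targeting this queue: just a priority update. */
		if ((s[i].state == MAPPED || s[i].state == MAP_IN_PROG) &&
		    s[i].qid == qid)
			return true;
		if (s[i].state == UNMAP_PENDING_MAP && s[i].pending_qid == qid)
			return true;
		/* A slot being (or already) unmapped can be reused. */
		if (s[i].state == UNMAP_IN_PROG || s[i].state == UNMAPPED)
			return true;
	}
	return false;
}
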
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-	}
-
-	return NULL;
-}
-
-static struct dlb2_ldb_port *
-dlb2_get_domain_used_ldb_port(u32 id,
-			      bool vdev_req,
-			      struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_port *port;
-	int i;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_PORTS)
-		return NULL;
-
-	for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {
-		DLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-
-		DLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {
-			if ((!vdev_req && port->id.phys_id == id) ||
-			    (vdev_req && port->id.virt_id == id))
-				return port;
-		}
-	}
-
-	return NULL;
-}
-
-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,
-					      struct dlb2_ldb_port *port,
-					      int slot,
-					      struct dlb2_map_qid_args *args)
-{
-	u32 cq2priov;
-
-	/* Read-modify-write the priority and valid bit register */
-	cq2priov = DLB2_CSR_RD(hw,
-			       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));
-
-	cq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &
-		    DLB2_LSP_CQ2PRIOV_V;
-	cq2priov |= ((args->priority & 0x7) << slot * 3) &
-		    DLB2_LSP_CQ2PRIOV_PRIO;
-
-	DLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);
-
-	dlb2_flush_csr(hw);
-
-	port->qid_map[slot].priority = args->priority;
-}
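
The read-modify-write above packs one valid bit per slot plus a 3-bit priority per slot into the CQ2PRIOV register, with each slot's priority in bits [3*slot+2 : 3*slot]. A standalone illustration of that packing; the valid-bit base position here is a placeholder, not the real DLB2_LSP_CQ2PRIOV_V_LOC:

#include <stdint.h>

#define PRIOV_V_LOC 24 /* placeholder for DLB2_LSP_CQ2PRIOV_V_LOC */

/* Set the valid bit for 'slot' and OR in its 3-bit priority, as above. */
static uint32_t cq2priov_set(uint32_t reg, int slot, uint8_t prio)
{
	reg |= 1u << (PRIOV_V_LOC + slot);
	reg |= (uint32_t)(prio & 0x7) << (slot * 3);
	return reg;
}
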
-
-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,
-				    u32 domain_id,
-				    struct dlb2_map_qid_args *args,
-				    struct dlb2_cmd_response *resp,
-				    bool vdev_req,
-				    unsigned int vdev_id,
-				    struct dlb2_hw_domain **out_domain,
-				    struct dlb2_ldb_port **out_port,
-				    struct dlb2_ldb_queue **out_queue)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (args->priority >= DLB2_QID_PRIORITIES) {
-		resp->status = DLB2_ST_INVALID_PRIORITY;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (queue->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-	*out_queue = queue;
-	*out_port = port;
-
-	return 0;
-}
-
-static void dlb2_log_map_qid(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_map_qid_args *args,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 map QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	DLB2_HW_DBG(hw, "\tPriority:  %d\n",
-		    args->priority);
-}
-
-/**
- * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: map QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to schedule QEs from the specified queue
- * to the specified port. Each load-balanced port can be mapped to up to 8
- * queues; each load-balanced queue can potentially map to all the
- * load-balanced ports.
- *
- * A successful return does not necessarily mean the mapping was configured. If
- * this function is unable to immediately map the queue to the port, it will
- * add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. In a sense, this is
- * an asynchronous function.
- *
- * This asynchronicity creates two views of the state of hardware: the actual
- * hardware state and the requested state (as if every request completed
- * immediately). If there are any pending map/unmap operations, the requested
- * state will differ from the actual state. All validation is performed with
- * respect to the pending state; for instance, if there are 8 pending map
- * operations for port X, a request for a 9th will fail because a load-balanced
- * port can only map up to 8 queues.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_map_qid(struct dlb2_hw *hw,
-		    u32 domain_id,
-		    struct dlb2_map_qid_args *args,
-		    struct dlb2_cmd_response *resp,
-		    bool vdev_req,
-		    unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	int ret, i;
-	u8 prio;
-
-	dlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_map_qid_args(hw,
-				       domain_id,
-				       args,
-				       resp,
-				       vdev_req,
-				       vdev_id,
-				       &domain,
-				       &port,
-				       &queue);
-	if (ret)
-		return ret;
-
-	prio = args->priority;
-
-	/*
-	 * If there are any outstanding detach operations for this port,
-	 * attempt to complete them. This may be necessary to free up a QID
-	 * slot for this requested mapping.
-	 */
-	if (port->num_pending_removals)
-		dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	ret = dlb2_verify_map_qid_slot_available(port, queue, resp);
-	if (ret)
-		return ret;
-
-	/* Hardware requires disabling the CQ before mapping QIDs. */
-	if (port->enabled)
-		dlb2_ldb_port_cq_disable(hw, port);
-
-	/*
-	 * If this is only a priority change, don't perform the full QID->CQ
-	 * mapping procedure
-	 */
-	st = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		if (prio != port->qid_map[i].priority) {
-			dlb2_ldb_port_change_qid_priority(hw, port, i, args);
-			DLB2_HW_DBG(hw, "DLB2 map: priority change\n");
-		}
-
-		st = DLB2_QUEUE_MAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on an in-progress mapping, don't
-	 * perform the full QID->CQ mapping procedure.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		port->qid_map[i].priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If this is a priority change on a pending mapping, update the
-	 * pending priority
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		port->qid_map[i].pending_priority = prio;
-
-		DLB2_HW_DBG(hw, "DLB2 map: priority change only\n");
-
-		goto map_qid_done;
-	}
-
-	/*
-	 * If all the CQ's slots are in use, then there's an unmap in progress
-	 * (guaranteed by dlb2_verify_map_qid_slot_available()), so add this
-	 * mapping to pending_map and return. When the removal is completed for
-	 * the slot's current occupant, this mapping will be performed.
-	 */
-	if (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {
-		if (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {
-			enum dlb2_qid_map_state new_st;
-
-			port->qid_map[i].pending_qid = queue->id.phys_id;
-			port->qid_map[i].pending_priority = prio;
-
-			new_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;
-
-			ret = dlb2_port_slot_state_transition(hw, port, queue,
-							      i, new_st);
-			if (ret)
-				return ret;
-
-			DLB2_HW_DBG(hw, "DLB2 map: map pending removal\n");
-
-			goto map_qid_done;
-		}
-	}
-
-	/*
-	 * If the domain has started, a special "dynamic" CQ->queue mapping
-	 * procedure is required in order to safely update the CQ<->QID tables.
-	 * The "static" procedure cannot be used when traffic is flowing,
-	 * because the CQ<->QID tables cannot be updated atomically and the
-	 * scheduler won't see the new mapping unless the queue's if_status
-	 * changes, which isn't guaranteed.
-	 */
-	ret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);
-
-	/* If ret is less than zero, it's due to an internal error */
-	if (ret < 0)
-		return ret;
-
-map_qid_done:
-	if (port->enabled)
-		dlb2_ldb_port_cq_enable(hw, port);
-
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_unmap_qid_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 unmap QID arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n",
-		    domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID:   %d\n",
-		    args->port_id);
-	DLB2_HW_DBG(hw, "\tQueue ID:  %d\n",
-		    args->qid);
-	if (args->qid < DLB2_MAX_NUM_LDB_QUEUES)
-		DLB2_HW_DBG(hw, "\tQueue's num mappings:  %d\n",
-			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
-}
-
-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,
-				      u32 domain_id,
-				      struct dlb2_unmap_qid_args *args,
-				      struct dlb2_cmd_response *resp,
-				      bool vdev_req,
-				      unsigned int vdev_id,
-				      struct dlb2_hw_domain **out_domain,
-				      struct dlb2_ldb_port **out_port,
-				      struct dlb2_ldb_queue **out_queue)
-{
-	enum dlb2_qid_map_state state;
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	struct dlb2_ldb_port *port;
-	int slot;
-	int id;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	id = args->port_id;
-
-	port = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);
-
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	if (port->domain_id.phys_id != domain->id.phys_id) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);
-
-	if (!queue || !queue->configured) {
-		DLB2_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
-			    __func__, args->qid);
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	/*
-	 * Verify that the port has the queue mapped. From the application's
-	 * perspective a queue is mapped if it is actually mapped, the map is
-	 * in progress, or the map is blocked pending an unmap.
-	 */
-	state = DLB2_QUEUE_MAPPED;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	state = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, state, queue, &slot))
-		goto done;
-
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))
-		goto done;
-
-	resp->status = DLB2_ST_INVALID_QID;
-	return -EINVAL;
-
-done:
-	*out_domain = domain;
-	*out_port = port;
-	*out_queue = queue;
-
-	return 0;
-}
-
-/**
- * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: unmap QID arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function configures the DLB to stop scheduling QEs from the specified
- * queue to the specified port.
- *
- * A successful return does not necessarily mean the mapping was removed. If
- * this function is unable to immediately unmap the queue from the port, it
- * will add the requested operation to a per-port list of pending map/unmap
- * operations, and (if it's not already running) launch a kernel thread that
- * periodically attempts to process all pending operations. See
- * dlb2_hw_map_qid() for more details.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
- *	    the domain is not configured.
- * EFAULT - Internal error (resp->status not set).
- */
-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
-		      u32 domain_id,
-		      struct dlb2_unmap_qid_args *args,
-		      struct dlb2_cmd_response *resp,
-		      bool vdev_req,
-		      unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-	enum dlb2_qid_map_state st;
-	struct dlb2_ldb_port *port;
-	bool unmap_complete;
-	int i, ret;
-
-	dlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_unmap_qid_args(hw,
-					 domain_id,
-					 args,
-					 resp,
-					 vdev_req,
-					 vdev_id,
-					 &domain,
-					 &port,
-					 &queue);
-	if (ret)
-		return ret;
-
-	/*
-	 * If the queue hasn't been mapped yet, we need to update the slot's
-	 * state and re-enable the queue's inflights.
-	 */
-	st = DLB2_QUEUE_MAP_IN_PROG;
-	if (dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		/*
-		 * Since the in-progress map was aborted, re-enable the QID's
-		 * inflights.
-		 */
-		if (queue->num_pending_additions == 0)
-			dlb2_ldb_queue_set_inflight_limit(hw, queue);
-
-		st = DLB2_QUEUE_UNMAPPED;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	/*
-	 * If the queue mapping is on hold pending an unmap, we simply need to
-	 * update the slot's state.
-	 */
-	if (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {
-		st = DLB2_QUEUE_UNMAP_IN_PROG;
-		ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-		if (ret)
-			return ret;
-
-		goto unmap_qid_done;
-	}
-
-	st = DLB2_QUEUE_MAPPED;
-	if (!dlb2_port_find_slot_queue(port, st, queue, &i)) {
-		DLB2_HW_ERR(hw,
-			    "[%s()] Internal error: no available CQ slots\n",
-			    __func__);
-		return -EFAULT;
-	}
-
-	/*
-	 * QID->CQ mapping removal is an asynchronous procedure. It requires
-	 * stopping the DLB2 from scheduling this CQ, draining all inflights
-	 * from the CQ, then unmapping the queue from the CQ. This function
-	 * simply marks the port as needing the queue unmapped, and (if
-	 * necessary) starts the unmapping worker thread.
-	 */
-	dlb2_ldb_port_cq_disable(hw, port);
-
-	st = DLB2_QUEUE_UNMAP_IN_PROG;
-	ret = dlb2_port_slot_state_transition(hw, port, queue, i, st);
-	if (ret)
-		return ret;
-
-	/*
-	 * Attempt to finish the unmapping now, in case the port has no
-	 * outstanding inflights. If that's not the case, this will fail and
-	 * the unmapping will be completed at a later time.
-	 */
-	unmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);
-
-	/*
-	 * If the unmapping couldn't complete immediately, launch the worker
-	 * thread (if it isn't already launched) to finish it later.
-	 */
-	if (!unmap_complete && !os_worker_active(hw))
-		os_schedule_work(hw);
-
-unmap_qid_done:
-	resp->status = 0;
-
-	return 0;
-}
-
-static void
-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,
-				  struct dlb2_pending_port_unmaps_args *args,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB unmaps in progress arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-/**
- * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
- *	progress.
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: number of unmaps in progress args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the number of unmaps in progress.
- *
- * Errors:
- * EINVAL - Invalid port ID.
- */
-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_pending_port_unmaps_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_port *port;
-
-	dlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	port = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);
-	if (!port || !port->configured) {
-		resp->status = DLB2_ST_INVALID_PORT_ID;
-		return -EINVAL;
-	}
-
-	resp->id = port->num_pending_removals;
-
-	return 0;
-}
-
-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 struct dlb2_cmd_response *resp,
-					 bool vdev_req,
-					 unsigned int vdev_id,
-					 struct dlb2_hw_domain **out_domain)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	*out_domain = domain;
-
-	return 0;
-}
-
-static void dlb2_log_start_domain(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 start domain arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-}
-
-/**
- * dlb2_hw_start_domain() - start a scheduling domain
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @arg: start domain arguments.
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function starts a scheduling domain, which allows applications to send
- * traffic through it. Once a domain is started, its resources can no longer be
- * configured (besides QID remapping and port enable/disable).
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error.
- *
- * Errors:
- * EINVAL - the domain is not configured, or the domain is already started.
- */
-int
-dlb2_hw_start_domain(struct dlb2_hw *hw,
-		     u32 domain_id,
-		     struct dlb2_start_domain_args *args,
-		     struct dlb2_cmd_response *resp,
-		     bool vdev_req,
-		     unsigned int vdev_id)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *dir_queue;
-	struct dlb2_ldb_queue *ldb_queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-	RTE_SET_USED(args);
-	RTE_SET_USED(iter);
-
-	dlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);
-
-	ret = dlb2_verify_start_domain_args(hw,
-					    domain_id,
-					    resp,
-					    vdev_req,
-					    vdev_id,
-					    &domain);
-	if (ret)
-		return ret;
-
-	/*
-	 * Enable load-balanced and directed queue write permissions for the
-	 * queues this domain owns. Without this, the DLB2 will drop all
-	 * incoming traffic to those queues.
-	 */
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +
-			ldb_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);
-	}
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
-		u32 vasqid_v = 0;
-		unsigned int offs;
-
-		DLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);
-
-		offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +
-			dir_queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);
-	}
-
-	dlb2_flush_csr(hw);
-
-	domain->started = true;
-
-	resp->status = 0;
-
-	return 0;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a directed queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-/**
- * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
- * @hw: dlb2_hw handle for a particular device.
- * @domain_id: domain ID.
- * @args: queue depth args
- * @resp: response structure.
- * @vdev_req: indicates whether this request came from a vdev.
- * @vdev_id: If vdev_req is true, this contains the vdev's ID.
- *
- * This function returns the depth of a load-balanced queue.
- *
- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
- * device.
- *
- * Return:
- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
- * assigned a detailed error code from enum dlb2_error. If successful, resp->id
- * contains the depth.
- *
- * Errors:
- * EINVAL - Invalid domain ID or queue ID.
- */
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (!domain) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (!queue) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
-
-/**
- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding unmap procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue unmap jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_unmap_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_finish_map_qid_procedures() - finish any pending map procedures
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function attempts to finish any outstanding map procedures.
- * This function should be called by the kernel thread responsible for
- * finishing map/unmap procedures.
- *
- * Return:
- * Returns the number of procedures that weren't completed.
- */
-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
-{
-	int i, num = 0;
-
-	/* Finish queue map jobs for any domain that needs it */
-	for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
-		struct dlb2_hw_domain *domain = &hw->domains[i];
-
-		num += dlb2_domain_finish_map_qid_procedures(hw, domain);
-	}
-
-	return num;
-}
-
-/**
- * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
-
-/**
- * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
- *	ports.
- * @hw: dlb2_hw handle for a particular device.
- *
- * This function must be called prior to configuring scheduling domains.
- */
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	u32 ctrl;
-
-	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	DLB2_BIT_SET(ctrl,
-		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
-}
-
-/**
- * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the configured number of sequence numbers per queue
- * for the specified group.
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-/**
- * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- *
- * This function returns the group's number of in-use slots (i.e. load-balanced
- * queues using the specified group).
- *
- * Return:
- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
- */
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						u32 group_id,
-						u32 val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue:    %u\n", val);
-}
-
-/**
- * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
- * @hw: dlb2_hw handle for a particular device.
- * @group_id: sequence number group ID.
- * @val: requested amount of sequence numbers per queue.
- *
- * This function configures the group's number of sequence numbers per queue.
- * val can be a power-of-two between 32 and 1024, inclusive. This setting can
- * be configured until the first ordered load-balanced queue is configured, at
- * which point the configuration is locked.
- *
- * Return:
- * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
- * ordered queue is configured.
- */
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    u32 group_id,
-				    u32 val)
-{
-	const u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	struct dlb2_sn_group *group;
-	u32 sn_mode = 0;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_0);
-	DLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,
-		 DLB2_RO_GRP_SN_MODE_SN_MODE_1);
-
-	DLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread
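
The doc comments in the removed code above describe the asynchronous map/unmap model: a request that cannot be completed immediately is queued per port and finished later by a worker, and dlb2_hw_pending_port_unmaps() reports how many unmaps are still outstanding. As a minimal sketch (not part of this patch set, and assuming the caller already has a valid hw handle, domain ID and port ID), a caller could drain pending unmaps like this:

/*
 * Illustration only: wait until a port's pending unmap operations have
 * completed. A real caller would sleep or yield between polls rather
 * than spin.
 */
static int wait_for_pending_unmaps(struct dlb2_hw *hw, u32 domain_id,
				   u32 port_id)
{
	struct dlb2_pending_port_unmaps_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.port_id = port_id;

	do {
		ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &args,
						  &resp, false, 0);
		if (ret)
			return ret;
	} while (resp.id != 0);

	return 0;
}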

* [dpdk-dev] [PATCH v5 22/26] event/dlb2: use new implementation of HW types header
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (20 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 21/26] event/dlb2: use new implementation of resource file McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 23/26] event/dlb2: use new combined register map McDaniel, Timothy
                       ` (4 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

As support for DLB v2.5 was added, modifications were made to
dlb2_hw_types_new.h, but the old file needed to be preserved during
the port in order to meet the requirement that individual patches in
a series each compile successfully. Now that the DLB v2.5 support
is completely integrated, it is safe to remove the old (original)
file, as well as the DLB2_USE_NEW_HEADERS define that controlled
which version of the file was included in certain source files,
and to rename the new file and use it unconditionally in all DLB
source files.
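
One detail worth noting in the header that survives the rename:
struct dlb2_hw_domain keeps its credit accounting in an anonymous
union, so DLB v2.0 code sees the split num_ldb_credits and
num_dir_credits pools while DLB v2.5 code sees the single combined
num_credits pool. A minimal sketch of version-dependent access
(illustration only; the DLB2_HW_V2_5 identifier is assumed to be the
v2.5 counterpart of the DLB2_HW_V2 value used by the PMD):

static u32 domain_total_credits(struct dlb2_hw *hw,
				struct dlb2_hw_domain *domain)
{
	/* DLB v2.0 splits credits into LDB and DIR pools */
	if (hw->ver == DLB2_HW_V2)
		return domain->num_ldb_credits + domain->num_dir_credits;

	/* DLB v2.5 uses one combined pool */
	return domain->num_credits;
}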

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h    |  38 +-
 .../event/dlb2/pf/base/dlb2_hw_types_new.h    | 357 ------------------
 drivers/event/dlb2/pf/base/dlb2_resource.c    |   4 +-
 drivers/event/dlb2/pf/dlb2_main.c             |   4 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   4 -
 drivers/event/dlb2/pf/dlb2_pf.c               |   4 +-
 6 files changed, 33 insertions(+), 378 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index b007e1674..4a6037775 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -2,14 +2,21 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#ifndef __DLB2_HW_TYPES_H
-#define __DLB2_HW_TYPES_H
+#ifndef __DLB2_HW_TYPES_NEW_H
+#define __DLB2_HW_TYPES_NEW_H
 
 #include "../../dlb2_priv.h"
 #include "dlb2_user.h"
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
+#include "dlb2_regs_new.h"
+
+#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
+				 | (((val) << (mask##_LOC)) & (mask)))
+#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
+#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
+#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
 
 #define DLB2_MAX_NUM_VDEVS			16
 #define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
@@ -141,7 +148,7 @@ struct dlb2_dir_pq_pair {
 };
 
 enum dlb2_qid_map_state {
-	/* The slot doesn't contain a valid queue mapping */
+	/* The slot does not contain a valid queue mapping */
 	DLB2_QUEUE_UNMAPPED,
 	/* The slot contains a valid queue mapping */
 	DLB2_QUEUE_MAPPED,
@@ -174,6 +181,7 @@ struct dlb2_ldb_port {
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_limit;
 	u32 ref_cnt;
+	u8 cq_depth;
 	u8 init_tkn_cnt;
 	u8 num_pending_removals;
 	u8 num_mappings;
@@ -245,8 +253,15 @@ struct dlb2_hw_domain {
 	u32 avail_hist_list_entries;
 	u32 hist_list_entry_base;
 	u32 hist_list_entry_offset;
-	u32 num_ldb_credits;
-	u32 num_dir_credits;
+	union {
+		struct {
+			u32 num_ldb_credits;
+			u32 num_dir_credits;
+		};
+		struct {
+			u32 num_credits;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u32 num_used_aqed_entries;
 	struct dlb2_resource_id id;
@@ -269,8 +284,15 @@ struct dlb2_function_resources {
 	u32 num_avail_ldb_queues;
 	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
 	u32 num_avail_dir_pq_pairs;
-	u32 num_avail_qed_entries;
-	u32 num_avail_dqed_entries;
+	union {
+		struct {
+			u32 num_avail_qed_entries;
+			u32 num_avail_dqed_entries;
+		};
+		struct {
+			u32 num_avail_entries;
+		};
+	};
 	u32 num_avail_aqed_entries;
 	u8 locked; /* (VDEV only) */
 };
@@ -332,4 +354,4 @@ struct dlb2_hw {
 	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
 };
 
-#endif /* __DLB2_HW_TYPES_H */
+#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h b/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
deleted file mode 100644
index 4a6037775..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types_new.h
+++ /dev/null
@@ -1,357 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_HW_TYPES_NEW_H
-#define __DLB2_HW_TYPES_NEW_H
-
-#include "../../dlb2_priv.h"
-#include "dlb2_user.h"
-
-#include "dlb2_osdep_list.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
-
-#define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
-				 | (((val) << (mask##_LOC)) & (mask)))
-#define DLB2_BITS_CLR(x, mask)	(x &= ~(mask))
-#define DLB2_BIT_SET(x, mask)	((x) |= (mask))
-#define DLB2_BITS_GET(x, mask)	(((x) & (mask)) >> (mask##_LOC))
-
-#define DLB2_MAX_NUM_VDEVS			16
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_NUM_ARB_WEIGHTS			8
-#define DLB2_MAX_NUM_AQED_ENTRIES		2048
-#define DLB2_MAX_WEIGHT				255
-#define DLB2_NUM_COS_DOMAINS			4
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS	2
-#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES	5
-#define DLB2_MAX_CQ_COMP_CHECK_LOOPS		409600
-#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS		(32 * 64 * 1024 * (800 / 30))
-
-#define DLB2_FUNC_BAR				0
-#define DLB2_CSR_BAR				2
-
-#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
-#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
-
-#define PCI_DEVICE_ID_INTEL_DLB2_5_PF 0x2714
-#define PCI_DEVICE_ID_INTEL_DLB2_5_VF 0x2715
-
-#define DLB2_ALARM_HW_SOURCE_SYS 0
-#define DLB2_ALARM_HW_SOURCE_DLB 1
-
-#define DLB2_ALARM_HW_UNIT_CHP 4
-
-#define DLB2_ALARM_SYS_AID_ILLEGAL_QID		3
-#define DLB2_ALARM_SYS_AID_DISABLED_QID		4
-#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW		5
-#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ	1
-#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
-
-/*
- * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
- * the PF driver.
- */
-#define DLB2_DRV_LDB_PP_BASE   0x2300000
-#define DLB2_DRV_LDB_PP_STRIDE 0x1000
-#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
-				DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_DRV_DIR_PP_BASE   0x2200000
-#define DLB2_DRV_DIR_PP_STRIDE 0x1000
-#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
-				DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
-#define DLB2_LDB_PP_BASE       0x2100000
-#define DLB2_LDB_PP_STRIDE     0x1000
-#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
-				DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
-#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
-#define DLB2_DIR_PP_BASE       0x2000000
-#define DLB2_DIR_PP_STRIDE     0x1000
-#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
-				DLB2_DIR_PP_STRIDE * \
-				DLB2_MAX_NUM_DIR_PORTS_V2_5)
-#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
-
-struct dlb2_resource_id {
-	u32 phys_id;
-	u32 virt_id;
-	u8 vdev_owned;
-	u8 vdev_id;
-};
-
-struct dlb2_freelist {
-	u32 base;
-	u32 bound;
-	u32 offset;
-};
-
-static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
-{
-	return list->bound - list->base - list->offset;
-}
-
-struct dlb2_hcw {
-	u64 data;
-	/* Word 3 */
-	u16 opaque;
-	u8 qid;
-	u8 sched_type:2;
-	u8 priority:3;
-	u8 msg_type:3;
-	/* Word 4 */
-	u16 lock_id;
-	u8 ts_flag:1;
-	u8 rsvd1:2;
-	u8 no_dec:1;
-	u8 cmp_id:4;
-	u8 cq_token:1;
-	u8 qe_comp:1;
-	u8 qe_frag:1;
-	u8 qe_valid:1;
-	u8 int_arm:1;
-	u8 error:1;
-	u8 rsvd:2;
-};
-
-struct dlb2_ldb_queue {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 num_qid_inflights;
-	u32 aqed_limit;
-	u32 sn_group; /* sn == sequence number */
-	u32 sn_slot;
-	u32 num_mappings;
-	u8 sn_cfg_valid;
-	u8 num_pending_additions;
-	u8 owned;
-	u8 configured;
-};
-
-/*
- * Directed ports and queues are paired by nature, so the driver tracks them
- * with a single data structure.
- */
-struct dlb2_dir_pq_pair {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	u32 ref_cnt;
-	u8 init_tkn_cnt;
-	u8 queue_configured;
-	u8 port_configured;
-	u8 owned;
-	u8 enabled;
-};
-
-enum dlb2_qid_map_state {
-	/* The slot does not contain a valid queue mapping */
-	DLB2_QUEUE_UNMAPPED,
-	/* The slot contains a valid queue mapping */
-	DLB2_QUEUE_MAPPED,
-	/* The driver is mapping a queue into this slot */
-	DLB2_QUEUE_MAP_IN_PROG,
-	/* The driver is unmapping a queue from this slot */
-	DLB2_QUEUE_UNMAP_IN_PROG,
-	/*
-	 * The driver is unmapping a queue from this slot, and once complete
-	 * will replace it with another mapping.
-	 */
-	DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
-};
-
-struct dlb2_ldb_port_qid_map {
-	enum dlb2_qid_map_state state;
-	u16 qid;
-	u16 pending_qid;
-	u8 priority;
-	u8 pending_priority;
-};
-
-struct dlb2_ldb_port {
-	struct dlb2_list_entry domain_list;
-	struct dlb2_list_entry func_list;
-	struct dlb2_resource_id id;
-	struct dlb2_resource_id domain_id;
-	/* The qid_map represents the hardware QID mapping state. */
-	struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_limit;
-	u32 ref_cnt;
-	u8 cq_depth;
-	u8 init_tkn_cnt;
-	u8 num_pending_removals;
-	u8 num_mappings;
-	u8 owned;
-	u8 enabled;
-	u8 configured;
-};
-
-struct dlb2_sn_group {
-	u32 mode;
-	u32 sequence_numbers_per_queue;
-	u32 slot_use_bitmap;
-	u32 id;
-};
-
-static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
-{
-	const u32 mask[] = {
-		0x0000ffff,  /* 64 SNs per queue */
-		0x000000ff,  /* 128 SNs per queue */
-		0x0000000f,  /* 256 SNs per queue */
-		0x00000003,  /* 512 SNs per queue */
-		0x00000001}; /* 1024 SNs per queue */
-
-	return group->slot_use_bitmap == mask[group->mode];
-}
-
-static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
-{
-	const u32 bound[] = {16, 8, 4, 2, 1};
-	u32 i;
-
-	for (i = 0; i < bound[group->mode]; i++) {
-		if (!(group->slot_use_bitmap & (1 << i))) {
-			group->slot_use_bitmap |= 1 << i;
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static inline void
-dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
-{
-	group->slot_use_bitmap &= ~(1 << slot);
-}
-
-static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
-{
-	int i, cnt = 0;
-
-	for (i = 0; i < 32; i++)
-		cnt += !!(group->slot_use_bitmap & (1 << i));
-
-	return cnt;
-}
-
-struct dlb2_hw_domain {
-	struct dlb2_function_resources *parent_func;
-	struct dlb2_list_entry func_list;
-	struct dlb2_list_head used_ldb_queues;
-	struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head used_dir_pq_pairs;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	u32 total_hist_list_entries;
-	u32 avail_hist_list_entries;
-	u32 hist_list_entry_base;
-	u32 hist_list_entry_offset;
-	union {
-		struct {
-			u32 num_ldb_credits;
-			u32 num_dir_credits;
-		};
-		struct {
-			u32 num_credits;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u32 num_used_aqed_entries;
-	struct dlb2_resource_id id;
-	int num_pending_removals;
-	int num_pending_additions;
-	u8 configured;
-	u8 started;
-};
-
-struct dlb2_bitmap;
-
-struct dlb2_function_resources {
-	struct dlb2_list_head avail_domains;
-	struct dlb2_list_head used_domains;
-	struct dlb2_list_head avail_ldb_queues;
-	struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	struct dlb2_list_head avail_dir_pq_pairs;
-	struct dlb2_bitmap *avail_hist_list_entries;
-	u32 num_avail_domains;
-	u32 num_avail_ldb_queues;
-	u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
-	u32 num_avail_dir_pq_pairs;
-	union {
-		struct {
-			u32 num_avail_qed_entries;
-			u32 num_avail_dqed_entries;
-		};
-		struct {
-			u32 num_avail_entries;
-		};
-	};
-	u32 num_avail_aqed_entries;
-	u8 locked; /* (VDEV only) */
-};
-
-/*
- * After initialization, each resource in dlb2_hw_resources is located in one
- * of the following lists:
- * -- The PF's available resources list. These are unconfigured resources owned
- *	by the PF and not allocated to a dlb2 scheduling domain.
- * -- A VDEV's available resources list. These are VDEV-owned unconfigured
- *	resources not allocated to a dlb2 scheduling domain.
- * -- A domain's available resources list. These are domain-owned unconfigured
- *	resources.
- * -- A domain's used resources list. These are domain-owned configured
- *	resources.
- *
- * A resource moves to a new list when a VDEV or domain is created or destroyed,
- * or when the resource is configured.
- */
-struct dlb2_hw_resources {
-	struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
-	struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
-	struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS_V2_5];
-	struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
-};
-
-struct dlb2_mbox {
-	u32 *mbox;
-	u32 *isr_in_progress;
-};
-
-struct dlb2_sw_mbox {
-	struct dlb2_mbox vdev_to_pf;
-	struct dlb2_mbox pf_to_vdev;
-	void (*pf_to_vdev_inject)(void *arg);
-	void *pf_to_vdev_inject_arg;
-};
-
-struct dlb2_hw {
-	uint8_t ver;
-
-	/* BAR 0 address */
-	void *csr_kva;
-	unsigned long csr_phys_addr;
-	/* BAR 2 address */
-	void *func_kva;
-	unsigned long func_phys_addr;
-
-	/* Resource tracking */
-	struct dlb2_hw_resources rsrcs;
-	struct dlb2_function_resources pf;
-	struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
-	struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
-	u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
-
-	/* Virtualization */
-	int virt_mode;
-	struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
-	unsigned int pasid[DLB2_MAX_NUM_VDEVS];
-};
-
-#endif /* __DLB2_HW_TYPES_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2f66b2c71..54b0207db 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -2,11 +2,9 @@
  * Copyright(c) 2016-2020 Intel Corporation
  */
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "dlb2_user.h"
 
-#include "dlb2_hw_types_new.h"
+#include "dlb2_hw_types.h"
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index bac07f097..1f6ccf8e4 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,10 +13,8 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "base/dlb2_regs_new.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
 #include "dlb2_main.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 892298d7a..9eeda482a 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -12,11 +12,7 @@
 #include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 
-#ifdef DLB2_USE_NEW_HEADERS
-#include "base/dlb2_hw_types_new.h"
-#else
 #include "base/dlb2_hw_types.h"
-#endif
 #include "../dlb2_user.h"
 
 #define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 880964a29..f57dc1584 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -32,13 +32,11 @@
 #include <rte_memory.h>
 #include <rte_string_fns.h>
 
-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */
-
 #include "../dlb2_priv.h"
 #include "../dlb2_iface.h"
 #include "../dlb2_inline_fns.h"
 #include "dlb2_main.h"
-#include "base/dlb2_hw_types_new.h"
+#include "base/dlb2_hw_types.h"
 #include "base/dlb2_osdep.h"
 #include "base/dlb2_resource.h"
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 23/26] event/dlb2: use new combined register map
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (21 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 22/26] event/dlb2: use new implementation of HW types header McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 24/26] event/dlb2: update xstats for v2.5 McDaniel, Timothy
                       ` (3 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

All references to the old register map have been removed,
so it is safe to rename the new combined file that supports
both DLB v2.0 and DLB v2.5. All places where this file is
included have also been updated.
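
With the combined register map, the per-register bitfield unions are
gone; each register is described by flat address, mask and _LOC
defines, and fields are accessed with the DLB2_BITS_GET/DLB2_BITS_SET
helpers from dlb2_hw_types.h. A minimal sketch of the new access
pattern, using the DLB2_SYS_ALARM_PF_SYND1 defines visible in the
diff below (illustration only, not code from this patch):

/* Illustration only: extract the QID field of the PF alarm syndrome. */
static u32 read_alarm_qid(struct dlb2_hw *hw)
{
	u32 synd1 = DLB2_CSR_RD(hw, DLB2_SYS_ALARM_PF_SYND1);

	return DLB2_BITS_GET(synd1, DLB2_SYS_ALARM_PF_SYND1_QID);
}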

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_hw_types.h |    2 +-
 drivers/event/dlb2/pf/base/dlb2_regs.h     | 5955 +++++++++++++-------
 drivers/event/dlb2/pf/base/dlb2_regs_new.h | 4304 --------------
 drivers/event/dlb2/pf/base/dlb2_resource.c |    2 +-
 drivers/event/dlb2/pf/dlb2_main.c          |    2 +-
 5 files changed, 3869 insertions(+), 6396 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_regs_new.h

diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
index 4a6037775..6b8fee341 100644
--- a/drivers/event/dlb2/pf/base/dlb2_hw_types.h
+++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
@@ -10,7 +10,7 @@
 
 #include "dlb2_osdep_list.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 
 #define DLB2_BITS_SET(x, val, mask)	(x = ((x) & ~(mask))     \
 				 | (((val) << (mask##_LOC)) & (mask)))
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
index 43ecad4f8..7167f3d2f 100644
--- a/drivers/event/dlb2/pf/base/dlb2_regs.h
+++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
@@ -7,553 +7,550 @@
 
 #include "dlb2_osdep_types.h"
 
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
 	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
 	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
 	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
-union dlb2_func_pf_vf2pf_flr_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
+#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
+
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
+#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
+
+#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
 	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
-union dlb2_func_pf_vf2pf_isr_pend {
-	struct {
-		u32 isr_pend : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
+#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
+
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
+#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
+#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
+
+#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
 	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
 	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_pf_pf2vf_mailbox_isr {
-	struct {
-		u32 vf0_isr : 1;
-		u32 vf1_isr : 1;
-		u32 vf2_isr : 1;
-		u32 vf3_isr : 1;
-		u32 vf4_isr : 1;
-		u32 vf5_isr : 1;
-		u32 vf6_isr : 1;
-		u32 vf7_isr : 1;
-		u32 vf8_isr : 1;
-		u32 vf9_isr : 1;
-		u32 vf10_isr : 1;
-		u32 vf11_isr : 1;
-		u32 vf12_isr : 1;
-		u32 vf13_isr : 1;
-		u32 vf14_isr : 1;
-		u32 vf15_isr : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
+#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
+#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
 	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-union dlb2_func_pf_vf_reset_in_progress {
-	struct {
-		u32 vf0_reset_in_progress : 1;
-		u32 vf1_reset_in_progress : 1;
-		u32 vf2_reset_in_progress : 1;
-		u32 vf3_reset_in_progress : 1;
-		u32 vf4_reset_in_progress : 1;
-		u32 vf5_reset_in_progress : 1;
-		u32 vf6_reset_in_progress : 1;
-		u32 vf7_reset_in_progress : 1;
-		u32 vf8_reset_in_progress : 1;
-		u32 vf9_reset_in_progress : 1;
-		u32 vf10_reset_in_progress : 1;
-		u32 vf11_reset_in_progress : 1;
-		u32 vf12_reset_in_progress : 1;
-		u32 vf13_reset_in_progress : 1;
-		u32 vf14_reset_in_progress : 1;
-		u32 vf15_reset_in_progress : 1;
-		u32 rsvd0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
+#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
+#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
+
+#define DLB2_MSIX_VECTOR_CTRL(x) \
 	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
-union dlb2_msix_mem_vector_ctrl {
-	struct {
-		u32 vec_mask : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
+
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
+#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
 
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
 	(0x20 + (x) * 0x4)
 #define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-union dlb2_iosf_func_vf_bar_dsbl {
-	struct {
-		u32 func_vf_bar_dis : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_VAS 0x1000011c
+
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
+#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
+
+#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
+#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
+#define DLB2_SYS_TOTAL_VAS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_TOTAL_VAS : \
+	 DLB2_V2_5SYS_TOTAL_VAS)
 #define DLB2_SYS_TOTAL_VAS_RST 0x20
-union dlb2_sys_total_vas {
-	struct {
-		u32 total_vas : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
-#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
-union dlb2_sys_total_dir_ports {
-	struct {
-		u32 total_dir_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
-#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
-union dlb2_sys_total_ldb_ports {
-	struct {
-		u32 total_ldb_ports : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
-#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
-union dlb2_sys_total_dir_qid {
-	struct {
-		u32 total_dir_qid : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
-#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
-union dlb2_sys_total_ldb_qid {
-	struct {
-		u32 total_ldb_qid : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
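
DLB2_SYS_TOTAL_VAS is one of the registers whose offset moved between v2.0 and v2.5, so the public macro takes the hardware version and resolves to the matching per-version constant. A minimal sketch of a caller, assuming the driver's existing DLB2_CSR_RD() helper and a ver field on struct dlb2_hw:

	/* Sketch only: read the total-VAS count on either hardware version.
	 * DLB2_CSR_RD() and hw->ver are assumed to come from the driver's
	 * existing osdep and hw-types headers.
	 */
	static inline uint32_t dlb2_read_total_vas(struct dlb2_hw *hw)
	{
		return DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));
	}

The same version-keyed pattern is reused below for registers such as DLB2_SYS_LDB_CQ_PASID and DLB2_SYS_DIR_CQ_PASID.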
 
 #define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
 #define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-union dlb2_sys_total_dir_crds {
-	struct {
-		u32 total_dir_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
 
 #define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
 #define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-union dlb2_sys_total_ldb_crds {
-	struct {
-		u32 total_ldb_credits : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
 
 #define DLB2_SYS_ALARM_PF_SYND2 0x10000508
 #define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-union dlb2_sys_alarm_pf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 meas : 1;
-		u32 debug : 7;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 cq_int_rearm : 1;
-		u32 dsi_error : 1;
-		u32 rsvd0 : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
+#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
+#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
+#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
 
 #define DLB2_SYS_ALARM_PF_SYND1 0x10000504
 #define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-union dlb2_sys_alarm_pf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_PF_SYND0 0x10000500
 #define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-union dlb2_sys_alarm_pf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 rsvd0 : 3;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
+#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
+#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
+#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
+#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
+#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
+#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
+#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
+#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
+#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
+#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
 
 #define DLB2_SYS_VF_LDB_VPP_V(x) \
 	(0x10000f00 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-union dlb2_sys_vf_ldb_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VPP2PP(x) \
 	(0x10000f04 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-union dlb2_sys_vf_ldb_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_DIR_VPP_V(x) \
 	(0x10000f08 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-union dlb2_sys_vf_dir_vpp_v {
-	struct {
-		u32 vpp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
+#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VPP2PP(x) \
 	(0x10000f0c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-union dlb2_sys_vf_dir_vpp2pp {
-	struct {
-		u32 pp : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
+#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
 
 #define DLB2_SYS_VF_LDB_VQID_V(x) \
 	(0x10000f10 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-union dlb2_sys_vf_ldb_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_LDB_VQID2QID(x) \
 	(0x10000f14 + (x) * 0x1000)
 #define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-union dlb2_sys_vf_ldb_vqid2qid {
-	struct {
-		u32 qid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_QID2VQID(x) \
 	(0x10000f18 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID2VQID_RST 0x0
-union dlb2_sys_ldb_qid2vqid {
-	struct {
-		u32 vqid : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
+#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
+#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
 
 #define DLB2_SYS_VF_DIR_VQID_V(x) \
 	(0x10000f1c + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-union dlb2_sys_vf_dir_vqid_v {
-	struct {
-		u32 vqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
+#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_VF_DIR_VQID2QID(x) \
 	(0x10000f20 + (x) * 0x1000)
 #define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-union dlb2_sys_vf_dir_vqid2qid {
-	struct {
-		u32 qid : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
+#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_VASQID_V(x) \
 	(0x10000f24 + (x) * 0x1000)
 #define DLB2_SYS_LDB_VASQID_V_RST 0x0
-union dlb2_sys_ldb_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_VASQID_V(x) \
 	(0x10000f28 + (x) * 0x1000)
 #define DLB2_SYS_DIR_VASQID_V_RST 0x0
-union dlb2_sys_dir_vasqid_v {
-	struct {
-		u32 vasqid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
+#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
+#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_ALARM_VF_SYND2(x) \
 	(0x10000f48 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-union dlb2_sys_alarm_vf_synd2 {
-	struct {
-		u32 lock_id : 16;
-		u32 debug : 8;
-		u32 cq_pop : 1;
-		u32 qe_uhl : 1;
-		u32 qe_orsp : 1;
-		u32 qe_valid : 1;
-		u32 isz : 1;
-		u32 dsi_error : 1;
-		u32 dlbrsvd : 2;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
+#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
+#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
+#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
+#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
+#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
+#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
 
 #define DLB2_SYS_ALARM_VF_SYND1(x) \
 	(0x10000f44 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-union dlb2_sys_alarm_vf_synd1 {
-	struct {
-		u32 dsi : 16;
-		u32 qid : 8;
-		u32 qtype : 2;
-		u32 qpri : 3;
-		u32 msg_type : 3;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
+#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
+#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
+#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
+#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
+#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
+#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
 
 #define DLB2_SYS_ALARM_VF_SYND0(x) \
 	(0x10000f40 + (x) * 0x1000)
 #define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-union dlb2_sys_alarm_vf_synd0 {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 vf_synd0_parity : 1;
-		u32 vf_synd1_parity : 1;
-		u32 vf_synd2_parity : 1;
-		u32 is_ldb : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
+#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
+#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
+#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
+#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
+#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
+#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
+#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
+#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
+#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
+#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
+#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
+#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
+#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
+#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
 
 #define DLB2_SYS_LDB_QID_CFG_V(x) \
 	(0x10000f58 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-union dlb2_sys_ldb_qid_cfg_v {
-	struct {
-		u32 sn_cfg_v : 1;
-		u32 fid_cfg_v : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
+#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
+#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
+#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_QID_ITS(x) \
 	(0x10000f54 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_ITS_RST 0x0
-union dlb2_sys_ldb_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_QID_V(x) \
 	(0x10000f50 + (x) * 0x1000)
 #define DLB2_SYS_LDB_QID_V_RST 0x0
-union dlb2_sys_ldb_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
+#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
+#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_ITS(x) \
 	(0x10000f64 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_ITS_RST 0x0
-union dlb2_sys_dir_qid_its {
-	struct {
-		u32 qid_its : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
+#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
+#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_QID_V(x) \
 	(0x10000f60 + (x) * 0x1000)
 #define DLB2_SYS_DIR_QID_V_RST 0x0
-union dlb2_sys_dir_qid_v {
-	struct {
-		u32 qid_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
+#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
+#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_CQ_AI_DATA(x) \
 	(0x10000fa8 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-union dlb2_sys_ldb_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
 	(0x10000fa4 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_ldb_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_LDB_CQ_PASID(x) \
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_LDB_CQ_PASID(x) \
 	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
+	(0x10000f9c + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
 #define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-union dlb2_sys_ldb_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_LDB_CQ_AT(x) \
 	(0x10000f9c + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_AT_RST 0x0
-union dlb2_sys_ldb_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_LDB_CQ_ISR(x) \
 	(0x10000f98 + (x) * 0x1000)
@@ -563,497 +560,891 @@ union dlb2_sys_ldb_cq_at {
 #define DLB2_CQ_ISR_MODE_MSI  1
 #define DLB2_CQ_ISR_MODE_MSIX 2
 #define DLB2_CQ_ISR_MODE_ADI  3
-union dlb2_sys_ldb_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
 	(0x10000f94 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_ldb_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_LDB_PP_V(x) \
 	(0x10000f90 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP_V_RST 0x0
-union dlb2_sys_ldb_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
+#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
+#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_LDB_PP2VDEV(x) \
 	(0x10000f8c + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-union dlb2_sys_ldb_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_LDB_PP2VAS(x) \
 	(0x10000f88 + (x) * 0x1000)
 #define DLB2_SYS_LDB_PP2VAS_RST 0x0
-union dlb2_sys_ldb_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_LDB_CQ_ADDR_U(x) \
 	(0x10000f84 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-union dlb2_sys_ldb_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_LDB_CQ_ADDR_L(x) \
 	(0x10000f80 + (x) * 0x1000)
 #define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-union dlb2_sys_ldb_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
 
 #define DLB2_SYS_DIR_CQ_FMT(x) \
 	(0x10000fec + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-union dlb2_sys_dir_cq_fmt {
-	struct {
-		u32 keep_pf_ppid : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
+#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
+#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_CQ_AI_DATA(x) \
 	(0x10000fe8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-union dlb2_sys_dir_cq_ai_data {
-	struct {
-		u32 cq_ai_data : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
 
 #define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
 	(0x10000fe4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-union dlb2_sys_dir_cq_ai_addr {
-	struct {
-		u32 rsvd1 : 2;
-		u32 cq_ai_addr : 18;
-		u32 rsvd0 : 12;
-	} field;
-	u32 val;
-};
-
-#define DLB2_SYS_DIR_CQ_PASID(x) \
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
+#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
+
+#define DLB2_V2SYS_DIR_CQ_PASID(x) \
 	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
+	(0x10000fdc + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
+	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
 #define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-union dlb2_sys_dir_cq_pasid {
-	struct {
-		u32 pasid : 20;
-		u32 exe_req : 1;
-		u32 priv_req : 1;
-		u32 fmt2 : 1;
-		u32 rsvd0 : 9;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
+#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
+#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
+#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
+#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
+#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
+#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
 
 #define DLB2_SYS_DIR_CQ_AT(x) \
 	(0x10000fdc + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_AT_RST 0x0
-union dlb2_sys_dir_cq_at {
-	struct {
-		u32 cq_at : 2;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
+#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
+#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
 
 #define DLB2_SYS_DIR_CQ_ISR(x) \
 	(0x10000fd8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-union dlb2_sys_dir_cq_isr {
-	struct {
-		u32 vector : 6;
-		u32 vf : 4;
-		u32 en_code : 2;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
+#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
+#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
+#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
+#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
+#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
 
 #define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
 	(0x10000fd4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-union dlb2_sys_dir_cq2vf_pf_ro {
-	struct {
-		u32 vf : 4;
-		u32 is_pf : 1;
-		u32 ro : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
+#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
 
 #define DLB2_SYS_DIR_PP_V(x) \
 	(0x10000fd0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP_V_RST 0x0
-union dlb2_sys_dir_pp_v {
-	struct {
-		u32 pp_v : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
+#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
+#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
+#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
 
 #define DLB2_SYS_DIR_PP2VDEV(x) \
 	(0x10000fcc + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-union dlb2_sys_dir_pp2vdev {
-	struct {
-		u32 vdev : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
+#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
+#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
 
 #define DLB2_SYS_DIR_PP2VAS(x) \
 	(0x10000fc8 + (x) * 0x1000)
 #define DLB2_SYS_DIR_PP2VAS_RST 0x0
-union dlb2_sys_dir_pp2vas {
-	struct {
-		u32 vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
+#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
+#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
 
 #define DLB2_SYS_DIR_CQ_ADDR_U(x) \
 	(0x10000fc4 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-union dlb2_sys_dir_cq_addr_u {
-	struct {
-		u32 addr_u : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
 
 #define DLB2_SYS_DIR_CQ_ADDR_L(x) \
 	(0x10000fc0 + (x) * 0x1000)
 #define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-union dlb2_sys_dir_cq_addr_l {
-	struct {
-		u32 rsvd0 : 6;
-		u32 addr_l : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
+#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
+#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
+#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
+#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_PM_SMON_TMR 0x10003018
+#define DLB2_SYS_PM_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
+#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
+#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_PM_SMON_CFG1 0x10003004
+#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_PM_SMON_CFG0 0x10003000
+#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
+#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
+#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
+#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_SYS_SMON_COMP_MASK1(x) \
+	(0x18002024 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
+
+#define DLB2_SYS_SMON_COMP_MASK0(x) \
+	(0x18002020 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
+
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
+
+#define DLB2_SYS_SMON_MAX_TMR(x) \
+	(0x1800201c + (x) * 0x40)
+#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_SYS_SMON_TMR(x) \
+	(0x18002018 + (x) * 0x40)
+#define DLB2_SYS_SMON_TMR_RST 0x0
+
+#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
+#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
+	(0x18002014 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
+	(0x18002010 + (x) * 0x40)
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE1(x) \
+	(0x1800200c + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE1_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_SYS_SMON_COMPARE0(x) \
+	(0x18002008 + (x) * 0x40)
+#define DLB2_SYS_SMON_COMPARE0_RST 0x0
+
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_SYS_SMON_CFG1(x) \
+	(0x18002004 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG1_RST 0x0
+
+#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
+#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
+#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
+#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
+
+#define DLB2_SYS_SMON_CFG0(x) \
+	(0x18002000 + (x) * 0x40)
+#define DLB2_SYS_SMON_CFG0_RST 0x40000000
+
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
+#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
+#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
+#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
 
 #define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
 #define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-union dlb2_sys_ingress_alarm_enbl {
-	struct {
-		u32 illegal_hcw : 1;
-		u32 illegal_pp : 1;
-		u32 illegal_pasid : 1;
-		u32 illegal_qid : 1;
-		u32 disabled_qid : 1;
-		u32 illegal_ldb_qid_cfg : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
+#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
+#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
+#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
 
 #define DLB2_SYS_MSIX_ACK 0x10000400
 #define DLB2_SYS_MSIX_ACK_RST 0x0
-union dlb2_sys_msix_ack {
-	struct {
-		u32 msix_0_ack : 1;
-		u32 msix_1_ack : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
+#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
+#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
+#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
+#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_PASSTHRU 0x10000404
 #define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-union dlb2_sys_msix_passthru {
-	struct {
-		u32 msix_0_passthru : 1;
-		u32 msix_1_passthru : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
+#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
+#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
 
 #define DLB2_SYS_MSIX_MODE 0x10000408
 #define DLB2_SYS_MSIX_MODE_RST 0x0
 /* MSI-X Modes */
 #define DLB2_MSIX_MODE_PACKED     0
 #define DLB2_MSIX_MODE_COMPRESSED 1
-union dlb2_sys_msix_mode {
-	struct {
-		u32 mode : 1;
-		u32 poll_mode : 1;
-		u32 poll_mask : 1;
-		u32 poll_lock : 1;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
+#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
+#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
+#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
+#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
+
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
+#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
+#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
+#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
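
DLB2_SYS_MSIX_MODE keeps a single offset but carries version-suffixed field masks, since the bit layout itself changed between v2.0 and v2.5. A hedged sketch of how common code might read the packed/compressed mode field either way; the helper name is an assumption, not driver API:

	/* Illustrative only: extract the MSI-X mode field on either version.
	 * Both layouts place the mode in bit 0, but the suffixed masks keep
	 * the lookup explicit.
	 */
	static inline int dlb2_msix_mode_is_compressed(uint32_t reg, int is_v2)
	{
		uint32_t mode;

		if (is_v2)
			mode = (reg & DLB2_SYS_MSIX_MODE_MODE_V2) >>
				DLB2_SYS_MSIX_MODE_MODE_V2_LOC;
		else
			mode = (reg & DLB2_SYS_MSIX_MODE_MODE_V2_5) >>
				DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC;

		return mode == DLB2_MSIX_MODE_COMPRESSED;
	}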
 
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
 #define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
 #define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_dir_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
 #define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_31_0_occ_int_sts {
-	struct {
-		u32 cq_0_occ_int : 1;
-		u32 cq_1_occ_int : 1;
-		u32 cq_2_occ_int : 1;
-		u32 cq_3_occ_int : 1;
-		u32 cq_4_occ_int : 1;
-		u32 cq_5_occ_int : 1;
-		u32 cq_6_occ_int : 1;
-		u32 cq_7_occ_int : 1;
-		u32 cq_8_occ_int : 1;
-		u32 cq_9_occ_int : 1;
-		u32 cq_10_occ_int : 1;
-		u32 cq_11_occ_int : 1;
-		u32 cq_12_occ_int : 1;
-		u32 cq_13_occ_int : 1;
-		u32 cq_14_occ_int : 1;
-		u32 cq_15_occ_int : 1;
-		u32 cq_16_occ_int : 1;
-		u32 cq_17_occ_int : 1;
-		u32 cq_18_occ_int : 1;
-		u32 cq_19_occ_int : 1;
-		u32 cq_20_occ_int : 1;
-		u32 cq_21_occ_int : 1;
-		u32 cq_22_occ_int : 1;
-		u32 cq_23_occ_int : 1;
-		u32 cq_24_occ_int : 1;
-		u32 cq_25_occ_int : 1;
-		u32 cq_26_occ_int : 1;
-		u32 cq_27_occ_int : 1;
-		u32 cq_28_occ_int : 1;
-		u32 cq_29_occ_int : 1;
-		u32 cq_30_occ_int : 1;
-		u32 cq_31_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
 
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
 #define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-union dlb2_sys_ldb_cq_63_32_occ_int_sts {
-	struct {
-		u32 cq_32_occ_int : 1;
-		u32 cq_33_occ_int : 1;
-		u32 cq_34_occ_int : 1;
-		u32 cq_35_occ_int : 1;
-		u32 cq_36_occ_int : 1;
-		u32 cq_37_occ_int : 1;
-		u32 cq_38_occ_int : 1;
-		u32 cq_39_occ_int : 1;
-		u32 cq_40_occ_int : 1;
-		u32 cq_41_occ_int : 1;
-		u32 cq_42_occ_int : 1;
-		u32 cq_43_occ_int : 1;
-		u32 cq_44_occ_int : 1;
-		u32 cq_45_occ_int : 1;
-		u32 cq_46_occ_int : 1;
-		u32 cq_47_occ_int : 1;
-		u32 cq_48_occ_int : 1;
-		u32 cq_49_occ_int : 1;
-		u32 cq_50_occ_int : 1;
-		u32 cq_51_occ_int : 1;
-		u32 cq_52_occ_int : 1;
-		u32 cq_53_occ_int : 1;
-		u32 cq_54_occ_int : 1;
-		u32 cq_55_occ_int : 1;
-		u32 cq_56_occ_int : 1;
-		u32 cq_57_occ_int : 1;
-		u32 cq_58_occ_int : 1;
-		u32 cq_59_occ_int : 1;
-		u32 cq_60_occ_int : 1;
-		u32 cq_61_occ_int : 1;
-		u32 cq_62_occ_int : 1;
-		u32 cq_63_occ_int : 1;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
+#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
 
 #define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
 #define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-union dlb2_sys_dir_cq_opt_clr {
-	struct {
-		u32 cq : 6;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
+
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
+#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
 
 #define DLB2_SYS_ALARM_HW_SYND 0x1000050c
 #define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-union dlb2_sys_alarm_hw_synd {
-	struct {
-		u32 syndrome : 8;
-		u32 rtype : 2;
-		u32 alarm : 1;
-		u32 cwd : 1;
-		u32 vf_pf_mb : 1;
-		u32 rsvd0 : 1;
-		u32 cls : 2;
-		u32 aid : 6;
-		u32 unit : 4;
-		u32 source : 4;
-		u32 more : 1;
-		u32 valid : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
+
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
+#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
+#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
+#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
+#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
+#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
+#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
+#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
+#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
+#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
+#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
+#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
+#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
+#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
+#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
+#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
+#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
+#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
+#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
+#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
+
+#define DLB2_AQED_QID_FID_LIM(x) \
 	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
-union dlb2_aqed_pipe_qid_fid_lim {
-	struct {
-		u32 qid_fid_limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
+#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
+
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
+#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
+#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
+#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
+
+#define DLB2_AQED_QID_HID_WIDTH(x) \
 	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
-union dlb2_aqed_pipe_qid_hid_width {
-	struct {
-		u32 compress_code : 3;
-		u32 rsvd0 : 29;
-	} field;
-	u32 val;
-};
-
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
+
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
+#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
+#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
+
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
+#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
+
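+/*
+ * AQED SMON counter/compare/configuration/timer registers. The same SMON
+ * register layout recurs below for the ATM and CHP blocks, differing only
+ * in base address.
+ */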
+#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE0 0x2c000054
+#define DLB2_AQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_AQED_SMON_COMPARE1 0x2c000058
+#define DLB2_AQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_AQED_SMON_CFG0 0x2c00005c
+#define DLB2_AQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_AQED_SMON_CFG1 0x2c000060
+#define DLB2_AQED_SMON_CFG1_RST 0x0
+
+#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
+#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_AQED_SMON_TMR 0x2c000068
+#define DLB2_AQED_SMON_TMR_RST 0x0
+
+#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_ATM_QID2CQIDIX_00(x) \
 	(0x30080000 + (x) * 0x1000)
@@ -1061,1467 +1452,2853 @@ union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
 #define DLB2_ATM_QID2CQIDIX(x, y) \
 	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
 #define DLB2_ATM_QID2CQIDIX_NUM 16
-union dlb2_atm_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
 #define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_rdy_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
 
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
 #define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-union dlb2_atm_cfg_arb_weights_sched_bin {
-	struct {
-		u32 bin0 : 8;
-		u32 bin1 : 8;
-		u32 bin2 : 8;
-		u32 bin3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
+#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE0 0x3c000058
+#define DLB2_ATM_SMON_COMPARE0_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
+#define DLB2_ATM_SMON_COMPARE1_RST 0x0
+
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_ATM_SMON_CFG0 0x3c000060
+#define DLB2_ATM_SMON_CFG0_RST 0x40000000
+
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_ATM_SMON_CFG1 0x3c000064
+#define DLB2_ATM_SMON_CFG1_RST 0x0
+
+#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
+#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
+#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
+#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_ATM_SMON_TMR 0x3c00006c
+#define DLB2_ATM_SMON_TMR_RST 0x0
+
+#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
 	(0x40000000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_dir_vas_crd {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
+#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
 
 #define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
 	(0x40080000 + (x) * 0x1000)
 #define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-union dlb2_chp_cfg_ldb_vas_crd {
-	struct {
-		u32 count : 15;
-		u32 rsvd0 : 17;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN(x) \
+
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
+
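+/*
+ * Several CHP registers sit at different offsets on DLB v2.0 and v2.5.
+ * Each such register gets a DLB2_V2* and a DLB2_V2_5* address macro plus
+ * a selector macro that takes the hardware version, e.g. (illustrative):
+ *   addr = DLB2_CHP_ORD_QID_SN(hw_ver, qid);
+ * where hw_ver == DLB2_HW_V2 picks the v2.0 offset and any other value
+ * picks the v2.5 offset. The field masks and offsets are shared when the
+ * register layout is identical on both versions.
+ */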
+#define DLB2_V2CHP_ORD_QID_SN(x) \
 	(0x40100000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN(x) \
+	(0x40080000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN(x))
 #define DLB2_CHP_ORD_QID_SN_RST 0x0
-union dlb2_chp_ord_qid_sn {
-	struct {
-		u32 sn : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_ORD_QID_SN_MAP(x) \
+
+#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
+#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
+#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
+
+#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
 	(0x40180000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
+	(0x40100000 + (x) * 0x1000)
+#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
+	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
 #define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-union dlb2_chp_ord_qid_sn_map {
-	struct {
-		u32 mode : 3;
-		u32 slot : 4;
-		u32 rsvz0 : 1;
-		u32 grp : 1;
-		u32 rsvz1 : 1;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_SN_CHK_ENBL(x) \
+
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
+#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
+#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
+#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
+#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
+
+#define DLB2_V2CHP_SN_CHK_ENBL(x) \
 	(0x40200000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
+	(0x40180000 + (x) * 0x1000)
+#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
+	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
 #define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-union dlb2_chp_sn_chk_enbl {
-	struct {
-		u32 en : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_DEPTH(x) \
+
+#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
+#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
+#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
 	(0x40280000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
+	(0x40300000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
 #define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-union dlb2_chp_dir_cq_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
 	(0x40300000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0x40380000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_dir_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
 	(0x40380000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
+	(0x40400000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
 #define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-union dlb2_chp_dir_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
 	(0x40480000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
+	(0x40500000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
 #define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_dir_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
 	(0x40500000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0x40580000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_dir_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
 	(0x40580000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
+	(0x40600000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
 #define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-union dlb2_chp_dir_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_WPTR(x) \
+
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
 	(0x40600000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
+	(0x40680000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
 #define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-union dlb2_chp_dir_cq_wptr {
-	struct {
-		u32 write_pointer : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ2VAS(x) \
+
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
+#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
+
+#define DLB2_V2CHP_DIR_CQ2VAS(x) \
 	(0x40680000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
+	(0x40700000 + (x) * 0x1000)
+#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
 #define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-union dlb2_chp_dir_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_BASE(x) \
+
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
+
+#define DLB2_V2CHP_HIST_LIST_BASE(x) \
 	(0x40700000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
+	(0x40780000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
 #define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-union dlb2_chp_hist_list_base {
-	struct {
-		u32 base : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_LIM(x) \
+
+#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
+#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_LIM(x) \
 	(0x40780000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
+	(0x40800000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
 #define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-union dlb2_chp_hist_list_lim {
-	struct {
-		u32 limit : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
+#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
+#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
+
+#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
 	(0x40800000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
+	(0x40880000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
 #define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-union dlb2_chp_hist_list_pop_ptr {
-	struct {
-		u32 pop_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
+
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
+
+#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
 	(0x40880000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
+	(0x40900000 + (x) * 0x1000)
+#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
+	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
 #define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-union dlb2_chp_hist_list_push_ptr {
-	struct {
-		u32 push_ptr : 13;
-		u32 generation : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_DEPTH(x) \
+
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
+#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
+
+#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
 	(0x40900000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
+	(0x40a80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
 #define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-union dlb2_chp_ldb_cq_depth {
-	struct {
-		u32 depth : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
+#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
+#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
+
+#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
 	(0x40980000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0x40b00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
 #define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-union dlb2_chp_ldb_cq_int_depth_thrsh {
-	struct {
-		u32 depth_threshold : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
 	(0x40a00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
+	(0x40b80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
 #define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-union dlb2_chp_ldb_cq_int_enb {
-	struct {
-		u32 en_tim : 1;
-		u32 en_depth : 1;
-		u32 rsvd0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
+
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
+#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
+#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
+
+#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
 	(0x40b00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
+	(0x40c80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
 #define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-union dlb2_chp_ldb_cq_tmr_thrsh {
-	struct {
-		u32 thrsh_0 : 1;
-		u32 thrsh_13_1 : 13;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
+#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
+
+#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
 	(0x40b80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0x40d00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
 #define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-union dlb2_chp_ldb_cq_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 rsvd0 : 28;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
+
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
+#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
+
+#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
 	(0x40c00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
+	(0x40d80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
 #define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-union dlb2_chp_ldb_cq_wd_enb {
-	struct {
-		u32 wd_enable : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_WPTR(x) \
+
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
+#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
+#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
+
+#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
 	(0x40c80000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
+	(0x40e00000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
+	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
 #define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-union dlb2_chp_ldb_cq_wptr {
-	struct {
-		u32 write_pointer : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ2VAS(x) \
+
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
+#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
+#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
+
+#define DLB2_V2CHP_LDB_CQ2VAS(x) \
 	(0x40d00000 + (x) * 0x1000)
+#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
+	(0x40e80000 + (x) * 0x1000)
+#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
+	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
 #define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-union dlb2_chp_ldb_cq2vas {
-	struct {
-		u32 cq2vas : 5;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
+#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
+#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
 
 #define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
 #define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-union dlb2_chp_cfg_chp_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 dlb_cor_alarm_enable : 1;
-		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
-		u32 cfg_64bytes_qe_dir_cq_mode : 1;
-		u32 pad_write_ldb : 1;
-		u32 pad_write_dir : 1;
-		u32 pad_first_write_ldb : 1;
-		u32 pad_first_write_dir : 1;
-		u32 rsvz0 : 9;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
+#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_dir_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
+#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
 #define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_dir_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
+#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_dir_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
+
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
+#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
 #define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-union dlb2_chp_cfg_dir_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
+
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
+#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
+#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
 #define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-union dlb2_chp_cfg_dir_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
+
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
+#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
 #define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_dir_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
+#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_dir_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
+#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
 #define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_dir_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed0 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
+
+#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
+#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
+	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
 #define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-union dlb2_chp_ldb_cq_intr_armed1 {
-	struct {
-		u32 armed : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
+#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
+#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
+	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
 #define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-union dlb2_chp_cfg_ldb_cq_timer_ctl {
-	struct {
-		u32 sample_interval : 8;
-		u32 enb : 1;
-		u32 rsvz0 : 23;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
+
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
+#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
+#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
 #define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_0 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
+
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
+#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
+#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
 #define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-union dlb2_chp_cfg_ldb_wdto_1 {
-	struct {
-		u32 wdto : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable0 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
+#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
 #define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-union dlb2_chp_cfg_ldb_wd_disable1 {
-	struct {
-		u32 wd_disable : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
+
+#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
+#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
 #define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-union dlb2_chp_cfg_ldb_wd_enb_interval {
-	struct {
-		u32 sample_interval : 28;
-		u32 enb : 1;
-		u32 rsvz0 : 3;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
+#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
+
+#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
+#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
+	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
 #define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-union dlb2_chp_cfg_ldb_wd_threshold {
-	struct {
-		u32 wd_threshold : 8;
-		u32 rsvz0 : 24;
-	} field;
-	u32 val;
-};
+
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
+#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
+
+#define DLB2_CHP_SMON_COMPARE0 0x4c000000
+#define DLB2_CHP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_CHP_SMON_COMPARE1 0x4c000004
+#define DLB2_CHP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_CHP_SMON_CFG0 0x4c000008
+#define DLB2_CHP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_CHP_SMON_CFG1 0x4c00000c
+#define DLB2_CHP_SMON_CFG1_RST 0x0
+
+#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
+#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_CHP_SMON_TMR 0x4c00001c
+#define DLB2_CHP_SMON_TMR_RST 0x0
+
+#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
 
 #define DLB2_CHP_CTRL_DIAG_02 0x4c000028
 #define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-union dlb2_chp_ctrl_diag_02 {
-	struct {
-		u32 egress_credit_status_empty : 1;
-		u32 egress_credit_status_afull : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
-		u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
-		u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
-		u32 chp_lsp_tok_pipe_credit_status_empty : 1;
-		u32 chp_lsp_tok_pipe_credit_status_afull : 1;
-		u32 chp_rop_pipe_credit_status_empty : 1;
-		u32 chp_rop_pipe_credit_status_afull : 1;
-		u32 qed_to_cq_pipe_credit_status_empty : 1;
-		u32 qed_to_cq_pipe_credit_status_afull : 1;
-		u32 egress_lsp_token_credit_status_empty : 1;
-		u32 egress_lsp_token_credit_status_afull : 1;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
+
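+/*
+ * CHP_CTRL_DIAG_02 keeps the same address on both hardware versions, but
+ * its field layout differs, so the masks and bit offsets below carry
+ * explicit _V2 and _V2_5 suffixes.
+ */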
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
+
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  0x04
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  0x08
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
+#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
+#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
+#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
+#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
+#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
+#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
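
As an editor's illustration (not from the patch): each removed union bitfield is replaced by a MASK plus _LOC pair, so field access becomes an explicit mask-and-shift. A minimal sketch of the resulting read-modify-write, assuming the driver's existing DLB2_CSR_RD()/DLB2_CSR_WR() helpers and struct dlb2_hw handle; the helper name and its pri1_weight argument are hypothetical:

	/* Update the PRI1 arbitration weight using the new flat defines. */
	static void
	dlb2_set_dir_pri1_weight(struct dlb2_hw *hw, u32 pri1_weight)
	{
		u32 r = DLB2_CSR_RD(hw, DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0);

		/* Clear the field with its mask, then shift the new value in
		 * with the matching _LOC define; the old code would have
		 * assigned r.field.pri1 on the union instead.
		 */
		r &= ~DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1;
		r |= (pri1_weight << DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC) &
		     DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1;

		DLB2_CSR_WR(hw, DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0, r);
	}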
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
 
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
 #define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
+
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
 
 #define DLB2_DP_DIR_CSR_CTRL 0x54000010
 #define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-union dlb2_dp_dir_csr_ctrl {
-	struct {
-		u32 int_cor_alarm_dis : 1;
-		u32 int_cor_synd_dis : 1;
-		u32 int_uncr_alarm_dis : 1;
-		u32 int_unc_synd_dis : 1;
-		u32 int_inf0_alarm_dis : 1;
-		u32 int_inf0_synd_dis : 1;
-		u32 int_inf1_alarm_dis : 1;
-		u32 int_inf1_synd_dis : 1;
-		u32 int_inf2_alarm_dis : 1;
-		u32 int_inf2_synd_dis : 1;
-		u32 int_inf3_alarm_dis : 1;
-		u32 int_inf3_synd_dis : 1;
-		u32 int_inf4_alarm_dis : 1;
-		u32 int_inf4_synd_dis : 1;
-		u32 int_inf5_alarm_dis : 1;
-		u32 int_inf5_synd_dis : 1;
-		u32 rsvz0 : 16;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
-	struct {
-		u32 pri0 : 8;
-		u32 pri1 : 8;
-		u32 pri2 : 8;
-		u32 pri3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
+
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
+#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
+#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
+#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
+#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
+#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
+#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DP_SMON_COMPARE0 0x5c000060
+#define DLB2_DP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DP_SMON_COMPARE1 0x5c000064
+#define DLB2_DP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DP_SMON_CFG0 0x5c000068
+#define DLB2_DP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_DP_SMON_CFG1 0x5c00006c
+#define DLB2_DP_SMON_CFG1_RST 0x0
+
+#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DP_SMON_MAX_TMR 0x5c000070
+#define DLB2_DP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DP_SMON_TMR 0x5c000074
+#define DLB2_DP_SMON_TMR_RST 0x0
+
+#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
+#define DLB2_DQED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_DQED_SMON_COMPARE1 0x6c000030
+#define DLB2_DQED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_DQED_SMON_CFG0 0x6c000034
+#define DLB2_DQED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_DQED_SMON_CFG1 0x6c000038
+#define DLB2_DQED_SMON_CFG1_RST 0x0
+
+#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
+#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_DQED_SMON_TMR 0x6c000040
+#define DLB2_DQED_SMON_TMR_RST 0x0
+
+#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
+#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
+#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_QED_SMON_COMPARE0 0x7c00002c
+#define DLB2_QED_SMON_COMPARE0_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_QED_SMON_COMPARE1 0x7c000030
+#define DLB2_QED_SMON_COMPARE1_RST 0x0
+
+#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_QED_SMON_CFG0 0x7c000034
+#define DLB2_QED_SMON_CFG0_RST 0x40000000
+
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_QED_SMON_CFG1 0x7c000038
+#define DLB2_QED_SMON_CFG1_RST 0x0
+
+#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
+#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
+#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
+#define DLB2_QED_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_QED_SMON_TMR 0x7c000040
+#define DLB2_QED_SMON_TMR_RST 0x0
+
+#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_QED_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
+
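For illustration only: registers whose base address moved between the two devices get a v2.0 define, a v2.5 define, and a version-parameterized selector macro. A sketch of a call site resolving the address at run time, assuming hw->ver holds the DLB2_HW_V2/DLB2_HW_V2_5 value set at probe time; the helper name is hypothetical:

	/* Program the ATQ arbitration weights on either hardware version;
	 * the (ver) macro expands to 0x84000000 on v2.0 and 0x74000000 on
	 * v2.5.
	 */
	static void
	dlb2_set_atq_arb_weights(struct dlb2_hw *hw, u32 weights)
	{
		DLB2_CSR_WR(hw,
			    DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(hw->ver),
			    weights);
	}
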
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
+
+#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
+#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
+	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
+
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
+#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
+#define DLB2_NALB_SMON_COMPARE0_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_NALB_SMON_COMPARE1 0x8c000070
+#define DLB2_NALB_SMON_COMPARE1_RST 0x0
+
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_NALB_SMON_CFG0 0x8c000074
+#define DLB2_NALB_SMON_CFG0_RST 0x40000000
+
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_NALB_SMON_CFG1 0x8c000078
+#define DLB2_NALB_SMON_CFG1_RST 0x0
+
+#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
+#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
+#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
+#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_NALB_SMON_TMR 0x8c000080
+#define DLB2_NALB_SMON_TMR_RST 0x0
+
+#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
 	(0x96000000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_0_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
+#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
+	(0x86000000 + (x) * 0x4)
+#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
+#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
 	(0x96010000 + (x) * 0x4)
-#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
-union dlb2_ro_pipe_grp_1_slt_shft {
-	struct {
-		u32 change : 10;
-		u32 rsvd0 : 22;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
-#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
-union dlb2_ro_pipe_grp_sn_mode {
-	struct {
-		u32 sn_mode_0 : 3;
-		u32 rszv0 : 5;
-		u32 sn_mode_1 : 3;
-		u32 rszv1 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_ro_pipe_cfg_ctrl_general_0 {
-	struct {
-		u32 unit_single_step_mode : 1;
-		u32 rr_en : 1;
-		u32 rszv0 : 30;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2PRIOV(x) \
+#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
+	(0x86010000 + (x) * 0x4)
+#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
+	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
+#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
+
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
+#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
+#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
+
+#define DLB2_V2RO_GRP_SN_MODE 0x94000000
+#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
+#define DLB2_RO_GRP_SN_MODE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_GRP_SN_MODE : \
+	 DLB2_V2_5RO_GRP_SN_MODE)
+#define DLB2_RO_GRP_SN_MODE_RST 0x0
+
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
+#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
+#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
+#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
+#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
+#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
+
+#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
+#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
+
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
+#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
+#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
+#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
+#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_RO_SMON_COMPARE0 0x9c000038
+#define DLB2_RO_SMON_COMPARE0_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_RO_SMON_COMPARE1 0x9c00003c
+#define DLB2_RO_SMON_COMPARE1_RST 0x0
+
+#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_RO_SMON_CFG0 0x9c000040
+#define DLB2_RO_SMON_CFG0_RST 0x40000000
+
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
+#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
+#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
+#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
+#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
+#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
+#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
+#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
+#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
+#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
+#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
+#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
+
+#define DLB2_RO_SMON_CFG1 0x9c000044
+#define DLB2_RO_SMON_CFG1_RST 0x0
+
+#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
+#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
+#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_RO_SMON_MAX_TMR 0x9c000048
+#define DLB2_RO_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_RO_SMON_TMR 0x9c00004c
+#define DLB2_RO_SMON_TMR_RST 0x0
+
+#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_RO_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2LSP_CQ2PRIOV(x) \
 	(0xa0000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2PRIOV(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2PRIOV(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2PRIOV(x) : \
+	 DLB2_V2_5LSP_CQ2PRIOV(x))
 #define DLB2_LSP_CQ2PRIOV_RST 0x0
-union dlb2_lsp_cq2priov {
-	struct {
-		u32 prio : 24;
-		u32 v : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID0(x) \
+
+#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
+#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
+#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
+#define DLB2_LSP_CQ2PRIOV_V_LOC	24
+
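For illustration only: per-CQ registers now take the hardware version as well as the CQ index, and keep a single mask set where the field layout is identical on both devices. A sketch of a hypothetical accessor for the priority field, under the same DLB2_CSR_RD() assumption:

	/* Return the priority bits of a load-balanced CQ on either device. */
	static u32
	dlb2_cq_prio(struct dlb2_hw *hw, int cq)
	{
		u32 r = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, cq));

		return (r & DLB2_LSP_CQ2PRIOV_PRIO) >> DLB2_LSP_CQ2PRIOV_PRIO_LOC;
	}
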
+#define DLB2_V2LSP_CQ2QID0(x) \
 	(0xa0080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID0(x) \
+	(0x90080000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID0(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID0(x) : \
+	 DLB2_V2_5LSP_CQ2QID0(x))
 #define DLB2_LSP_CQ2QID0_RST 0x0
-union dlb2_lsp_cq2qid0 {
-	struct {
-		u32 qid_p0 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p1 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p2 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p3 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ2QID1(x) \
+
+#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
+#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
+#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
+#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
+#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
+#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
+#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
+#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
+#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ2QID1(x) \
 	(0xa0100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ2QID1(x) \
+	(0x90100000 + (x) * 0x1000)
+#define DLB2_LSP_CQ2QID1(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ2QID1(x) : \
+	 DLB2_V2_5LSP_CQ2QID1(x))
 #define DLB2_LSP_CQ2QID1_RST 0x0
-union dlb2_lsp_cq2qid1 {
-	struct {
-		u32 qid_p4 : 7;
-		u32 rsvd3 : 1;
-		u32 qid_p5 : 7;
-		u32 rsvd2 : 1;
-		u32 qid_p6 : 7;
-		u32 rsvd1 : 1;
-		u32 qid_p7 : 7;
-		u32 rsvd0 : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_DSBL(x) \
+
+#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
+#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
+#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
+#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
+#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
+#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
+#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
+#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
+#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
+#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
+#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
+#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
+#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
+#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
+#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
+#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
+
+#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
 	(0xa0180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
+	(0x90180000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
 #define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-union dlb2_lsp_cq_dir_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
 	(0xa0200000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
+	(0x90200000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
 #define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_dir_tkn_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
 	(0xa0280000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x90280000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
 #define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
-	struct {
-		u32 token_depth_select : 4;
-		u32 disable_wb_opt : 1;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 26;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
+
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
+#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
 	(0xa0300000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x90300000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
 	(0xa0380000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x90380000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_dir_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_DSBL(x) \
+
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
 	(0xa0400000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
+	(0x90400000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
 #define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-union dlb2_lsp_cq_ldb_dsbl {
-	struct {
-		u32 disabled : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
+#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
 	(0xa0480000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
+	(0x90480000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
 #define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
 	(0xa0500000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
+	(0x90500000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
 #define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_cq_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
+
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
 	(0xa0580000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
+	(0x90600000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
 #define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_cnt {
-	struct {
-		u32 token_count : 11;
-		u32 rsvd0 : 21;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
+#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
+
+#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
 	(0xa0600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x90680000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
 #define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-union dlb2_lsp_cq_ldb_tkn_depth_sel {
-	struct {
-		u32 token_depth_select : 4;
-		u32 ignore_depth : 1;
-		u32 rsvd0 : 27;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
+
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
+#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
 	(0xa0680000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x90700000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
 	(0xa0700000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x90780000 + (x) * 0x1000)
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
+	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
 #define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-union dlb2_lsp_cq_ldb_tot_sch_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
+
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
 	(0xa0780000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
+	(0x90800000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
 #define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_dir_max_depth {
-	struct {
-		u32 depth : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
 	(0xa0800000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
+	(0x90880000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
 	(0xa0880000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
+	(0x90900000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_dir_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
 	(0xa0900000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x90980000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_dir_enqueue_cnt {
-	struct {
-		u32 count : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
 	(0xa0980000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
+	(0x90a00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_dir_depth_thrsh {
-	struct {
-		u32 thresh : 13;
-		u32 rsvd0 : 19;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
+
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
 	(0xa0a00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x90b80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-union dlb2_lsp_qid_aqed_active_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
 	(0xa0a80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x90c00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
+	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
 #define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-union dlb2_lsp_qid_aqed_active_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
 	(0xa0b00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
+	(0x90c80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
 	(0xa0b80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
+	(0x90d00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_atm_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
-	(0xa0c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_atq_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
+
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
 	(0xa0c80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x90e00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
 #define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_enqueue_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
+
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
 	(0xa0d00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
+	(0x90e80000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
 #define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_infl_cnt {
-	struct {
-		u32 count : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
 	(0xa0d80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
+	(0x90f00000 + (x) * 0x1000)
+#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
+	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
 #define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-union dlb2_lsp_qid_ldb_infl_lim {
-	struct {
-		u32 limit : 12;
-		u32 rsvd0 : 20;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX_00(x) \
+
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
+#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
+#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
+
+#define DLB2_V2LSP_QID2CQIDIX_00(x) \
 	(0xa0e00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
+	(0x90f80000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
 #define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX_NUM 16
-union dlb2_lsp_qid2cqidix_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID2CQIDIX2_00(x) \
+
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
 	(0xa1600000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
+	(0x91780000 + (x) * 0x1000)
+#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
+	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
 #define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
+#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
+	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
 #define DLB2_LSP_QID2CQIDIX2_NUM 16
-union dlb2_lsp_qid2cqidix2_00 {
-	struct {
-		u32 cq_p0 : 8;
-		u32 cq_p1 : 8;
-		u32 cq_p2 : 8;
-		u32 cq_p3 : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
-	(0xa1e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
-union dlb2_lsp_qid_ldb_replay_cnt {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
+
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
+#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
+
+#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
 	(0xa1f00000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
+	(0x92080000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
 #define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-union dlb2_lsp_qid_naldb_max_depth {
-	struct {
-		u32 depth : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
+#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
 	(0xa1f80000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
+	(0x92100000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cntl {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
 	(0xa2000000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
+	(0x92180000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
 #define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-union dlb2_lsp_qid_naldb_tot_enq_cnth {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
+#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
+
+#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
 	(0xa2080000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
+	(0x92200000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_atm_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
+
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
 	(0xa2100000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
+	(0x92280000 + (x) * 0x1000)
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
+	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
 #define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-union dlb2_lsp_qid_naldb_depth_thrsh {
-	struct {
-		u32 thresh : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_QID_ATM_ACTIVE(x) \
+
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
+#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
+
+#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
 	(0xa2180000 + (x) * 0x1000)
+#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
+	(0x92300000 + (x) * 0x1000)
+#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
+	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
 #define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-union dlb2_lsp_qid_atm_active {
-	struct {
-		u32 count : 14;
-		u32 rsvd0 : 18;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
+#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
+#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
-	struct {
-		u32 pri0_weight : 8;
-		u32 pri1_weight : 8;
-		u32 pri2_weight : 8;
-		u32 pri3_weight : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
+
+#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
+#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
+	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
 #define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
-	struct {
-		u32 rsvz0 : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
+
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
+#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
+
+#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
+#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
+#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCHED_CTRL : \
+	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
 #define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-union dlb2_lsp_ldb_sched_ctrl {
-	struct {
-		u32 cq : 8;
-		u32 qidix : 3;
-		u32 value : 1;
-		u32 nalb_haswork_v : 1;
-		u32 rlist_haswork_v : 1;
-		u32 slist_haswork_v : 1;
-		u32 inflight_ok_v : 1;
-		u32 aqed_nfull_v : 1;
-		u32 rsvz0 : 15;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
+
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
+#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
+#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
+#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
+#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
+#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
+#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
+#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
+#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
+#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
+
+#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
+#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
+#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_L : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
 #define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-union dlb2_lsp_dir_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
+
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
+#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
+#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_DIR_SCH_CNT_H : \
+	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
 #define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-union dlb2_lsp_dir_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
+
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
+#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
+#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_L : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
 #define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_l {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
+
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
+
+#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
+#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
+#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_LDB_SCH_CNT_H : \
+	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
 #define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-union dlb2_lsp_ldb_sch_cnt_h {
-	struct {
-		u32 count : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
+
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
+#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
+
+#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
+#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
+#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_CTRL : \
+	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
 #define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-union dlb2_lsp_cfg_shdw_ctrl {
-	struct {
-		u32 transfer : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
+
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
+#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
+#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
+
+#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
 	(0xa4000074 + (x) * 4)
+#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
+	(0x94000074 + (x) * 4)
+#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
+	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
 #define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-union dlb2_lsp_cfg_shdw_range_cos {
-	struct {
-		u32 bw_range : 9;
-		u32 rsvz0 : 22;
-		u32 no_extra_credit : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
+
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
+#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
+
+#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
+#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
+	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
 #define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-union dlb2_lsp_cfg_ctrl_general_0 {
-	struct {
-		u32 disab_atq_empty_arb : 1;
-		u32 inc_tok_unit_idle : 1;
-		u32 disab_rlist_pri : 1;
-		u32 inc_cmp_unit_idle : 1;
-		u32 rsvz0 : 2;
-		u32 dir_single_op : 1;
-		u32 dir_half_bw : 1;
-		u32 dir_single_out : 1;
-		u32 dir_disab_multi : 1;
-		u32 atq_single_op : 1;
-		u32 atq_half_bw : 1;
-		u32 atq_single_out : 1;
-		u32 atq_disab_multi : 1;
-		u32 dirrpl_single_op : 1;
-		u32 dirrpl_half_bw : 1;
-		u32 dirrpl_single_out : 1;
-		u32 lbrpl_single_op : 1;
-		u32 lbrpl_half_bw : 1;
-		u32 lbrpl_single_out : 1;
-		u32 ldb_single_op : 1;
-		u32 ldb_half_bw : 1;
-		u32 ldb_disab_multi : 1;
-		u32 atm_single_sch : 1;
-		u32 atm_single_cmp : 1;
-		u32 ldb_ce_tog_arb : 1;
-		u32 rsvz1 : 1;
-		u32 smon0_valid_sel : 2;
-		u32 smon0_value_sel : 1;
-		u32 smon0_compare_sel : 2;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
-#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
-union dlb2_cfg_mstr_diag_reset_sts {
-	struct {
-		u32 chp_pf_reset_done : 1;
-		u32 rop_pf_reset_done : 1;
-		u32 lsp_pf_reset_done : 1;
-		u32 nalb_pf_reset_done : 1;
-		u32 ap_pf_reset_done : 1;
-		u32 dp_pf_reset_done : 1;
-		u32 qed_pf_reset_done : 1;
-		u32 dqed_pf_reset_done : 1;
-		u32 aqed_pf_reset_done : 1;
-		u32 sys_pf_reset_done : 1;
-		u32 pf_reset_active : 1;
-		u32 flrsm_state : 7;
-		u32 rsvd0 : 13;
-		u32 dlb_proc_reset_done : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
-	struct {
-		u32 chp_pipeidle : 1;
-		u32 rop_pipeidle : 1;
-		u32 lsp_pipeidle : 1;
-		u32 nalb_pipeidle : 1;
-		u32 ap_pipeidle : 1;
-		u32 dp_pipeidle : 1;
-		u32 qed_pipeidle : 1;
-		u32 dqed_pipeidle : 1;
-		u32 aqed_pipeidle : 1;
-		u32 sys_pipeidle : 1;
-		u32 chp_unit_idle : 1;
-		u32 rop_unit_idle : 1;
-		u32 lsp_unit_idle : 1;
-		u32 nalb_unit_idle : 1;
-		u32 ap_unit_idle : 1;
-		u32 dp_unit_idle : 1;
-		u32 qed_unit_idle : 1;
-		u32 dqed_unit_idle : 1;
-		u32 aqed_unit_idle : 1;
-		u32 sys_unit_idle : 1;
-		u32 rsvd1 : 4;
-		u32 mstr_cfg_ring_idle : 1;
-		u32 mstr_cfg_mstr_idle : 1;
-		u32 mstr_flr_clkreq_b : 1;
-		u32 mstr_proc_idle : 1;
-		u32 mstr_proc_idle_masked : 1;
-		u32 rsvd0 : 2;
-		u32 dlb_func_idle : 1;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
-#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
-union dlb2_cfg_mstr_cfg_pm_status {
-	struct {
-		u32 prochot : 1;
-		u32 pgcb_dlb_idle : 1;
-		u32 pgcb_dlb_pg_rdy_ack_b : 1;
-		u32 pmsm_pgcb_req_b : 1;
-		u32 pgbc_pmc_pg_req_b : 1;
-		u32 pmc_pgcb_pg_ack_b : 1;
-		u32 pmc_pgcb_fet_en_b : 1;
-		u32 pgcb_fet_en_b : 1;
-		u32 rsvz0 : 1;
-		u32 rsvz1 : 1;
-		u32 fuse_force_on : 1;
-		u32 fuse_proc_disable : 1;
-		u32 rsvz2 : 1;
-		u32 rsvz3 : 1;
-		u32 pm_fsm_d0tod3_ok : 1;
-		u32 pm_fsm_d3tod0_ok : 1;
-		u32 dlb_in_d3 : 1;
-		u32 rsvz4 : 7;
-		u32 pmsm : 8;
-	} field;
-	u32 val;
-};
-
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
-union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
-	struct {
-		u32 disable : 1;
-		u32 rsvz0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
+
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
+#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
+
+#define DLB2_LSP_SMON_COMPARE0 0xac000048
+#define DLB2_LSP_SMON_COMPARE0_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
+
+#define DLB2_LSP_SMON_COMPARE1 0xac00004c
+#define DLB2_LSP_SMON_COMPARE1_RST 0x0
+
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
+#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
+
+#define DLB2_LSP_SMON_CFG0 0xac000050
+#define DLB2_LSP_SMON_CFG0_RST 0x40000000
+
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
+#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
+#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
+#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
+#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
+#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
+#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
+#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
+#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
+#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
+#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
+#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
+#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
+#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
+#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
+#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
+#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
+#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
+#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
+#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
+#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
+#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
+
+#define DLB2_LSP_SMON_CFG1 0xac000054
+#define DLB2_LSP_SMON_CFG1_RST 0x0
+
+#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
+#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
+#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
+#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
+#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
+#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
+
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
+#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
+
+#define DLB2_LSP_SMON_MAX_TMR 0xac000060
+#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
+#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
+
+#define DLB2_LSP_SMON_TMR 0xac000064
+#define DLB2_LSP_SMON_TMR_RST 0x0
+
+#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
+#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
+
+#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
+#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
+#define DLB2_CM_DIAG_RESET_STS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_DIAG_RESET_STS : \
+	 DLB2_V2_5CM_DIAG_RESET_STS)
+#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
+
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
+#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
+#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
+#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
+#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
+#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
+#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
+#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
+#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
+#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
+#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
+#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
+#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
+#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
+#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
+#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
+
+#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
+#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
+	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
+
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
+#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
+
+#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
+#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
+#define DLB2_CM_CFG_PM_STATUS(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_STATUS : \
+	 DLB2_V2_5CM_CFG_PM_STATUS)
+#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
+
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
+#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
+#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
+#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
+#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
+#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
+#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
+#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
+#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
+#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
+#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
+#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
+#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
+#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
+#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
+#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
+#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
+#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
+
+#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
+#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
+	(ver == DLB2_HW_V2 ? \
+	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
+	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
+
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
+#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
+
+#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB2_VF_VF2PF_MAILBOX(x) \
 	(0x1000 + (x) * 0x4)
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
-union dlb2_func_vf_vf2pf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
-union dlb2_func_vf_vf2pf_mailbox_isr {
-	struct {
-		u32 isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
+#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
+
+#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
+#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
+
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
+#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB2_VF_PF2VF_MAILBOX(x) \
 	(0x2000 + (x) * 0x4)
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox {
-	struct {
-		u32 msg : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
-union dlb2_func_vf_pf2vf_mailbox_isr {
-	struct {
-		u32 pf_isr : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
-union dlb2_func_vf_vf_msi_isr_pend {
-	struct {
-		u32 isr_pend : 32;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
-union dlb2_func_vf_vf_reset_in_progress {
-	struct {
-		u32 reset_in_progress : 1;
-		u32 rsvd0 : 31;
-	} field;
-	u32 val;
-};
-
-#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
-#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
-union dlb2_func_vf_vf_msi_isr {
-	struct {
-		u32 vf_msi_isr : 32;
-	} field;
-	u32 val;
-};
+#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
+#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
+
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
+#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
+#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
+
+#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
+
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
+#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
+
+#define DLB2_VF_VF_MSI_ISR 0x4000
+#define DLB2_VF_VF_MSI_ISR_RST 0x0
+
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
+#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
+
+#define DLB2_SYS_TOTAL_CREDITS 0x10000100
+#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
+
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
+#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
+	(0x10000fa4 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
+	(0x10000fa0 + (x) * 0x1000)
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
+	(0x10000fe4 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
+#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
+	(0x10000fe0 + (x) * 0x1000)
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
+
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
+#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
+
+#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
+	(0x11c00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
+#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
+	(0x11d00000 + (x) * 0x1000)
+#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
+
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
+#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
+#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
+#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
+
+#define DLB2_CHP_CFG_VAS_CRD(x) \
+	(0x40000000 + (x) * 0x1000)
+#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
+
+#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
+#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
+#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
+	(0x90b00000 + (x) * 0x1000)
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
+
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
+#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
 
 #endif /* __DLB2_REGS_H */
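
[Editor's note, not part of the patch] The new header replaces the old union
bitfield accessors with paired <REG>_<FIELD> mask and <REG>_<FIELD>_LOC shift
macros, plus per-version address macros selected by a DLB2_HW_V2 ternary. A
minimal sketch of how such mask/LOC pairs are typically consumed is shown
below; the EXAMPLE_BITS_GET/EXAMPLE_BITS_SET helpers are illustrative
assumptions for this note only, not the helpers used by the PMD itself.

/*
 * Minimal, self-contained sketch: read and write one field using the
 * mask/_LOC macros defined above (values copied from the new header).
 * Helper names are hypothetical; only the register macros are real.
 */
#include <stdint.h>
#include <stdio.h>

#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER		0x00000001
#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0

/* Extract a field: mask it out, then shift down by its _LOC offset. */
#define EXAMPLE_BITS_GET(reg, mask, loc)	(((reg) & (mask)) >> (loc))
/* Insert a field: clear the old bits, then OR in the shifted new value. */
#define EXAMPLE_BITS_SET(reg, val, mask, loc) \
	(((reg) & ~(mask)) | (((uint32_t)(val) << (loc)) & (mask)))

int main(void)
{
	uint32_t reg = 0;

	/* Old style would have been: r.field.transfer = 1; write r.val. */
	reg = EXAMPLE_BITS_SET(reg, 1,
			       DLB2_LSP_CFG_SHDW_CTRL_TRANSFER,
			       DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC);

	printf("transfer = %u\n",
	       EXAMPLE_BITS_GET(reg,
				DLB2_LSP_CFG_SHDW_CTRL_TRANSFER,
				DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC));
	return 0;
}

The same pattern applies to the version-split registers: the caller first
resolves the address with the (ver) macro (e.g. DLB2_LSP_CFG_SHDW_CTRL(ver))
and then uses the shared mask/_LOC pair, since the field layouts that differ
between v2.0 and v2.5 carry explicit _V2/_V2_5 suffixes.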
diff --git a/drivers/event/dlb2/pf/base/dlb2_regs_new.h b/drivers/event/dlb2/pf/base/dlb2_regs_new.h
deleted file mode 100644
index 26c3e7f4a..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_regs_new.h
+++ /dev/null
@@ -1,4304 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#ifndef __DLB2_REGS_NEW_H
-#define __DLB2_REGS_NEW_H
-
-#include "dlb2_osdep_types.h"
-
-#define DLB2_PF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_PF_VF2PF_MAILBOX(vf_id, x) \
-	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR(vf_id) \
-	(0x1f00 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF2PF_FLR_ISR(vf_id) \
-	(0x1f04 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_FLR_ISR_RST 0x0
-
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0		0xFFFF0000
-#define DLB2_PF_VF2PF_FLR_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_VF2PF_FLR_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_VF2PF_FLR_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_VF2PF_FLR_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_VF2PF_FLR_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_VF2PF_FLR_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_VF2PF_FLR_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_VF2PF_FLR_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_VF2PF_FLR_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_VF2PF_FLR_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_VF2PF_FLR_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_VF2PF_FLR_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_VF2PF_FLR_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_VF2PF_FLR_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_VF2PF_FLR_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_VF2PF_FLR_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_VF2PF_FLR_ISR_RSVD0_LOC	16
-
-#define DLB2_PF_VF2PF_ISR_PEND(vf_id) \
-	(0x1f10 + (vf_id) * 0x10000)
-#define DLB2_PF_VF2PF_ISR_PEND_RST 0x0
-
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND	0x00000001
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0		0xFFFFFFFE
-#define DLB2_PF_VF2PF_ISR_PEND_ISR_PEND_LOC	0
-#define DLB2_PF_VF2PF_ISR_PEND_RSVD0_LOC	1
-
-#define DLB2_PF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_PF_PF2VF_MAILBOX(vf_id, x) \
-	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_PF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR(vf_id) \
-	(0x2f00 + (vf_id) * 0x10000)
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR	0x00000001
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR	0x00000002
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR	0x00000004
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR	0x00000008
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR	0x00000010
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR	0x00000020
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR	0x00000040
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR	0x00000080
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR	0x00000100
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR	0x00000200
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR	0x00000400
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR	0x00000800
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR	0x00001000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR	0x00002000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR	0x00004000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR	0x00008000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFF0000
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF0_ISR_LOC	0
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF1_ISR_LOC	1
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF2_ISR_LOC	2
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF3_ISR_LOC	3
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF4_ISR_LOC	4
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF5_ISR_LOC	5
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF6_ISR_LOC	6
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF7_ISR_LOC	7
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF8_ISR_LOC	8
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF9_ISR_LOC	9
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF10_ISR_LOC	10
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF11_ISR_LOC	11
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF12_ISR_LOC	12
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF13_ISR_LOC	13
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF14_ISR_LOC	14
-#define DLB2_PF_PF2VF_MAILBOX_ISR_VF15_ISR_LOC	15
-#define DLB2_PF_PF2VF_MAILBOX_ISR_RSVD0_LOC		16
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS(vf_id) \
-	(0x3000 + (vf_id) * 0x10000)
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RST 0xffff
-
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS	0x00000001
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS	0x00000002
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS	0x00000004
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS	0x00000008
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS	0x00000010
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS	0x00000020
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS	0x00000040
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS	0x00000080
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS	0x00000100
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS	0x00000200
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS	0x00000400
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS	0x00000800
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS	0x00001000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS	0x00002000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS	0x00004000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS	0x00008000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFF0000
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF0_RESET_IN_PROGRESS_LOC	0
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF1_RESET_IN_PROGRESS_LOC	1
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF2_RESET_IN_PROGRESS_LOC	2
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF3_RESET_IN_PROGRESS_LOC	3
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF4_RESET_IN_PROGRESS_LOC	4
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF5_RESET_IN_PROGRESS_LOC	5
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF6_RESET_IN_PROGRESS_LOC	6
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF7_RESET_IN_PROGRESS_LOC	7
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF8_RESET_IN_PROGRESS_LOC	8
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF9_RESET_IN_PROGRESS_LOC	9
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF10_RESET_IN_PROGRESS_LOC	10
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF11_RESET_IN_PROGRESS_LOC	11
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF12_RESET_IN_PROGRESS_LOC	12
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF13_RESET_IN_PROGRESS_LOC	13
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF14_RESET_IN_PROGRESS_LOC	14
-#define DLB2_PF_VF_RESET_IN_PROGRESS_VF15_RESET_IN_PROGRESS_LOC	15
-#define DLB2_PF_VF_RESET_IN_PROGRESS_RSVD0_LOC			16
-
-#define DLB2_MSIX_VECTOR_CTRL(x) \
-	(0x100000c + (x) * 0x10)
-#define DLB2_MSIX_VECTOR_CTRL_RST 0x1
-
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK	0x00000001
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_MSIX_VECTOR_CTRL_VEC_MASK_LOC	0
-#define DLB2_MSIX_VECTOR_CTRL_RSVD0_LOC	1
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
-	(0x20 + (x) * 0x4)
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
-
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS	0x00000001
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_FUNC_VF_BAR_DIS_LOC	0
-#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RSVD0_LOC			1
-
-#define DLB2_V2SYS_TOTAL_VAS 0x1000011c
-#define DLB2_V2_5SYS_TOTAL_VAS 0x10000114
-#define DLB2_SYS_TOTAL_VAS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_TOTAL_VAS : \
-	 DLB2_V2_5SYS_TOTAL_VAS)
-#define DLB2_SYS_TOTAL_VAS_RST 0x20
-
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_VAS_TOTAL_VAS_LOC	0
-
-#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
-#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
-
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_DIR_CRDS_TOTAL_DIR_CREDITS_LOC	0
-
-#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
-#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
-
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_LDB_CRDS_TOTAL_LDB_CREDITS_LOC	0
-
-#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
-#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS		0x00010000
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG	0x00FE0000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM	0x10000000
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0	0xC0000000
-#define DLB2_SYS_ALARM_PF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_PF_SYND2_MEAS_LOC		16
-#define DLB2_SYS_ALARM_PF_SYND2_DEBUG_LOC		17
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_PF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_PF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_PF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_PF_SYND2_CQ_INT_REARM_LOC	28
-#define DLB2_SYS_ALARM_PF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_PF_SYND2_RSVD0_LOC		30
-
-#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
-#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_PF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_PF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_PF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
-#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE	0x00000300
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0	0x00001C00
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB	0x00002000
-#define DLB2_SYS_ALARM_PF_SYND0_CLS		0x0000C000
-#define DLB2_SYS_ALARM_PF_SYND0_AID		0x003F0000
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_PF_SYND0_MORE		0x40000000
-#define DLB2_SYS_ALARM_PF_SYND0_VALID	0x80000000
-#define DLB2_SYS_ALARM_PF_SYND0_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_PF_SYND0_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_PF_SYND0_RSVD0_LOC	10
-#define DLB2_SYS_ALARM_PF_SYND0_IS_LDB_LOC	13
-#define DLB2_SYS_ALARM_PF_SYND0_CLS_LOC	14
-#define DLB2_SYS_ALARM_PF_SYND0_AID_LOC	16
-#define DLB2_SYS_ALARM_PF_SYND0_UNIT_LOC	22
-#define DLB2_SYS_ALARM_PF_SYND0_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_PF_SYND0_MORE_LOC	30
-#define DLB2_SYS_ALARM_PF_SYND0_VALID_LOC	31
-
-#define DLB2_SYS_VF_LDB_VPP_V(x) \
-	(0x10000f00 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_LDB_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VPP2PP(x) \
-	(0x10000f04 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_LDB_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_LDB_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_DIR_VPP_V(x) \
-	(0x10000f08 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V	0x00000001
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VPP_V_VPP_V_LOC	0
-#define DLB2_SYS_VF_DIR_VPP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VPP2PP(x) \
-	(0x10000f0c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VPP2PP_PP	0x0000003F
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VPP2PP_PP_LOC	0
-#define DLB2_SYS_VF_DIR_VPP2PP_RSVD0_LOC	6
-
-#define DLB2_SYS_VF_LDB_VQID_V(x) \
-	(0x10000f10 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_LDB_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_LDB_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_LDB_VQID2QID(x) \
-	(0x10000f14 + (x) * 0x1000)
-#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_LDB_VQID2QID_QID		0x0000001F
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_VF_LDB_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_LDB_VQID2QID_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_QID2VQID(x) \
-	(0x10000f18 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID2VQID_RST 0x0
-
-#define DLB2_SYS_LDB_QID2VQID_VQID	0x0000001F
-#define DLB2_SYS_LDB_QID2VQID_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_QID2VQID_VQID_LOC	0
-#define DLB2_SYS_LDB_QID2VQID_RSVD0_LOC	5
-
-#define DLB2_SYS_VF_DIR_VQID_V(x) \
-	(0x10000f1c + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V	0x00000001
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_VF_DIR_VQID_V_VQID_V_LOC	0
-#define DLB2_SYS_VF_DIR_VQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_VF_DIR_VQID2QID(x) \
-	(0x10000f20 + (x) * 0x1000)
-#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
-
-#define DLB2_SYS_VF_DIR_VQID2QID_QID		0x0000003F
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_VF_DIR_VQID2QID_QID_LOC	0
-#define DLB2_SYS_VF_DIR_VQID2QID_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_VASQID_V(x) \
-	(0x10000f24 + (x) * 0x1000)
-#define DLB2_SYS_LDB_VASQID_V_RST 0x0
-
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_LDB_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_LDB_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_LDB_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_VASQID_V(x) \
-	(0x10000f28 + (x) * 0x1000)
-#define DLB2_SYS_DIR_VASQID_V_RST 0x0
-
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V	0x00000001
-#define DLB2_SYS_DIR_VASQID_V_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_VASQID_V_VASQID_V_LOC	0
-#define DLB2_SYS_DIR_VASQID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_ALARM_VF_SYND2(x) \
-	(0x10000f48 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID	0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG	0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP	0x01000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL	0x02000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP	0x04000000
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID	0x08000000
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ		0x10000000
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR	0x20000000
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD	0xC0000000
-#define DLB2_SYS_ALARM_VF_SYND2_LOCK_ID_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND2_DEBUG_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND2_CQ_POP_LOC		24
-#define DLB2_SYS_ALARM_VF_SYND2_QE_UHL_LOC		25
-#define DLB2_SYS_ALARM_VF_SYND2_QE_ORSP_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND2_QE_VALID_LOC		27
-#define DLB2_SYS_ALARM_VF_SYND2_ISZ_LOC		28
-#define DLB2_SYS_ALARM_VF_SYND2_DSI_ERROR_LOC	29
-#define DLB2_SYS_ALARM_VF_SYND2_DLBRSVD_LOC		30
-
-#define DLB2_SYS_ALARM_VF_SYND1(x) \
-	(0x10000f44 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND1_DSI		0x0000FFFF
-#define DLB2_SYS_ALARM_VF_SYND1_QID		0x00FF0000
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE	0x03000000
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI		0x1C000000
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE	0xE0000000
-#define DLB2_SYS_ALARM_VF_SYND1_DSI_LOC	0
-#define DLB2_SYS_ALARM_VF_SYND1_QID_LOC	16
-#define DLB2_SYS_ALARM_VF_SYND1_QTYPE_LOC	24
-#define DLB2_SYS_ALARM_VF_SYND1_QPRI_LOC	26
-#define DLB2_SYS_ALARM_VF_SYND1_MSG_TYPE_LOC	29
-
-#define DLB2_SYS_ALARM_VF_SYND0(x) \
-	(0x10000f40 + (x) * 0x1000)
-#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
-
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME		0x000000FF
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY	0x00000400
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY	0x00000800
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY	0x00001000
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB		0x00002000
-#define DLB2_SYS_ALARM_VF_SYND0_CLS			0x0000C000
-#define DLB2_SYS_ALARM_VF_SYND0_AID			0x003F0000
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT			0x03C00000
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE		0x3C000000
-#define DLB2_SYS_ALARM_VF_SYND0_MORE			0x40000000
-#define DLB2_SYS_ALARM_VF_SYND0_VALID		0x80000000
-#define DLB2_SYS_ALARM_VF_SYND0_SYNDROME_LOC		0
-#define DLB2_SYS_ALARM_VF_SYND0_RTYPE_LOC		8
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND0_PARITY_LOC	10
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND1_PARITY_LOC	11
-#define DLB2_SYS_ALARM_VF_SYND0_VF_SYND2_PARITY_LOC	12
-#define DLB2_SYS_ALARM_VF_SYND0_IS_LDB_LOC		13
-#define DLB2_SYS_ALARM_VF_SYND0_CLS_LOC		14
-#define DLB2_SYS_ALARM_VF_SYND0_AID_LOC		16
-#define DLB2_SYS_ALARM_VF_SYND0_UNIT_LOC		22
-#define DLB2_SYS_ALARM_VF_SYND0_SOURCE_LOC		26
-#define DLB2_SYS_ALARM_VF_SYND0_MORE_LOC		30
-#define DLB2_SYS_ALARM_VF_SYND0_VALID_LOC		31
-
-#define DLB2_SYS_LDB_QID_CFG_V(x) \
-	(0x10000f58 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V	0x00000001
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V	0x00000002
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0		0xFFFFFFFC
-#define DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V_LOC	0
-#define DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V_LOC	1
-#define DLB2_SYS_LDB_QID_CFG_V_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_QID_ITS(x) \
-	(0x10000f54 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_ITS_RST 0x0
-
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_LDB_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_LDB_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_QID_V(x) \
-	(0x10000f50 + (x) * 0x1000)
-#define DLB2_SYS_LDB_QID_V_RST 0x0
-
-#define DLB2_SYS_LDB_QID_V_QID_V	0x00000001
-#define DLB2_SYS_LDB_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_QID_V_QID_V_LOC	0
-#define DLB2_SYS_LDB_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_ITS(x) \
-	(0x10000f64 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_ITS_RST 0x0
-
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS	0x00000001
-#define DLB2_SYS_DIR_QID_ITS_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_ITS_QID_ITS_LOC	0
-#define DLB2_SYS_DIR_QID_ITS_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_QID_V(x) \
-	(0x10000f60 + (x) * 0x1000)
-#define DLB2_SYS_DIR_QID_V_RST 0x0
-
-#define DLB2_SYS_DIR_QID_V_QID_V	0x00000001
-#define DLB2_SYS_DIR_QID_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_QID_V_QID_V_LOC	0
-#define DLB2_SYS_DIR_QID_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
-	(0x10000fa8 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_LDB_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_LDB_CQ_PASID(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_LDB_CQ_PASID(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_LDB_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_LDB_CQ_PASID(x))
-#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_LDB_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_LDB_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_LDB_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_LDB_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_LDB_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_LDB_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_LDB_CQ_AT(x) \
-	(0x10000f9c + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AT_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_LDB_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_LDB_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_LDB_CQ_ISR(x) \
-	(0x10000f98 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
-/* CQ Interrupt Modes */
-#define DLB2_CQ_ISR_MODE_DIS  0
-#define DLB2_CQ_ISR_MODE_MSI  1
-#define DLB2_CQ_ISR_MODE_MSIX 2
-#define DLB2_CQ_ISR_MODE_ADI  3
-
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_LDB_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_LDB_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_LDB_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_LDB_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_LDB_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
-	(0x10000f94 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_LDB_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_LDB_PP_V(x) \
-	(0x10000f90 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP_V_RST 0x0
-
-#define DLB2_SYS_LDB_PP_V_PP_V	0x00000001
-#define DLB2_SYS_LDB_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_LDB_PP_V_PP_V_LOC	0
-#define DLB2_SYS_LDB_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_LDB_PP2VDEV(x) \
-	(0x10000f8c + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_LDB_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_LDB_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_LDB_PP2VAS(x) \
-	(0x10000f88 + (x) * 0x1000)
-#define DLB2_SYS_LDB_PP2VAS_RST 0x0
-
-#define DLB2_SYS_LDB_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_LDB_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_LDB_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_LDB_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
-	(0x10000f84 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
-	(0x10000f80 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_LDB_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_DIR_CQ_FMT(x) \
-	(0x10000fec + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID	0x00000001
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0		0xFFFFFFFE
-#define DLB2_SYS_DIR_CQ_FMT_KEEP_PF_PPID_LOC	0
-#define DLB2_SYS_DIR_CQ_FMT_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
-	(0x10000fe8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_DATA_CQ_AI_DATA_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1	0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR	0x000FFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0	0xFFF00000
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD1_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_CQ_AI_ADDR_LOC	2
-#define DLB2_SYS_DIR_CQ_AI_ADDR_RSVD0_LOC		20
-
-#define DLB2_V2SYS_DIR_CQ_PASID(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_V2_5SYS_DIR_CQ_PASID(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_PASID(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2SYS_DIR_CQ_PASID(x) : \
-	 DLB2_V2_5SYS_DIR_CQ_PASID(x))
-#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_PASID_PASID		0x000FFFFF
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ	0x00100000
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ	0x00200000
-#define DLB2_SYS_DIR_CQ_PASID_FMT2		0x00400000
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0		0xFF800000
-#define DLB2_SYS_DIR_CQ_PASID_PASID_LOC	0
-#define DLB2_SYS_DIR_CQ_PASID_EXE_REQ_LOC	20
-#define DLB2_SYS_DIR_CQ_PASID_PRIV_REQ_LOC	21
-#define DLB2_SYS_DIR_CQ_PASID_FMT2_LOC	22
-#define DLB2_SYS_DIR_CQ_PASID_RSVD0_LOC	23
-
-#define DLB2_SYS_DIR_CQ_AT(x) \
-	(0x10000fdc + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AT_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT	0x00000003
-#define DLB2_SYS_DIR_CQ_AT_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AT_CQ_AT_LOC	0
-#define DLB2_SYS_DIR_CQ_AT_RSVD0_LOC	2
-
-#define DLB2_SYS_DIR_CQ_ISR(x) \
-	(0x10000fd8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR	0x0000003F
-#define DLB2_SYS_DIR_CQ_ISR_VF	0x000003C0
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE	0x00000C00
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0	0xFFFFF000
-#define DLB2_SYS_DIR_CQ_ISR_VECTOR_LOC	0
-#define DLB2_SYS_DIR_CQ_ISR_VF_LOC		6
-#define DLB2_SYS_DIR_CQ_ISR_EN_CODE_LOC	10
-#define DLB2_SYS_DIR_CQ_ISR_RSVD0_LOC	12
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
-	(0x10000fd4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
-
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF		0x0000000F
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF	0x00000010
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO		0x00000020
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_VF_LOC	0
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF_LOC	4
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RO_LOC	5
-#define DLB2_SYS_DIR_CQ2VF_PF_RO_RSVD0_LOC	6
-
-#define DLB2_SYS_DIR_PP_V(x) \
-	(0x10000fd0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP_V_RST 0x0
-
-#define DLB2_SYS_DIR_PP_V_PP_V	0x00000001
-#define DLB2_SYS_DIR_PP_V_RSVD0	0xFFFFFFFE
-#define DLB2_SYS_DIR_PP_V_PP_V_LOC	0
-#define DLB2_SYS_DIR_PP_V_RSVD0_LOC	1
-
-#define DLB2_SYS_DIR_PP2VDEV(x) \
-	(0x10000fcc + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VDEV_VDEV	0x0000000F
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0	0xFFFFFFF0
-#define DLB2_SYS_DIR_PP2VDEV_VDEV_LOC	0
-#define DLB2_SYS_DIR_PP2VDEV_RSVD0_LOC	4
-
-#define DLB2_SYS_DIR_PP2VAS(x) \
-	(0x10000fc8 + (x) * 0x1000)
-#define DLB2_SYS_DIR_PP2VAS_RST 0x0
-
-#define DLB2_SYS_DIR_PP2VAS_VAS	0x0000001F
-#define DLB2_SYS_DIR_PP2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_DIR_PP2VAS_VAS_LOC		0
-#define DLB2_SYS_DIR_PP2VAS_RSVD0_LOC	5
-
-#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
-	(0x10000fc4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_ADDR_U_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
-	(0x10000fc0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0		0x0000003F
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_ADDR_L_RSVD0_LOC	0
-#define DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L_LOC	6
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1 0x10003024
-#define DLB2_SYS_PM_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0 0x10003020
-#define DLB2_SYS_PM_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR 0x1000301c
-#define DLB2_SYS_PM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_PM_SMON_TMR 0x10003018
-#define DLB2_SYS_PM_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1 0x10003014
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0 0x10003010
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE1 0x1000300c
-#define DLB2_SYS_PM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_PM_SMON_COMPARE0 0x10003008
-#define DLB2_SYS_PM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_PM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_PM_SMON_CFG1 0x10003004
-#define DLB2_SYS_PM_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_PM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_PM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_PM_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_PM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_PM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_PM_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_PM_SMON_CFG0 0x10003000
-#define DLB2_SYS_PM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL	0x00010000
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL	0x00040000
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL	0x00080000
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_PM_SMON_CFG0_VERSION		0xC0000000
-#define DLB2_SYS_PM_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_SYS_PM_SMON_CFG0_RSVD2_LOC			1
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_PM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_PM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_PM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_PM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_PM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_PM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_PM_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_SYS_PM_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_SYS_PM_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_SYS_PM_SMON_CFG0_RSVD1_LOC			23
-#define DLB2_SYS_PM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_PM_SMON_CFG0_RSVD0_LOC			29
-#define DLB2_SYS_PM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_SMON_COMP_MASK1(x) \
-	(0x18002024 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK1_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK1_COMP_MASK1_LOC	0
-
-#define DLB2_SYS_SMON_COMP_MASK0(x) \
-	(0x18002020 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMP_MASK0_RST 0xffffffff
-
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMP_MASK0_COMP_MASK0_LOC	0
-
-#define DLB2_SYS_SMON_MAX_TMR(x) \
-	(0x1800201c + (x) * 0x40)
-#define DLB2_SYS_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_SYS_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_SYS_SMON_TMR(x) \
-	(0x18002018 + (x) * 0x40)
-#define DLB2_SYS_SMON_TMR_RST 0x0
-
-#define DLB2_SYS_SMON_TMR_TIMER_VAL	0xFFFFFFFF
-#define DLB2_SYS_SMON_TMR_TIMER_VAL_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1(x) \
-	(0x18002014 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0(x) \
-	(0x18002010 + (x) * 0x40)
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_SYS_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE1(x) \
-	(0x1800200c + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE1_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_SYS_SMON_COMPARE0(x) \
-	(0x18002008 + (x) * 0x40)
-#define DLB2_SYS_SMON_COMPARE0_RST 0x0
-
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_SYS_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_SYS_SMON_CFG1(x) \
-	(0x18002004 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG1_RST 0x0
-
-#define DLB2_SYS_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_SYS_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_SYS_SMON_CFG1_RSVD	0xFFFF0000
-#define DLB2_SYS_SMON_CFG1_MODE0_LOC	0
-#define DLB2_SYS_SMON_CFG1_MODE1_LOC	8
-#define DLB2_SYS_SMON_CFG1_RSVD_LOC	16
-
-#define DLB2_SYS_SMON_CFG0(x) \
-	(0x18002000 + (x) * 0x40)
-#define DLB2_SYS_SMON_CFG0_RST 0x40000000
-
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_SYS_SMON_CFG0_RSVD2			0x0000000E
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_SYS_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_SYS_SMON_CFG0_RSVD1			0x00800000
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_SYS_SMON_CFG0_RSVD0			0x20000000
-#define DLB2_SYS_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_SYS_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_SYS_SMON_CFG0_RSVD2_LOC				1
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_SYS_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_SYS_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_SYS_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_SYS_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_SYS_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_SYS_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_SYS_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_SYS_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_SYS_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_SYS_SMON_CFG0_RSVD1_LOC				23
-#define DLB2_SYS_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_SYS_SMON_CFG0_RSVD0_LOC				29
-#define DLB2_SYS_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
-
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW		0x00000001
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP		0x00000002
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID		0x00000004
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID		0x00000008
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID		0x00000010
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG	0x00000020
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0			0xFFFFFFC0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_HCW_LOC		0
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PP_LOC		1
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_PASID_LOC	2
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_QID_LOC		3
-#define DLB2_SYS_INGRESS_ALARM_ENBL_DISABLED_QID_LOC		4
-#define DLB2_SYS_INGRESS_ALARM_ENBL_ILLEGAL_LDB_QID_CFG_LOC	5
-#define DLB2_SYS_INGRESS_ALARM_ENBL_RSVD0_LOC		6
-
-#define DLB2_SYS_MSIX_ACK 0x10000400
-#define DLB2_SYS_MSIX_ACK_RST 0x0
-
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK	0x00000001
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK	0x00000002
-#define DLB2_SYS_MSIX_ACK_RSVD0	0xFFFFFFFC
-#define DLB2_SYS_MSIX_ACK_MSIX_0_ACK_LOC	0
-#define DLB2_SYS_MSIX_ACK_MSIX_1_ACK_LOC	1
-#define DLB2_SYS_MSIX_ACK_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
-#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
-
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU	0x00000001
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU	0x00000002
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0			0xFFFFFFFC
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_0_PASSTHRU_LOC	0
-#define DLB2_SYS_MSIX_PASSTHRU_MSIX_1_PASSTHRU_LOC	1
-#define DLB2_SYS_MSIX_PASSTHRU_RSVD0_LOC		2
-
-#define DLB2_SYS_MSIX_MODE 0x10000408
-#define DLB2_SYS_MSIX_MODE_RST 0x0
-/* MSI-X Modes */
-#define DLB2_MSIX_MODE_PACKED     0
-#define DLB2_MSIX_MODE_COMPRESSED 1
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2	0x00000001
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2	0x00000002
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2	0x00000004
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2	0x00000008
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2	0xFFFFFFF0
-#define DLB2_SYS_MSIX_MODE_MODE_V2_LOC	0
-#define DLB2_SYS_MSIX_MODE_POLL_MODE_V2_LOC	1
-#define DLB2_SYS_MSIX_MODE_POLL_MASK_V2_LOC	2
-#define DLB2_SYS_MSIX_MODE_POLL_LOCK_V2_LOC	3
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_LOC	4
-
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5	0x00000001
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5	0x00000002
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5	0xFFFFFFFC
-#define DLB2_SYS_MSIX_MODE_MODE_V2_5_LOC		0
-#define DLB2_SYS_MSIX_MODE_IMS_POLLING_V2_5_LOC	1
-#define DLB2_SYS_MSIX_MODE_RSVD0_V2_5_LOC		2
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_0_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_1_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_2_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_3_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_4_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_5_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_6_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_7_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_8_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_9_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_10_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_11_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_12_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_13_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_14_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_15_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_16_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_17_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_18_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_19_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_20_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_21_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_22_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_23_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_24_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_25_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_26_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_27_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_28_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_29_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_30_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_CQ_31_OCC_INT_LOC	31
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT	0x00000001
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT	0x00000002
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT	0x00000004
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT	0x00000008
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT	0x00000010
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT	0x00000020
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT	0x00000040
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT	0x00000080
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT	0x00000100
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT	0x00000200
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT	0x00000400
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT	0x00000800
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT	0x00001000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT	0x00002000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT	0x00004000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT	0x00008000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT	0x00010000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT	0x00020000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT	0x00040000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT	0x00080000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT	0x00100000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT	0x00200000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT	0x00400000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT	0x00800000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT	0x01000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT	0x02000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT	0x04000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT	0x08000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT	0x10000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT	0x20000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT	0x40000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT	0x80000000
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_32_OCC_INT_LOC	0
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_33_OCC_INT_LOC	1
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_34_OCC_INT_LOC	2
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_35_OCC_INT_LOC	3
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_36_OCC_INT_LOC	4
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_37_OCC_INT_LOC	5
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_38_OCC_INT_LOC	6
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_39_OCC_INT_LOC	7
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_40_OCC_INT_LOC	8
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_41_OCC_INT_LOC	9
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_42_OCC_INT_LOC	10
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_43_OCC_INT_LOC	11
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_44_OCC_INT_LOC	12
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_45_OCC_INT_LOC	13
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_46_OCC_INT_LOC	14
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_47_OCC_INT_LOC	15
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_48_OCC_INT_LOC	16
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_49_OCC_INT_LOC	17
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_50_OCC_INT_LOC	18
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_51_OCC_INT_LOC	19
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_52_OCC_INT_LOC	20
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_53_OCC_INT_LOC	21
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_54_OCC_INT_LOC	22
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_55_OCC_INT_LOC	23
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_56_OCC_INT_LOC	24
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_57_OCC_INT_LOC	25
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_58_OCC_INT_LOC	26
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_59_OCC_INT_LOC	27
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_60_OCC_INT_LOC	28
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_61_OCC_INT_LOC	29
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_62_OCC_INT_LOC	30
-#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_CQ_63_OCC_INT_LOC	31
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ		0x0000003F
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0	0xFFFFFFC0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_CQ_LOC	0
-#define DLB2_SYS_DIR_CQ_OPT_CLR_RSVD0_LOC	6
-
-#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
-#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
-
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME	0x000000FF
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE		0x00000300
-#define DLB2_SYS_ALARM_HW_SYND_ALARM		0x00000400
-#define DLB2_SYS_ALARM_HW_SYND_CWD		0x00000800
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB	0x00001000
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0		0x00002000
-#define DLB2_SYS_ALARM_HW_SYND_CLS		0x0000C000
-#define DLB2_SYS_ALARM_HW_SYND_AID		0x003F0000
-#define DLB2_SYS_ALARM_HW_SYND_UNIT		0x03C00000
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE	0x3C000000
-#define DLB2_SYS_ALARM_HW_SYND_MORE		0x40000000
-#define DLB2_SYS_ALARM_HW_SYND_VALID		0x80000000
-#define DLB2_SYS_ALARM_HW_SYND_SYNDROME_LOC	0
-#define DLB2_SYS_ALARM_HW_SYND_RTYPE_LOC	8
-#define DLB2_SYS_ALARM_HW_SYND_ALARM_LOC	10
-#define DLB2_SYS_ALARM_HW_SYND_CWD_LOC	11
-#define DLB2_SYS_ALARM_HW_SYND_VF_PF_MB_LOC	12
-#define DLB2_SYS_ALARM_HW_SYND_RSVD0_LOC	13
-#define DLB2_SYS_ALARM_HW_SYND_CLS_LOC	14
-#define DLB2_SYS_ALARM_HW_SYND_AID_LOC	16
-#define DLB2_SYS_ALARM_HW_SYND_UNIT_LOC	22
-#define DLB2_SYS_ALARM_HW_SYND_SOURCE_LOC	26
-#define DLB2_SYS_ALARM_HW_SYND_MORE_LOC	30
-#define DLB2_SYS_ALARM_HW_SYND_VALID_LOC	31
-
-#define DLB2_AQED_QID_FID_LIM(x) \
-	(0x20000000 + (x) * 0x1000)
-#define DLB2_AQED_QID_FID_LIM_RST 0x7ff
-
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT	0x00001FFF
-#define DLB2_AQED_QID_FID_LIM_RSVD0		0xFFFFE000
-#define DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT_LOC	0
-#define DLB2_AQED_QID_FID_LIM_RSVD0_LOC		13
-
-#define DLB2_AQED_QID_HID_WIDTH(x) \
-	(0x20080000 + (x) * 0x1000)
-#define DLB2_AQED_QID_HID_WIDTH_RST 0x0
-
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE	0x00000007
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0		0xFFFFFFF8
-#define DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE_LOC	0
-#define DLB2_AQED_QID_HID_WIDTH_RSVD0_LOC		3
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
-
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0	0x000000FF
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1	0x0000FF00
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2	0x00FF0000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3	0xFF000000
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI0_LOC	0
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI1_LOC	8
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI2_LOC	16
-#define DLB2_AQED_CFG_ARB_WEIGHTS_TQPRI_ATM_0_PRI3_LOC	24
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0 0x2c00004c
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1 0x2c000050
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_AQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE0 0x2c000054
-#define DLB2_AQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_AQED_SMON_COMPARE1 0x2c000058
-#define DLB2_AQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_AQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_AQED_SMON_CFG0 0x2c00005c
-#define DLB2_AQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_AQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_AQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_AQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_AQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_AQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_AQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_AQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_AQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_AQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_AQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_AQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_AQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_AQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_AQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_AQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_AQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_AQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_AQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_AQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_AQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_AQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_AQED_SMON_CFG1 0x2c000060
-#define DLB2_AQED_SMON_CFG1_RST 0x0
-
-#define DLB2_AQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_AQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_AQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_AQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_AQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_AQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_AQED_SMON_MAX_TMR 0x2c000064
-#define DLB2_AQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_AQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_AQED_SMON_TMR 0x2c000068
-#define DLB2_AQED_SMON_TMR_RST 0x0
-
-#define DLB2_AQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_AQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_ATM_QID2CQIDIX_00(x) \
-	(0x30080000 + (x) * 0x1000)
-#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
-#define DLB2_ATM_QID2CQIDIX(x, y) \
-	(DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
-#define DLB2_ATM_QID2CQIDIX_NUM 16
-
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
-
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0	0x000000FF
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1	0x0000FF00
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2	0x00FF0000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3	0xFF000000
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN0_LOC	0
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN1_LOC	8
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN2_LOC	16
-#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_BIN3_LOC	24
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0 0x3c000050
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1 0x3c000054
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_ATM_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE0 0x3c000058
-#define DLB2_ATM_SMON_COMPARE0_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_ATM_SMON_COMPARE1 0x3c00005c
-#define DLB2_ATM_SMON_COMPARE1_RST 0x0
-
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_ATM_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_ATM_SMON_CFG0 0x3c000060
-#define DLB2_ATM_SMON_CFG0_RST 0x40000000
-
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_ATM_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_ATM_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_ATM_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_ATM_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_ATM_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_ATM_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_ATM_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_ATM_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_ATM_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_ATM_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_ATM_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_ATM_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_ATM_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_ATM_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_ATM_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_ATM_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_ATM_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_ATM_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_ATM_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_ATM_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_ATM_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_ATM_SMON_CFG1 0x3c000064
-#define DLB2_ATM_SMON_CFG1_RST 0x0
-
-#define DLB2_ATM_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_ATM_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_ATM_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_ATM_SMON_CFG1_MODE0_LOC	0
-#define DLB2_ATM_SMON_CFG1_MODE1_LOC	8
-#define DLB2_ATM_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_ATM_SMON_MAX_TMR 0x3c000068
-#define DLB2_ATM_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_ATM_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_ATM_SMON_TMR 0x3c00006c
-#define DLB2_ATM_SMON_TMR_RST 0x0
-
-#define DLB2_ATM_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_ATM_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT	0x00003FFF
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0	0xFFFFC000
-#define DLB2_CHP_CFG_DIR_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_DIR_VAS_CRD_RSVD0_LOC	14
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_LDB_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_LDB_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_V2CHP_ORD_QID_SN(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN(x) \
-	(0x40080000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN(x))
-#define DLB2_CHP_ORD_QID_SN_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_SN	0x000003FF
-#define DLB2_CHP_ORD_QID_SN_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_SN_LOC		0
-#define DLB2_CHP_ORD_QID_SN_RSVD0_LOC	10
-
-#define DLB2_V2CHP_ORD_QID_SN_MAP(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_ORD_QID_SN_MAP(x) \
-	(0x40100000 + (x) * 0x1000)
-#define DLB2_CHP_ORD_QID_SN_MAP(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_ORD_QID_SN_MAP(x) : \
-	 DLB2_V2_5CHP_ORD_QID_SN_MAP(x))
-#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
-
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE		0x00000007
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT		0x00000078
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0	0x00000080
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP		0x00000100
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1	0x00000200
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0	0xFFFFFC00
-#define DLB2_CHP_ORD_QID_SN_MAP_MODE_LOC	0
-#define DLB2_CHP_ORD_QID_SN_MAP_SLOT_LOC	3
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ0_LOC	7
-#define DLB2_CHP_ORD_QID_SN_MAP_GRP_LOC	8
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVZ1_LOC	9
-#define DLB2_CHP_ORD_QID_SN_MAP_RSVD0_LOC	10
-
-#define DLB2_V2CHP_SN_CHK_ENBL(x) \
-	(0x40200000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_SN_CHK_ENBL(x) \
-	(0x40180000 + (x) * 0x1000)
-#define DLB2_CHP_SN_CHK_ENBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_SN_CHK_ENBL(x) : \
-	 DLB2_V2_5CHP_SN_CHK_ENBL(x))
-#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
-
-#define DLB2_CHP_SN_CHK_ENBL_EN	0x00000001
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0	0xFFFFFFFE
-#define DLB2_CHP_SN_CHK_ENBL_EN_LOC		0
-#define DLB2_CHP_SN_CHK_ENBL_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_DEPTH(x) \
-	(0x40280000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_DEPTH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_DEPTH(x))
-#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH	0x00001FFF
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_CHP_DIR_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_DIR_CQ_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40300000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x00001FFF
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ_INT_ENB(x) \
-	(0x40380000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_INT_ENB(x) \
-	(0x40400000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_INT_ENB(x))
-#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_DIR_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_DIR_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40480000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TMR_THRSH(x))
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_DIR_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40500000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_DIR_CQ_WD_ENB(x) \
-	(0x40580000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WD_ENB(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WD_ENB(x))
-#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_DIR_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_DIR_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_DIR_CQ_WPTR(x) \
-	(0x40600000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ_WPTR(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_DIR_CQ_WPTR(x))
-#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER	0x00001FFF
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0		0xFFFFE000
-#define DLB2_CHP_DIR_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_DIR_CQ_WPTR_RSVD0_LOC		13
-
-#define DLB2_V2CHP_DIR_CQ2VAS(x) \
-	(0x40680000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_DIR_CQ2VAS(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_CHP_DIR_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_DIR_CQ2VAS(x))
-#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_DIR_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_DIR_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_V2CHP_HIST_LIST_BASE(x) \
-	(0x40700000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_BASE(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_BASE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_BASE(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_BASE(x))
-#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_BASE_BASE		0x00001FFF
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_BASE_BASE_LOC	0
-#define DLB2_CHP_HIST_LIST_BASE_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_LIM(x) \
-	(0x40780000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_LIM(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_LIM(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_LIM(x))
-#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT	0x00001FFF
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0	0xFFFFE000
-#define DLB2_CHP_HIST_LIST_LIM_LIMIT_LOC	0
-#define DLB2_CHP_HIST_LIST_LIM_RSVD0_LOC	13
-
-#define DLB2_V2CHP_HIST_LIST_POP_PTR(x) \
-	(0x40800000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_POP_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_POP_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_POP_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_POP_PTR(x))
-#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_POP_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_POP_PTR_RSVD0_LOC		14
-
-#define DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40880000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_CHP_HIST_LIST_PUSH_PTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_HIST_LIST_PUSH_PTR(x) : \
-	 DLB2_V2_5CHP_HIST_LIST_PUSH_PTR(x))
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
-
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR		0x00001FFF
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION	0x00002000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0		0xFFFFC000
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR_LOC	0
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_GENERATION_LOC	13
-#define DLB2_CHP_HIST_LIST_PUSH_PTR_RSVD0_LOC	14
-
-#define DLB2_V2CHP_LDB_CQ_DEPTH(x) \
-	(0x40900000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_DEPTH(x) \
-	(0x40a80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_DEPTH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_DEPTH(x))
-#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH	0x000007FF
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0	0xFFFFF800
-#define DLB2_CHP_LDB_CQ_DEPTH_DEPTH_LOC	0
-#define DLB2_CHP_LDB_CQ_DEPTH_RSVD0_LOC	11
-
-#define DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40980000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_DEPTH_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_DEPTH_THRSH(x))
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD	0x000007FF
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_DEPTH_THRESHOLD_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ_INT_ENB(x) \
-	(0x40a00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_INT_ENB(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_INT_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INT_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_INT_ENB(x))
-#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM	0x00000001
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH	0x00000002
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0	0xFFFFFFFC
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_TIM_LOC	0
-#define DLB2_CHP_LDB_CQ_INT_ENB_EN_DEPTH_LOC	1
-#define DLB2_CHP_LDB_CQ_INT_ENB_RSVD0_LOC	2
-
-#define DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40b00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TMR_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TMR_THRSH(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TMR_THRSH(x))
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
-
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0	0x00000001
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1	0x00003FFE
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0	0xFFFFC000
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_0_LOC	0
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_THRSH_13_1_LOC	1
-#define DLB2_CHP_LDB_CQ_TMR_THRSH_RSVD0_LOC		14
-
-#define DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40b80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_TKN_DEPTH_SEL(x))
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT	0x0000000F
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0			0xFFFFFFF0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_LOC	0
-#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RSVD0_LOC		4
-
-#define DLB2_V2CHP_LDB_CQ_WD_ENB(x) \
-	(0x40c00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WD_ENB(x) \
-	(0x40d80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WD_ENB(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WD_ENB(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WD_ENB(x))
-#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE	0x00000001
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0		0xFFFFFFFE
-#define DLB2_CHP_LDB_CQ_WD_ENB_WD_ENABLE_LOC	0
-#define DLB2_CHP_LDB_CQ_WD_ENB_RSVD0_LOC	1
-
-#define DLB2_V2CHP_LDB_CQ_WPTR(x) \
-	(0x40c80000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ_WPTR(x) \
-	(0x40e00000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ_WPTR(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_WPTR(x) : \
-	 DLB2_V2_5CHP_LDB_CQ_WPTR(x))
-#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER	0x000007FF
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0		0xFFFFF800
-#define DLB2_CHP_LDB_CQ_WPTR_WRITE_POINTER_LOC	0
-#define DLB2_CHP_LDB_CQ_WPTR_RSVD0_LOC		11
-
-#define DLB2_V2CHP_LDB_CQ2VAS(x) \
-	(0x40d00000 + (x) * 0x1000)
-#define DLB2_V2_5CHP_LDB_CQ2VAS(x) \
-	(0x40e80000 + (x) * 0x1000)
-#define DLB2_CHP_LDB_CQ2VAS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ2VAS(x) : \
-	 DLB2_V2_5CHP_LDB_CQ2VAS(x))
-#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
-
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS	0x0000001F
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0	0xFFFFFFE0
-#define DLB2_CHP_LDB_CQ2VAS_CQ2VAS_LOC	0
-#define DLB2_CHP_LDB_CQ2VAS_RSVD0_LOC	5
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
-
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS		0x00000001
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS		0x00000002
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS		0x00000004
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS		0x00000008
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS		0x00000010
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS		0x00000020
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS		0x00000040
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS		0x00000080
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS		0x00000100
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS		0x00000200
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS		0x00000400
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS		0x00000800
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS		0x00001000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS		0x00002000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS		0x00004000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS		0x00008000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE	0x00010000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE	0x00020000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE	0x00040000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB		0x00080000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR		0x00100000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB	0x00200000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR	0x00400000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0			0xFF800000
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_ALARM_DIS_LOC		0
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_COR_SYND_DIS_LOC		1
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC		2
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_UNC_SYND_DIS_LOC		3
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_ALARM_DIS_LOC		4
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF0_SYND_DIS_LOC		5
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_ALARM_DIS_LOC		6
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF1_SYND_DIS_LOC		7
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_ALARM_DIS_LOC		8
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF2_SYND_DIS_LOC		9
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_ALARM_DIS_LOC		10
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF3_SYND_DIS_LOC		11
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_ALARM_DIS_LOC		12
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF4_SYND_DIS_LOC		13
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_ALARM_DIS_LOC		14
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_INT_INF5_SYND_DIS_LOC		15
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_DLB_COR_ALARM_ENABLE_LOC		16
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE_LOC	17
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE_LOC	18
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_LDB_LOC			19
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_WRITE_DIR_LOC			20
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_LDB_LOC		21
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_PAD_FIRST_WRITE_DIR_LOC		22
-#define DLB2_CHP_CFG_CHP_CSR_CTRL_RSVZ0_LOC				23
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED0 0x4400005c
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0 0x4400004c
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED0)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_DIR_CQ_INTR_ARMED1 0x44000060
-#define DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1 0x44000050
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_DIR_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_DIR_CQ_INTR_ARMED1)
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_DIR_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
-#define DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL 0x44000088
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_DIR_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_0 0x44000088
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_0 0x4400008c
-#define DLB2_CHP_CFG_DIR_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_0)
-#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WDTO_1 0x4400008c
-#define DLB2_V2_5CHP_CFG_DIR_WDTO_1 0x44000090
-#define DLB2_CHP_CFG_DIR_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WDTO_1)
-#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE0 0x44000098
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0 0x440000a4
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE0)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_DISABLE1 0x4400009c
-#define DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1 0x440000a8
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_DISABLE1)
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
-#define DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000b0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
-#define DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD 0x440000c0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_DIR_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_DIR_WD_THRESHOLD)
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED0 0x440000b0
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0 0x440000c4
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED0 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED0)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED0_ARMED_LOC	0
-
-#define DLB2_V2CHP_LDB_CQ_INTR_ARMED1 0x440000b4
-#define DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1 0x440000c8
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_LDB_CQ_INTR_ARMED1 : \
-	 DLB2_V2_5CHP_LDB_CQ_INTR_ARMED1)
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
-
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED	0xFFFFFFFF
-#define DLB2_CHP_LDB_CQ_INTR_ARMED1_ARMED_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
-#define DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL 0x440000ec
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_CQ_TIMER_CTL : \
-	 DLB2_V2_5CHP_CFG_LDB_CQ_TIMER_CTL)
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL	0x000000FF
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB			0x00000100
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0			0xFFFFFE00
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_ENB_LOC		8
-#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RSVZ0_LOC		9
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_0 0x440000dc
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_0 0x440000f0
-#define DLB2_CHP_CFG_LDB_WDTO_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_0)
-#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_0_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WDTO_1 0x440000e0
-#define DLB2_V2_5CHP_CFG_LDB_WDTO_1 0x440000f4
-#define DLB2_CHP_CFG_LDB_WDTO_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WDTO_1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WDTO_1)
-#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WDTO_1_WDTO_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE0 0x440000ec
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0 0x44000100
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE0 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE0)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE0_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_DISABLE1 0x440000f0
-#define DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1 0x44000104
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_DISABLE1 : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_DISABLE1)
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
-
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE	0xFFFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_DISABLE1_WD_DISABLE_LOC	0
-
-#define DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
-#define DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL 0x44000108
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_ENB_INTERVAL : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_ENB_INTERVAL)
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL	0x0FFFFFFF
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB			0x10000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0		0xE0000000
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_SAMPLE_INTERVAL_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_ENB_LOC		28
-#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RSVZ0_LOC		29
-
-#define DLB2_V2CHP_CFG_LDB_WD_THRESHOLD 0x44000100
-#define DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD 0x44000114
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CHP_CFG_LDB_WD_THRESHOLD : \
-	 DLB2_V2_5CHP_CFG_LDB_WD_THRESHOLD)
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
-
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD	0x000000FF
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0		0xFFFFFF00
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_WD_THRESHOLD_LOC	0
-#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RSVZ0_LOC		8
-
-#define DLB2_CHP_SMON_COMPARE0 0x4c000000
-#define DLB2_CHP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_CHP_SMON_COMPARE1 0x4c000004
-#define DLB2_CHP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_CHP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_CHP_SMON_CFG0 0x4c000008
-#define DLB2_CHP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_CHP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_CHP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_CHP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_CHP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_CHP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_CHP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_CHP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_CHP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_CHP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_CHP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_CHP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_CHP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_CHP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_CHP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_CHP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_CHP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_CHP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_CHP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_CHP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_CHP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_CHP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_CHP_SMON_CFG1 0x4c00000c
-#define DLB2_CHP_SMON_CFG1_RST 0x0
-
-#define DLB2_CHP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_CHP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_CHP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_CHP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_CHP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_CHP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0 0x4c000010
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1 0x4c000014
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_CHP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_CHP_SMON_MAX_TMR 0x4c000018
-#define DLB2_CHP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_CHP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_CHP_SMON_TMR 0x4c00001c
-#define DLB2_CHP_SMON_TMR_RST 0x0
-
-#define DLB2_CHP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_CHP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
-#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2	0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2	0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2 0x04
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2 0x08
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2 0x0010
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2 0x0020
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2    0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2    0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2	 0x0100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2	 0x0200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2	0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2	0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2    0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2    0x2000
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2				    0xFFFFC000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_LOC	    0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_LOC	    1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_LOC    7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_LOC	  10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_LOC	  11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_RSVD0_V2_LOC				  14
-
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5	     0x00000001
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5	     0x00000002
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5  4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5  8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x10
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x20
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5	0x0040
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5	0x0080
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x00000100
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5 0x00000200
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5 0x0400
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5 0x0800
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5 0x1000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5 0x2000
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5	    0x0001C000
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5		    0xFFFE0000
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_EMPTY_V2_5_LOC 0
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_CREDIT_STATUS_AFULL_V2_5_LOC 1
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 2
-#define DLB2_CHP_CTRL_DIAG_02_CHP_OUT_HCW_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 3
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC 4
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_AP_CMP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 5
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC    6
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOK_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC 7
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC	    8
-#define DLB2_CHP_CTRL_DIAG_02_CHP_ROP_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC	    9
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_EMPTY_V2_5_LOC   10
-#define DLB2_CHP_CTRL_DIAG_02_QED_TO_CQ_PIPE_CREDIT_STATUS_AFULL_V2_5_LOC   11
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_EMPTY_V2_5_LOC 12
-#define DLB2_CHP_CTRL_DIAG_02_EGRESS_LSP_TOKEN_CREDIT_STATUS_AFULL_V2_5_LOC 13
-#define DLB2_CHP_CTRL_DIAG_02_CHP_LSP_TOKEN_QB_STATUS_SIZE_V2_5_LOC	    14
-#define DLB2_CHP_CTRL_DIAG_02_FREELIST_SIZE_V2_5_LOC			    17
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RSVZ0_LOC	0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_DP_DIR_CSR_CTRL 0x54000010
-#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
-
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS	0x00000001
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS	0x00000002
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS	0x00000004
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS	0x00000008
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS	0x00000010
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS	0x00000020
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS	0x00000040
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS	0x00000080
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS	0x00000100
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS	0x00000200
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS	0x00000400
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS	0x00000800
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS	0x00001000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS	0x00002000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS	0x00004000
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS	0x00008000
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0			0xFFFF0000
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_ALARM_DIS_LOC	0
-#define DLB2_DP_DIR_CSR_CTRL_INT_COR_SYND_DIS_LOC	1
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNCR_ALARM_DIS_LOC	2
-#define DLB2_DP_DIR_CSR_CTRL_INT_UNC_SYND_DIS_LOC	3
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_ALARM_DIS_LOC	4
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF0_SYND_DIS_LOC	5
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_ALARM_DIS_LOC	6
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF1_SYND_DIS_LOC	7
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_ALARM_DIS_LOC	8
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF2_SYND_DIS_LOC	9
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_ALARM_DIS_LOC	10
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF3_SYND_DIS_LOC	11
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_ALARM_DIS_LOC	12
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF4_SYND_DIS_LOC	13
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_ALARM_DIS_LOC	14
-#define DLB2_DP_DIR_CSR_CTRL_INT_INF5_SYND_DIS_LOC	15
-#define DLB2_DP_DIR_CSR_CTRL_RSVZ0_LOC		16
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0 0x5c000058
-#define DLB2_DP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1 0x5c00005c
-#define DLB2_DP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DP_SMON_COMPARE0 0x5c000060
-#define DLB2_DP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DP_SMON_COMPARE1 0x5c000064
-#define DLB2_DP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DP_SMON_CFG0 0x5c000068
-#define DLB2_DP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DP_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_DP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_DP_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DP_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_DP_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_DP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DP_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_DP_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_DP_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DP_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_DP_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DP_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DP_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DP_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_DP_SMON_CFG1 0x5c00006c
-#define DLB2_DP_SMON_CFG1_RST 0x0
-
-#define DLB2_DP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DP_SMON_MAX_TMR 0x5c000070
-#define DLB2_DP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DP_SMON_TMR 0x5c000074
-#define DLB2_DP_SMON_TMR_RST 0x0
-
-#define DLB2_DP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0 0x6c000024
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1 0x6c000028
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_DQED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE0 0x6c00002c
-#define DLB2_DQED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_DQED_SMON_COMPARE1 0x6c000030
-#define DLB2_DQED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_DQED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_DQED_SMON_CFG0 0x6c000034
-#define DLB2_DQED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_DQED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_DQED_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_DQED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_DQED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_DQED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_DQED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_DQED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_DQED_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_DQED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_DQED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_DQED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_DQED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_DQED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_DQED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_DQED_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_DQED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_DQED_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_DQED_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_DQED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_DQED_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_DQED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_DQED_SMON_CFG1 0x6c000038
-#define DLB2_DQED_SMON_CFG1_RST 0x0
-
-#define DLB2_DQED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_DQED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_DQED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_DQED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_DQED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_DQED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_DQED_SMON_MAX_TMR 0x6c00003c
-#define DLB2_DQED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_DQED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_DQED_SMON_TMR 0x6c000040
-#define DLB2_DQED_SMON_TMR_RST 0x0
-
-#define DLB2_DQED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_DQED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0 0x7c000024
-#define DLB2_QED_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1 0x7c000028
-#define DLB2_QED_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_QED_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_QED_SMON_COMPARE0 0x7c00002c
-#define DLB2_QED_SMON_COMPARE0_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_QED_SMON_COMPARE1 0x7c000030
-#define DLB2_QED_SMON_COMPARE1_RST 0x0
-
-#define DLB2_QED_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_QED_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_QED_SMON_CFG0 0x7c000034
-#define DLB2_QED_SMON_CFG0_RST 0x40000000
-
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_QED_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_QED_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_QED_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_QED_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_QED_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_QED_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_QED_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_QED_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_QED_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_QED_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_QED_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_QED_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_QED_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_QED_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_QED_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_QED_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_QED_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_QED_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_QED_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_QED_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_QED_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_QED_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_QED_SMON_CFG1 0x7c000038
-#define DLB2_QED_SMON_CFG1_RST 0x0
-
-#define DLB2_QED_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_QED_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_QED_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_QED_SMON_CFG1_MODE0_LOC	0
-#define DLB2_QED_SMON_CFG1_MODE1_LOC	8
-#define DLB2_QED_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_QED_SMON_MAX_TMR 0x7c00003c
-#define DLB2_QED_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_QED_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_QED_SMON_TMR 0x7c000040
-#define DLB2_QED_SMON_TMR_RST 0x0
-
-#define DLB2_QED_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_QED_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x74000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x74000004
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x74000008
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x7400000c
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RSVZ0_LOC	0
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x74000010
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0	0x000000FF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1	0x0000FF00
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2	0x00FF0000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3	0xFF000000
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI0_LOC	0
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI1_LOC	8
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI2_LOC	16
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_PRI3_LOC	24
-
-#define DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
-#define DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x74000014
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 : \
-	 DLB2_V2_5NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1)
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
-
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0	0xFFFFFFFF
-#define DLB2_NALB_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RSVZ0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0 0x8c000064
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1 0x8c000068
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_NALB_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE0 0x8c00006c
-#define DLB2_NALB_SMON_COMPARE0_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_NALB_SMON_COMPARE1 0x8c000070
-#define DLB2_NALB_SMON_COMPARE1_RST 0x0
-
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_NALB_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_NALB_SMON_CFG0 0x8c000074
-#define DLB2_NALB_SMON_CFG0_RST 0x40000000
-
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_NALB_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_NALB_SMON_CFG0_SMON_MODE		0x0000F000
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_NALB_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_NALB_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_NALB_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_NALB_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_NALB_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_NALB_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_NALB_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_NALB_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_NALB_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_NALB_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_NALB_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_NALB_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_NALB_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_NALB_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_NALB_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_NALB_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_NALB_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_NALB_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_NALB_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_NALB_SMON_CFG1 0x8c000078
-#define DLB2_NALB_SMON_CFG1_RST 0x0
-
-#define DLB2_NALB_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_NALB_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_NALB_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_NALB_SMON_CFG1_MODE0_LOC	0
-#define DLB2_NALB_SMON_CFG1_MODE1_LOC	8
-#define DLB2_NALB_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_NALB_SMON_MAX_TMR 0x8c00007c
-#define DLB2_NALB_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_NALB_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_NALB_SMON_TMR 0x8c000080
-#define DLB2_NALB_SMON_TMR_RST 0x0
-
-#define DLB2_NALB_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_NALB_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2RO_GRP_0_SLT_SHFT(x) \
-	(0x96000000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_0_SLT_SHFT(x) \
-	(0x86000000 + (x) * 0x4)
-#define DLB2_RO_GRP_0_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_0_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_0_SLT_SHFT(x))
-#define DLB2_RO_GRP_0_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_0_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_0_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_1_SLT_SHFT(x) \
-	(0x96010000 + (x) * 0x4)
-#define DLB2_V2_5RO_GRP_1_SLT_SHFT(x) \
-	(0x86010000 + (x) * 0x4)
-#define DLB2_RO_GRP_1_SLT_SHFT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_1_SLT_SHFT(x) : \
-	 DLB2_V2_5RO_GRP_1_SLT_SHFT(x))
-#define DLB2_RO_GRP_1_SLT_SHFT_RST 0x0
-
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE	0x000003FF
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0		0xFFFFFC00
-#define DLB2_RO_GRP_1_SLT_SHFT_CHANGE_LOC	0
-#define DLB2_RO_GRP_1_SLT_SHFT_RSVD0_LOC	10
-
-#define DLB2_V2RO_GRP_SN_MODE 0x94000000
-#define DLB2_V2_5RO_GRP_SN_MODE 0x84000000
-#define DLB2_RO_GRP_SN_MODE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_GRP_SN_MODE : \
-	 DLB2_V2_5RO_GRP_SN_MODE)
-#define DLB2_RO_GRP_SN_MODE_RST 0x0
-
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0	0x00000007
-#define DLB2_RO_GRP_SN_MODE_RSZV0		0x000000F8
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1	0x00000700
-#define DLB2_RO_GRP_SN_MODE_RSZV1		0xFFFFF800
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_0_LOC	0
-#define DLB2_RO_GRP_SN_MODE_RSZV0_LOC	3
-#define DLB2_RO_GRP_SN_MODE_SN_MODE_1_LOC	8
-#define DLB2_RO_GRP_SN_MODE_RSZV1_LOC	11
-
-#define DLB2_V2RO_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_V2_5RO_CFG_CTRL_GENERAL_0 0x8c000000
-#define DLB2_RO_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2RO_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5RO_CFG_CTRL_GENERAL_0)
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE	0x00000001
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN			0x00000002
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0			0xFFFFFFFC
-#define DLB2_RO_CFG_CTRL_GENERAL_0_UNIT_SINGLE_STEP_MODE_LOC	0
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RR_EN_LOC			1
-#define DLB2_RO_CFG_CTRL_GENERAL_0_RSZV0_LOC			2
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0 0x9c000030
-#define DLB2_RO_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1 0x9c000034
-#define DLB2_RO_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_RO_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_RO_SMON_COMPARE0 0x9c000038
-#define DLB2_RO_SMON_COMPARE0_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_RO_SMON_COMPARE1 0x9c00003c
-#define DLB2_RO_SMON_COMPARE1_RST 0x0
-
-#define DLB2_RO_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_RO_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_RO_SMON_CFG0 0x9c000040
-#define DLB2_RO_SMON_CFG0_RST 0x40000000
-
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_RO_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_RO_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_RO_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_RO_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_RO_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_RO_SMON_CFG0_SMON_ENABLE_LOC		0
-#define DLB2_RO_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC	1
-#define DLB2_RO_SMON_CFG0_RSVZ0_LOC			2
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_RO_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_RO_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_RO_SMON_CFG0_SMON_MODE_LOC		12
-#define DLB2_RO_SMON_CFG0_STOPCOUNTEROVFL_LOC	16
-#define DLB2_RO_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_RO_SMON_CFG0_STATCOUNTER0OVFL_LOC	18
-#define DLB2_RO_SMON_CFG0_STATCOUNTER1OVFL_LOC	19
-#define DLB2_RO_SMON_CFG0_STOPTIMEROVFL_LOC		20
-#define DLB2_RO_SMON_CFG0_INTTIMEROVFL_LOC		21
-#define DLB2_RO_SMON_CFG0_STATTIMEROVFL_LOC		22
-#define DLB2_RO_SMON_CFG0_RSVZ1_LOC			23
-#define DLB2_RO_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_RO_SMON_CFG0_RSVZ2_LOC			29
-#define DLB2_RO_SMON_CFG0_VERSION_LOC		30
-
-#define DLB2_RO_SMON_CFG1 0x9c000044
-#define DLB2_RO_SMON_CFG1_RST 0x0
-
-#define DLB2_RO_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_RO_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_RO_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_RO_SMON_CFG1_MODE0_LOC	0
-#define DLB2_RO_SMON_CFG1_MODE1_LOC	8
-#define DLB2_RO_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_RO_SMON_MAX_TMR 0x9c000048
-#define DLB2_RO_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_RO_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_RO_SMON_TMR 0x9c00004c
-#define DLB2_RO_SMON_TMR_RST 0x0
-
-#define DLB2_RO_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_RO_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2LSP_CQ2PRIOV(x) \
-	(0xa0000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2PRIOV(x) \
-	(0x90000000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2PRIOV(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2PRIOV(x) : \
-	 DLB2_V2_5LSP_CQ2PRIOV(x))
-#define DLB2_LSP_CQ2PRIOV_RST 0x0
-
-#define DLB2_LSP_CQ2PRIOV_PRIO	0x00FFFFFF
-#define DLB2_LSP_CQ2PRIOV_V		0xFF000000
-#define DLB2_LSP_CQ2PRIOV_PRIO_LOC	0
-#define DLB2_LSP_CQ2PRIOV_V_LOC	24
-
-#define DLB2_V2LSP_CQ2QID0(x) \
-	(0xa0080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID0(x) \
-	(0x90080000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID0(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID0(x) : \
-	 DLB2_V2_5LSP_CQ2QID0(x))
-#define DLB2_LSP_CQ2QID0_RST 0x0
-
-#define DLB2_LSP_CQ2QID0_QID_P0	0x0000007F
-#define DLB2_LSP_CQ2QID0_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID0_QID_P1	0x00007F00
-#define DLB2_LSP_CQ2QID0_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID0_QID_P2	0x007F0000
-#define DLB2_LSP_CQ2QID0_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID0_QID_P3	0x7F000000
-#define DLB2_LSP_CQ2QID0_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID0_QID_P0_LOC	0
-#define DLB2_LSP_CQ2QID0_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID0_QID_P1_LOC	8
-#define DLB2_LSP_CQ2QID0_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID0_QID_P2_LOC	16
-#define DLB2_LSP_CQ2QID0_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID0_QID_P3_LOC	24
-#define DLB2_LSP_CQ2QID0_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ2QID1(x) \
-	(0xa0100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ2QID1(x) \
-	(0x90100000 + (x) * 0x1000)
-#define DLB2_LSP_CQ2QID1(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ2QID1(x) : \
-	 DLB2_V2_5LSP_CQ2QID1(x))
-#define DLB2_LSP_CQ2QID1_RST 0x0
-
-#define DLB2_LSP_CQ2QID1_QID_P4	0x0000007F
-#define DLB2_LSP_CQ2QID1_RSVD3	0x00000080
-#define DLB2_LSP_CQ2QID1_QID_P5	0x00007F00
-#define DLB2_LSP_CQ2QID1_RSVD2	0x00008000
-#define DLB2_LSP_CQ2QID1_QID_P6	0x007F0000
-#define DLB2_LSP_CQ2QID1_RSVD1	0x00800000
-#define DLB2_LSP_CQ2QID1_QID_P7	0x7F000000
-#define DLB2_LSP_CQ2QID1_RSVD0	0x80000000
-#define DLB2_LSP_CQ2QID1_QID_P4_LOC	0
-#define DLB2_LSP_CQ2QID1_RSVD3_LOC	7
-#define DLB2_LSP_CQ2QID1_QID_P5_LOC	8
-#define DLB2_LSP_CQ2QID1_RSVD2_LOC	15
-#define DLB2_LSP_CQ2QID1_QID_P6_LOC	16
-#define DLB2_LSP_CQ2QID1_RSVD1_LOC	23
-#define DLB2_LSP_CQ2QID1_QID_P7_LOC	24
-#define DLB2_LSP_CQ2QID1_RSVD0_LOC	31
-
-#define DLB2_V2LSP_CQ_DIR_DSBL(x) \
-	(0xa0180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_DSBL(x) \
-	(0x90180000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_DSBL(x))
-#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_DIR_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_DIR_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_DIR_TKN_CNT(x) \
-	(0xa0200000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x) \
-	(0x90200000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_CNT(x))
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_CQ_DIR_TKN_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0xa0280000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
-	(0x90280000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x))
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2	0x00000020
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2		0xFFFFFFC0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_IGNORE_DEPTH_V2_LOC	5
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_LOC		6
-
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5 0x0000000F
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5	0x00000010
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5		0xFFFFFFE0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_DISABLE_WB_OPT_V2_5_LOC	4
-#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RSVD0_V2_5_LOC		5
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0xa0300000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x) \
-	(0x90300000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0xa0380000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x) \
-	(0x90380000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_DIR_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_DIR_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_DSBL(x) \
-	(0xa0400000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_DSBL(x) \
-	(0x90400000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_DSBL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_DSBL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_DSBL(x))
-#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
-
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED	0x00000001
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CQ_LDB_DSBL_DISABLED_LOC	0
-#define DLB2_LSP_CQ_LDB_DSBL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CQ_LDB_INFL_CNT(x) \
-	(0xa0480000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x) \
-	(0x90480000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_CNT(x))
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_INFL_LIM(x) \
-	(0xa0500000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x) \
-	(0x90500000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_INFL_LIM(x))
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_CQ_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_CQ_LDB_TKN_CNT(x) \
-	(0xa0580000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x) \
-	(0x90600000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_CNT(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_CNT(x))
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT	0x000007FF
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0	0xFFFFF800
-#define DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_CNT_RSVD0_LOC		11
-
-#define DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0xa0600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
-	(0x90680000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TKN_DEPTH_SEL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TKN_DEPTH_SEL(x))
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2	0x00000010
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2		0xFFFFFFE0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_IGNORE_DEPTH_V2_LOC		4
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_LOC			5
-
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5	0x0000000F
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5		0xFFFFFFF0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2_5_LOC	0
-#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RSVD0_V2_5_LOC			4
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0xa0680000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x) \
-	(0x90700000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTL(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTL(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0xa0700000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x) \
-	(0x90780000 + (x) * 0x1000)
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CQ_LDB_TOT_SCH_CNTH(x) : \
-	 DLB2_V2_5LSP_CQ_LDB_TOT_SCH_CNTH(x))
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
-
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) \
-	(0xa0780000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x) \
-	(0x90800000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_MAX_DEPTH(x))
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH	0x00001FFF
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_DIR_MAX_DEPTH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0xa0800000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x) \
-	(0x90880000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0xa0880000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x) \
-	(0x90900000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0xa0900000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x) \
-	(0x90980000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_DIR_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT	0x00001FFF
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0xa0980000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x) \
-	(0x90a00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_DIR_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_DIR_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH	0x00001FFF
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0	0xFFFFE000
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RSVD0_LOC	13
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0xa0a00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x) \
-	(0x90b80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_CNT(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0xa0a80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x) \
-	(0x90c00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_AQED_ACTIVE_LIM(x) : \
-	 DLB2_V2_5LSP_QID_AQED_ACTIVE_LIM(x))
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
-
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0xa0b00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x) \
-	(0x90c80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0xa0b80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x) \
-	(0x90d00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0xa0c80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x) \
-	(0x90e00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_ENQUEUE_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_ENQUEUE_CNT(x))
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT	0x00003FFF
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_LDB_INFL_CNT(x) \
-	(0xa0d00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_CNT(x) \
-	(0x90e80000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_CNT(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_CNT(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_CNT(x))
-#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_CNT_COUNT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_CNT_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID_LDB_INFL_LIM(x) \
-	(0xa0d80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_LDB_INFL_LIM(x) \
-	(0x90f00000 + (x) * 0x1000)
-#define DLB2_LSP_QID_LDB_INFL_LIM(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_LDB_INFL_LIM(x) : \
-	 DLB2_V2_5LSP_QID_LDB_INFL_LIM(x))
-#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
-
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT	0x00000FFF
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0	0xFFFFF000
-#define DLB2_LSP_QID_LDB_INFL_LIM_LIMIT_LOC	0
-#define DLB2_LSP_QID_LDB_INFL_LIM_RSVD0_LOC	12
-
-#define DLB2_V2LSP_QID2CQIDIX_00(x) \
-	(0xa0e00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX_00(x) \
-	(0x90f80000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX_00(x))
-#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID2CQIDIX2_00(x) \
-	(0xa1600000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID2CQIDIX2_00(x) \
-	(0x91780000 + (x) * 0x1000)
-#define DLB2_LSP_QID2CQIDIX2_00(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID2CQIDIX2_00(x) : \
-	 DLB2_V2_5LSP_QID2CQIDIX2_00(x))
-#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
-#define DLB2_LSP_QID2CQIDIX2(ver, x, y) \
-	(DLB2_LSP_QID2CQIDIX2_00(ver, x) + 0x80000 * (y))
-#define DLB2_LSP_QID2CQIDIX2_NUM 16
-
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0	0x000000FF
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1	0x0000FF00
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2	0x00FF0000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3	0xFF000000
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC	0
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC	8
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC	16
-#define DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC	24
-
-#define DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0xa1f00000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x) \
-	(0x92080000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_MAX_DEPTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_MAX_DEPTH(x))
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_DEPTH_LOC	0
-#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0xa1f80000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
-	(0x92100000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTL(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTL(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0xa2000000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
-	(0x92180000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_TOT_ENQ_CNTH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_TOT_ENQ_CNTH(x))
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT	0xFFFFFFFF
-#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_COUNT_LOC	0
-
-#define DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0xa2080000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x) \
-	(0x92200000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_ATM_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0xa2100000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x) \
-	(0x92280000 + (x) * 0x1000)
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_NALDB_DEPTH_THRSH(x) : \
-	 DLB2_V2_5LSP_QID_NALDB_DEPTH_THRSH(x))
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
-
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH	0x00003FFF
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0		0xFFFFC000
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH_LOC	0
-#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RSVD0_LOC	14
-
-#define DLB2_V2LSP_QID_ATM_ACTIVE(x) \
-	(0xa2180000 + (x) * 0x1000)
-#define DLB2_V2_5LSP_QID_ATM_ACTIVE(x) \
-	(0x92300000 + (x) * 0x1000)
-#define DLB2_LSP_QID_ATM_ACTIVE(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_QID_ATM_ACTIVE(x) : \
-	 DLB2_V2_5LSP_QID_ATM_ACTIVE(x))
-#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
-
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT	0x00003FFF
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0	0xFFFFC000
-#define DLB2_LSP_QID_ATM_ACTIVE_COUNT_LOC	0
-#define DLB2_LSP_QID_ATM_ACTIVE_RSVD0_LOC	14
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x94000008
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x9400000c
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x94000014
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_0 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_0)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI0_WEIGHT_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI1_WEIGHT_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI2_WEIGHT_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_PRI3_WEIGHT_LOC	24
-
-#define DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
-#define DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x94000018
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_ARB_WEIGHT_LDB_QID_1 : \
-	 DLB2_V2_5LSP_CFG_ARB_WEIGHT_LDB_QID_1)
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2	0xFFFFFFFF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RSVZ0_V2_LOC	0
-
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5	0x000000FF
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5	0x0000FF00
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5	0x00FF0000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5	0xFF000000
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI4_WEIGHT_V2_5_LOC	0
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI5_WEIGHT_V2_5_LOC	8
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI6_WEIGHT_V2_5_LOC	16
-#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_PRI7_WEIGHT_V2_5_LOC	24
-
-#define DLB2_V2LSP_LDB_SCHED_CTRL 0xa400002c
-#define DLB2_V2_5LSP_LDB_SCHED_CTRL 0x9400002c
-#define DLB2_LSP_LDB_SCHED_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCHED_CTRL : \
-	 DLB2_V2_5LSP_LDB_SCHED_CTRL)
-#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
-
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ			0x000000FF
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX		0x00000700
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE		0x00000800
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V	0x00001000
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V	0x00002000
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V	0x00004000
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V	0x00008000
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V		0x00010000
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0		0xFFFE0000
-#define DLB2_LSP_LDB_SCHED_CTRL_CQ_LOC		0
-#define DLB2_LSP_LDB_SCHED_CTRL_QIDIX_LOC		8
-#define DLB2_LSP_LDB_SCHED_CTRL_VALUE_LOC		11
-#define DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V_LOC	12
-#define DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V_LOC	13
-#define DLB2_LSP_LDB_SCHED_CTRL_SLIST_HASWORK_V_LOC	14
-#define DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V_LOC	15
-#define DLB2_LSP_LDB_SCHED_CTRL_AQED_NFULL_V_LOC	16
-#define DLB2_LSP_LDB_SCHED_CTRL_RSVZ0_LOC		17
-
-#define DLB2_V2LSP_DIR_SCH_CNT_L 0xa4000034
-#define DLB2_V2_5LSP_DIR_SCH_CNT_L 0x94000034
-#define DLB2_LSP_DIR_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_L : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_L)
-#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_DIR_SCH_CNT_H 0xa4000038
-#define DLB2_V2_5LSP_DIR_SCH_CNT_H 0x94000038
-#define DLB2_LSP_DIR_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_DIR_SCH_CNT_H : \
-	 DLB2_V2_5LSP_DIR_SCH_CNT_H)
-#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_DIR_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_L 0xa400003c
-#define DLB2_V2_5LSP_LDB_SCH_CNT_L 0x9400003c
-#define DLB2_LSP_LDB_SCH_CNT_L(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_L : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_L)
-#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_L_COUNT_LOC	0
-
-#define DLB2_V2LSP_LDB_SCH_CNT_H 0xa4000040
-#define DLB2_V2_5LSP_LDB_SCH_CNT_H 0x94000040
-#define DLB2_LSP_LDB_SCH_CNT_H(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_LDB_SCH_CNT_H : \
-	 DLB2_V2_5LSP_LDB_SCH_CNT_H)
-#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
-
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT	0xFFFFFFFF
-#define DLB2_LSP_LDB_SCH_CNT_H_COUNT_LOC	0
-
-#define DLB2_V2LSP_CFG_SHDW_CTRL 0xa4000070
-#define DLB2_V2_5LSP_CFG_SHDW_CTRL 0x94000070
-#define DLB2_LSP_CFG_SHDW_CTRL(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_CTRL : \
-	 DLB2_V2_5LSP_CFG_SHDW_CTRL)
-#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
-
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER	0x00000001
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0		0xFFFFFFFE
-#define DLB2_LSP_CFG_SHDW_CTRL_TRANSFER_LOC	0
-#define DLB2_LSP_CFG_SHDW_CTRL_RSVD0_LOC	1
-
-#define DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) \
-	(0xa4000074 + (x) * 4)
-#define DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x) \
-	(0x94000074 + (x) * 4)
-#define DLB2_LSP_CFG_SHDW_RANGE_COS(ver, x) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_SHDW_RANGE_COS(x) : \
-	 DLB2_V2_5LSP_CFG_SHDW_RANGE_COS(x))
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
-
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE		0x000001FF
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0		0x7FFFFE00
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT	0x80000000
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_BW_RANGE_LOC		0
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_RSVZ0_LOC		9
-#define DLB2_LSP_CFG_SHDW_RANGE_COS_NO_EXTRA_CREDIT_LOC	31
-
-#define DLB2_V2LSP_CFG_CTRL_GENERAL_0 0xac000000
-#define DLB2_V2_5LSP_CFG_CTRL_GENERAL_0 0x9c000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2LSP_CFG_CTRL_GENERAL_0 : \
-	 DLB2_V2_5LSP_CFG_CTRL_GENERAL_0)
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2		0x00000030
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_LOC		1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_LOC		3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_LOC			4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_LOC		14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_LOC		16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_LOC		19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_LOC		30
-
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5	0x00000001
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5	0x00000002
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5	0x00000004
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5	0x00000008
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5	0x00000010
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5		0x00000020
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5	0x00000040
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5	0x00000080
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5	0x00000100
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5	0x00000200
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5	0x00000400
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5	0x00000800
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5	0x00001000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5	0x00002000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5	0x00004000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5	0x00008000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5	0x00010000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5	0x00020000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5	0x00040000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5	0x00080000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5	0x00100000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5	0x00200000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5	0x00400000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5	0x00800000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5	0x01000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5	0x02000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5		0x04000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5	0x18000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5	0x20000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5	0xC0000000
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_ATQ_EMPTY_ARB_V2_5_LOC	0
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_TOK_UNIT_IDLE_V2_5_LOC	1
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DISAB_RLIST_PRI_V2_5_LOC		2
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_INC_CMP_UNIT_IDLE_V2_5_LOC	3
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ENAB_IF_THRESH_V2_5_LOC		4
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ0_V2_5_LOC			5
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OP_V2_5_LOC		6
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_HALF_BW_V2_5_LOC		7
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_SINGLE_OUT_V2_5_LOC		8
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIR_DISAB_MULTI_V2_5_LOC		9
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OP_V2_5_LOC		10
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_HALF_BW_V2_5_LOC		11
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_SINGLE_OUT_V2_5_LOC		12
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATQ_DISAB_MULTI_V2_5_LOC		13
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OP_V2_5_LOC	14
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_HALF_BW_V2_5_LOC		15
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_DIRRPL_SINGLE_OUT_V2_5_LOC	16
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OP_V2_5_LOC		17
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_HALF_BW_V2_5_LOC		18
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LBRPL_SINGLE_OUT_V2_5_LOC	19
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_SINGLE_OP_V2_5_LOC		20
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_HALF_BW_V2_5_LOC		21
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_DISAB_MULTI_V2_5_LOC		22
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_SCH_V2_5_LOC		23
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_ATM_SINGLE_CMP_V2_5_LOC		24
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_LDB_CE_TOG_ARB_V2_5_LOC		25
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_RSVZ1_V2_5_LOC			26
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALID_SEL_V2_5_LOC		27
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_VALUE_SEL_V2_5_LOC		29
-#define DLB2_LSP_CFG_CTRL_GENERAL_0_SMON0_COMPARE_SEL_V2_5_LOC	30
-
-#define DLB2_LSP_SMON_COMPARE0 0xac000048
-#define DLB2_LSP_SMON_COMPARE0_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE0_COMPARE0_LOC	0
-
-#define DLB2_LSP_SMON_COMPARE1 0xac00004c
-#define DLB2_LSP_SMON_COMPARE1_RST 0x0
-
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1	0xFFFFFFFF
-#define DLB2_LSP_SMON_COMPARE1_COMPARE1_LOC	0
-
-#define DLB2_LSP_SMON_CFG0 0xac000050
-#define DLB2_LSP_SMON_CFG0_RST 0x40000000
-
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE		0x00000001
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE	0x00000002
-#define DLB2_LSP_SMON_CFG0_RSVZ0			0x0000000C
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION		0x00000070
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE	0x00000080
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION		0x00000700
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE	0x00000800
-#define DLB2_LSP_SMON_CFG0_SMON_MODE			0x0000F000
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL		0x00010000
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL		0x00020000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL		0x00040000
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL		0x00080000
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL		0x00100000
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL		0x00200000
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL		0x00400000
-#define DLB2_LSP_SMON_CFG0_RSVZ1			0x00800000
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE		0x1F000000
-#define DLB2_LSP_SMON_CFG0_RSVZ2			0x20000000
-#define DLB2_LSP_SMON_CFG0_VERSION			0xC0000000
-#define DLB2_LSP_SMON_CFG0_SMON_ENABLE_LOC			0
-#define DLB2_LSP_SMON_CFG0_SMON_0TRIGGER_ENABLE_LOC		1
-#define DLB2_LSP_SMON_CFG0_RSVZ0_LOC				2
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_LOC		4
-#define DLB2_LSP_SMON_CFG0_SMON0_FUNCTION_COMPARE_LOC	7
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_LOC		8
-#define DLB2_LSP_SMON_CFG0_SMON1_FUNCTION_COMPARE_LOC	11
-#define DLB2_LSP_SMON_CFG0_SMON_MODE_LOC			12
-#define DLB2_LSP_SMON_CFG0_STOPCOUNTEROVFL_LOC		16
-#define DLB2_LSP_SMON_CFG0_INTCOUNTEROVFL_LOC		17
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER0OVFL_LOC		18
-#define DLB2_LSP_SMON_CFG0_STATCOUNTER1OVFL_LOC		19
-#define DLB2_LSP_SMON_CFG0_STOPTIMEROVFL_LOC			20
-#define DLB2_LSP_SMON_CFG0_INTTIMEROVFL_LOC			21
-#define DLB2_LSP_SMON_CFG0_STATTIMEROVFL_LOC			22
-#define DLB2_LSP_SMON_CFG0_RSVZ1_LOC				23
-#define DLB2_LSP_SMON_CFG0_TIMER_PRESCALE_LOC		24
-#define DLB2_LSP_SMON_CFG0_RSVZ2_LOC				29
-#define DLB2_LSP_SMON_CFG0_VERSION_LOC			30
-
-#define DLB2_LSP_SMON_CFG1 0xac000054
-#define DLB2_LSP_SMON_CFG1_RST 0x0
-
-#define DLB2_LSP_SMON_CFG1_MODE0	0x000000FF
-#define DLB2_LSP_SMON_CFG1_MODE1	0x0000FF00
-#define DLB2_LSP_SMON_CFG1_RSVZ0	0xFFFF0000
-#define DLB2_LSP_SMON_CFG1_MODE0_LOC	0
-#define DLB2_LSP_SMON_CFG1_MODE1_LOC	8
-#define DLB2_LSP_SMON_CFG1_RSVZ0_LOC	16
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0 0xac000058
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR0_COUNTER0_LOC	0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1 0xac00005c
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_RST 0x0
-
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1	0xFFFFFFFF
-#define DLB2_LSP_SMON_ACTIVITYCNTR1_COUNTER1_LOC	0
-
-#define DLB2_LSP_SMON_MAX_TMR 0xac000060
-#define DLB2_LSP_SMON_MAX_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE	0xFFFFFFFF
-#define DLB2_LSP_SMON_MAX_TMR_MAXVALUE_LOC	0
-
-#define DLB2_LSP_SMON_TMR 0xac000064
-#define DLB2_LSP_SMON_TMR_RST 0x0
-
-#define DLB2_LSP_SMON_TMR_TIMER	0xFFFFFFFF
-#define DLB2_LSP_SMON_TMR_TIMER_LOC	0
-
-#define DLB2_V2CM_DIAG_RESET_STS 0xb4000000
-#define DLB2_V2_5CM_DIAG_RESET_STS 0xa4000000
-#define DLB2_CM_DIAG_RESET_STS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 V2CM_DIAG_RESET_STS : \
-	 V2_5CM_DIAG_RESET_STS)
-#define DLB2_CM_DIAG_RESET_STS_RST 0x80000bff
-
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE	0x00000001
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE	0x00000002
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE	0x00000004
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE	0x00000008
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE	0x00000010
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE	0x00000020
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE	0x00000040
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE	0x00000080
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE	0x00000100
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE	0x00000200
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE	0x00000400
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE		0x0003F800
-#define DLB2_CM_DIAG_RESET_STS_RSVD0			0x7FFC0000
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE	0x80000000
-#define DLB2_CM_DIAG_RESET_STS_CHP_PF_RESET_DONE_LOC		0
-#define DLB2_CM_DIAG_RESET_STS_ROP_PF_RESET_DONE_LOC		1
-#define DLB2_CM_DIAG_RESET_STS_LSP_PF_RESET_DONE_LOC		2
-#define DLB2_CM_DIAG_RESET_STS_NALB_PF_RESET_DONE_LOC	3
-#define DLB2_CM_DIAG_RESET_STS_AP_PF_RESET_DONE_LOC		4
-#define DLB2_CM_DIAG_RESET_STS_DP_PF_RESET_DONE_LOC		5
-#define DLB2_CM_DIAG_RESET_STS_QED_PF_RESET_DONE_LOC		6
-#define DLB2_CM_DIAG_RESET_STS_DQED_PF_RESET_DONE_LOC	7
-#define DLB2_CM_DIAG_RESET_STS_AQED_PF_RESET_DONE_LOC	8
-#define DLB2_CM_DIAG_RESET_STS_SYS_PF_RESET_DONE_LOC		9
-#define DLB2_CM_DIAG_RESET_STS_PF_RESET_ACTIVE_LOC		10
-#define DLB2_CM_DIAG_RESET_STS_FLRSM_STATE_LOC		11
-#define DLB2_CM_DIAG_RESET_STS_RSVD0_LOC			18
-#define DLB2_CM_DIAG_RESET_STS_DLB_PROC_RESET_DONE_LOC	31
-
-#define DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
-#define DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS 0xa4000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_DIAGNOSTIC_IDLE_STATUS : \
-	 DLB2_V2_5CM_CFG_DIAGNOSTIC_IDLE_STATUS)
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
-
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE		0x00000001
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE		0x00000002
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE		0x00000004
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE	0x00000008
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE		0x00000010
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE		0x00000020
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE		0x00000040
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE	0x00000080
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE	0x00000100
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE		0x00000200
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE	0x00000400
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE	0x00000800
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE	0x00001000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE	0x00002000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE		0x00004000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE		0x00008000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE	0x00010000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE	0x00020000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE	0x00040000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE	0x00080000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1		0x00F00000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE	0x01000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE	0x02000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B	0x04000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE	0x08000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED 0x10000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0		 0x60000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE	 0x80000000
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_PIPEIDLE_LOC		0
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_PIPEIDLE_LOC		1
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_PIPEIDLE_LOC		2
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_PIPEIDLE_LOC		3
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_PIPEIDLE_LOC		4
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_PIPEIDLE_LOC		5
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_PIPEIDLE_LOC		6
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_PIPEIDLE_LOC		7
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_PIPEIDLE_LOC		8
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_PIPEIDLE_LOC		9
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_CHP_UNIT_IDLE_LOC		10
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_ROP_UNIT_IDLE_LOC		11
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_LSP_UNIT_IDLE_LOC		12
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_NALB_UNIT_IDLE_LOC	13
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AP_UNIT_IDLE_LOC		14
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DP_UNIT_IDLE_LOC		15
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_QED_UNIT_IDLE_LOC		16
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DQED_UNIT_IDLE_LOC	17
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_AQED_UNIT_IDLE_LOC	18
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_SYS_UNIT_IDLE_LOC		19
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD1_LOC			20
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_RING_IDLE_LOC	24
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_CFG_MSTR_IDLE_LOC	25
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_FLR_CLKREQ_B_LOC	26
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_LOC	27
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_MSTR_PROC_IDLE_MASKED_LOC	28
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_RSVD0_LOC			29
-#define DLB2_CM_CFG_DIAGNOSTIC_IDLE_STATUS_DLB_FUNC_IDLE_LOC		31
-
-#define DLB2_V2CM_CFG_PM_STATUS 0xb4000014
-#define DLB2_V2_5CM_CFG_PM_STATUS 0xa4000014
-#define DLB2_CM_CFG_PM_STATUS(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_STATUS : \
-	 DLB2_V2_5CM_CFG_PM_STATUS)
-#define DLB2_CM_CFG_PM_STATUS_RST 0x100403e
-
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT		0x00000001
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE		0x00000002
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B	0x00000004
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B	0x00000008
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B	0x00000010
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B	0x00000020
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B	0x00000040
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B		0x00000080
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0			0x00000100
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1			0x00000200
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON		0x00000400
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE	0x00000800
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2			0x00001000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3			0x00002000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK	0x00004000
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK	0x00008000
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3		0x00010000
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4			0x00FE0000
-#define DLB2_CM_CFG_PM_STATUS_PMSM			0xFF000000
-#define DLB2_CM_CFG_PM_STATUS_PROCHOT_LOC			0
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_IDLE_LOC		1
-#define DLB2_CM_CFG_PM_STATUS_PGCB_DLB_PG_RDY_ACK_B_LOC	2
-#define DLB2_CM_CFG_PM_STATUS_PMSM_PGCB_REQ_B_LOC		3
-#define DLB2_CM_CFG_PM_STATUS_PGBC_PMC_PG_REQ_B_LOC		4
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_PG_ACK_B_LOC		5
-#define DLB2_CM_CFG_PM_STATUS_PMC_PGCB_FET_EN_B_LOC		6
-#define DLB2_CM_CFG_PM_STATUS_PGCB_FET_EN_B_LOC		7
-#define DLB2_CM_CFG_PM_STATUS_RSVZ0_LOC			8
-#define DLB2_CM_CFG_PM_STATUS_RSVZ1_LOC			9
-#define DLB2_CM_CFG_PM_STATUS_FUSE_FORCE_ON_LOC		10
-#define DLB2_CM_CFG_PM_STATUS_FUSE_PROC_DISABLE_LOC		11
-#define DLB2_CM_CFG_PM_STATUS_RSVZ2_LOC			12
-#define DLB2_CM_CFG_PM_STATUS_RSVZ3_LOC			13
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D0TOD3_OK_LOC		14
-#define DLB2_CM_CFG_PM_STATUS_PM_FSM_D3TOD0_OK_LOC		15
-#define DLB2_CM_CFG_PM_STATUS_DLB_IN_D3_LOC			16
-#define DLB2_CM_CFG_PM_STATUS_RSVZ4_LOC			17
-#define DLB2_CM_CFG_PM_STATUS_PMSM_LOC			24
-
-#define DLB2_V2CM_CFG_PM_PMCSR_DISABLE 0xb4000018
-#define DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE 0xa4000018
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE(ver) \
-	(ver == DLB2_HW_V2 ? \
-	 DLB2_V2CM_CFG_PM_PMCSR_DISABLE : \
-	 DLB2_V2_5CM_CFG_PM_PMCSR_DISABLE)
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RST 0x1
-
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE	0x00000001
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0	0xFFFFFFFE
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE_LOC	0
-#define DLB2_CM_CFG_PM_PMCSR_DISABLE_RSVZ0_LOC	1
-
-#define DLB2_VF_VF2PF_MAILBOX_BYTES 256
-#define DLB2_VF_VF2PF_MAILBOX(x) \
-	(0x1000 + (x) * 0x4)
-#define DLB2_VF_VF2PF_MAILBOX_RST 0x0
-
-#define DLB2_VF_VF2PF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_VF2PF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR 0x1f00
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RST 0x0
-#define DLB2_VF_SIOV_MBOX_ISR_TRIGGER 0x8000
-
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR	0x00000001
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_VF2PF_MAILBOX_ISR_ISR_LOC	0
-#define DLB2_VF_VF2PF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_PF2VF_MAILBOX_BYTES 64
-#define DLB2_VF_PF2VF_MAILBOX(x) \
-	(0x2000 + (x) * 0x4)
-#define DLB2_VF_PF2VF_MAILBOX_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_MSG	0xFFFFFFFF
-#define DLB2_VF_PF2VF_MAILBOX_MSG_LOC	0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR 0x2f00
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RST 0x0
-
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR	0x00000001
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0	0xFFFFFFFE
-#define DLB2_VF_PF2VF_MAILBOX_ISR_PF_ISR_LOC	0
-#define DLB2_VF_PF2VF_MAILBOX_ISR_RSVD0_LOC	1
-
-#define DLB2_VF_VF_MSI_ISR_PEND 0x2f10
-#define DLB2_VF_VF_MSI_ISR_PEND_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_PEND_ISR_PEND_LOC	0
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS 0x3000
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RST 0x1
-
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS	0x00000001
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0			0xFFFFFFFE
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RESET_IN_PROGRESS_LOC	0
-#define DLB2_VF_VF_RESET_IN_PROGRESS_RSVD0_LOC		1
-
-#define DLB2_VF_VF_MSI_ISR 0x4000
-#define DLB2_VF_VF_MSI_ISR_RST 0x0
-
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR	0xFFFFFFFF
-#define DLB2_VF_VF_MSI_ISR_VF_MSI_ISR_LOC	0
-
-#define DLB2_SYS_TOTAL_CREDITS 0x10000100
-#define DLB2_SYS_TOTAL_CREDITS_RST 0x4000
-
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS	0xFFFFFFFF
-#define DLB2_SYS_TOTAL_CREDITS_TOTAL_CREDITS_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U(x) \
-	(0x10000fa4 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_LDB_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L(x) \
-	(0x10000fa0 + (x) * 0x1000)
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_LDB_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U(x) \
-	(0x10000fe4 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U	0xFFFFFFFF
-#define DLB2_SYS_DIR_CQ_AI_ADDR_U_CQ_AI_ADDR_U_LOC	0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L(x) \
-	(0x10000fe0 + (x) * 0x1000)
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RST 0x0
-
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0		0x00000003
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L	0xFFFFFFFC
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_RSVD0_LOC		0
-#define DLB2_SYS_DIR_CQ_AI_ADDR_L_CQ_AI_ADDR_L_LOC	2
-
-#define DLB2_SYS_WB_DIR_CQ_STATE(x) \
-	(0x11c00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_DIR_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT	0x00000008
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_DIR_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_DIR_CQ_STATE_DIR_OPT_LOC		3
-#define DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_DIR_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_SYS_WB_LDB_CQ_STATE(x) \
-	(0x11d00000 + (x) * 0x1000)
-#define DLB2_SYS_WB_LDB_CQ_STATE_RST 0x0
-
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V	0x00000001
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V	0x00000002
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V	0x00000004
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1	0x00000008
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR	0x00000010
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0	0xFFFFFFE0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB0_V_LOC		0
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB1_V_LOC		1
-#define DLB2_SYS_WB_LDB_CQ_STATE_WB2_V_LOC		2
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD1_LOC		3
-#define DLB2_SYS_WB_LDB_CQ_STATE_CQ_OPT_CLR_LOC	4
-#define DLB2_SYS_WB_LDB_CQ_STATE_RSVD0_LOC		5
-
-#define DLB2_CHP_CFG_VAS_CRD(x) \
-	(0x40000000 + (x) * 0x1000)
-#define DLB2_CHP_CFG_VAS_CRD_RST 0x0
-
-#define DLB2_CHP_CFG_VAS_CRD_COUNT	0x00007FFF
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0	0xFFFF8000
-#define DLB2_CHP_CFG_VAS_CRD_COUNT_LOC	0
-#define DLB2_CHP_CFG_VAS_CRD_RSVD0_LOC	15
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(x) \
-	(0x90b00000 + (x) * 0x1000)
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST 0x0
-
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT	0x00007FFF
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V	0x00008000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0	0xFFFF0000
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_LIMIT_LOC	0
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_V_LOC		15
-#define DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RSVD0_LOC	16
-
-#endif /* __DLB2_REGS_NEW_H */
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 54b0207db..3661b940c 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -8,7 +8,7 @@
 #include "dlb2_osdep.h"
 #include "dlb2_osdep_bitmap.h"
 #include "dlb2_osdep_types.h"
-#include "dlb2_regs_new.h"
+#include "dlb2_regs.h"
 #include "dlb2_resource.h"
 
 #include "../../dlb2_priv.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index 1f6ccf8e4..b6ec85b47 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 
-#include "base/dlb2_regs_new.h"
+#include "base/dlb2_regs.h"
 #include "base/dlb2_hw_types.h"
 #include "base/dlb2_resource.h"
 #include "base/dlb2_osdep.h"
-- 
2.23.0



* [dpdk-dev] [PATCH v5 24/26] event/dlb2: update xstats for v2.5
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (22 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 23/26] event/dlb2: use new combined register map McDaniel, Timothy
@ 2021-05-01 19:03     ` McDaniel, Timothy
  2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 25/26] event/dlb2: move rte config defines to runtime devargs McDaniel, Timothy
                       ` (2 subsequent siblings)
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Add DLB v2.5-specific information to xstats, such as metrics for the new
combined credit scheme.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2_xstats.c | 41 ++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)
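
For reference, the new counters are reported through the generic eventdev
xstats interface, so no DLB-specific API is needed to read them. A minimal
sketch, assuming an eventdev already configured on dev_id and that the driver
registers the new counters at the device level (possibly under a name prefix
such as "dev_" -- check the driver for the exact registered names), could dump
them alongside the existing stats:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_eventdev.h>

/* Dump all device-level xstats of an already-configured eventdev.
 * With the DLB v2.5 PMD this list includes the new pool_size and
 * tx_nospc_hw_credits counters added by this patch.
 */
static void
dump_dev_xstats(uint8_t dev_id)
{
	struct rte_event_dev_xstats_name *names;
	unsigned int *ids;
	uint64_t *values;
	int n, i;

	/* First call with size 0 only reports how many stats exist. */
	n = rte_event_dev_xstats_names_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
					   0, NULL, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	ids = calloc(n, sizeof(*ids));
	values = calloc(n, sizeof(*values));
	if (names == NULL || ids == NULL || values == NULL)
		goto out;

	/* Second call fills the names and their ids, then fetch the values. */
	rte_event_dev_xstats_names_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
				       0, names, ids, n);
	rte_event_dev_xstats_get(dev_id, RTE_EVENT_DEV_XSTATS_DEVICE,
				 0, ids, values, n);

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, values[i]);

out:
	free(names);
	free(ids);
	free(values);
}

Per-port counters such as tx_nospc_hw_credits can be read the same way by
passing RTE_EVENT_DEV_XSTATS_PORT and the port id instead of
RTE_EVENT_DEV_XSTATS_DEVICE.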

diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index b62e62060..d4c8d9903 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -9,6 +9,7 @@
 
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
+#include "pf/base/dlb2_regs.h"
 
 enum dlb2_xstats_type {
 	/* common to device and port */
@@ -21,6 +22,7 @@ enum dlb2_xstats_type {
 	zero_polls,			/**< Call dequeue burst and return 0 */
 	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
 	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
+	tx_nospc_hw_credits,		/**< Insufficient h/w credits */
 	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
 	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
 	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
@@ -29,6 +31,7 @@ enum dlb2_xstats_type {
 	inflight_events,
 	ldb_pool_size,
 	dir_pool_size,
+	pool_size,
 	/* port specific */
 	tx_new,				/**< Send an OP_NEW event */
 	tx_fwd,				/**< Send an OP_FORWARD event */
@@ -129,6 +132,9 @@ dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
 		case tx_nospc_dir_hw_credits:
 			val += port->stats.traffic.tx_nospc_dir_hw_credits;
 			break;
+		case tx_nospc_hw_credits:
+			val += port->stats.traffic.tx_nospc_hw_credits;
+			break;
 		case tx_nospc_inflight_max:
 			val += port->stats.traffic.tx_nospc_inflight_max;
 			break;
@@ -159,6 +165,7 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 	case zero_polls:
 	case tx_nospc_ldb_hw_credits:
 	case tx_nospc_dir_hw_credits:
+	case tx_nospc_hw_credits:
 	case tx_nospc_inflight_max:
 	case tx_nospc_new_event_limit:
 	case tx_nospc_inflight_credits:
@@ -171,6 +178,8 @@ get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
 		return dlb2->num_ldb_credits;
 	case dir_pool_size:
 		return dlb2->num_dir_credits;
+	case pool_size:
+		return dlb2->num_credits;
 	default: return -1;
 	}
 }
@@ -203,6 +212,9 @@ get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
 	case tx_nospc_dir_hw_credits:
 		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
 
+	case tx_nospc_hw_credits:
+		return ev_port->stats.traffic.tx_nospc_hw_credits;
+
 	case tx_nospc_inflight_max:
 		return ev_port->stats.traffic.tx_nospc_inflight_max;
 
@@ -357,6 +369,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -364,6 +377,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"inflight_events",
 		"ldb_pool_size",
 		"dir_pool_size",
+		"pool_size",
 	};
 	static const enum dlb2_xstats_type dev_types[] = {
 		rx_ok,
@@ -375,6 +389,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -382,6 +397,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		inflight_events,
 		ldb_pool_size,
 		dir_pool_size,
+		pool_size,
 	};
 	/* Note: generated device stats are not allowed to be reset. */
 	static const uint8_t dev_reset_allowed[] = {
@@ -394,6 +410,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* zero_polls */
 		0, /* tx_nospc_ldb_hw_credits */
 		0, /* tx_nospc_dir_hw_credits */
+		0, /* tx_nospc_hw_credits */
 		0, /* tx_nospc_inflight_max */
 		0, /* tx_nospc_new_event_limit */
 		0, /* tx_nospc_inflight_credits */
@@ -401,6 +418,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		0, /* inflight_events */
 		0, /* ldb_pool_size */
 		0, /* dir_pool_size */
+		0, /* pool_size */
 	};
 	static const char * const port_stats[] = {
 		"is_configured",
@@ -415,6 +433,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		"zero_polls",
 		"tx_nospc_ldb_hw_credits",
 		"tx_nospc_dir_hw_credits",
+		"tx_nospc_hw_credits",
 		"tx_nospc_inflight_max",
 		"tx_nospc_new_event_limit",
 		"tx_nospc_inflight_credits",
@@ -448,6 +467,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		zero_polls,
 		tx_nospc_ldb_hw_credits,
 		tx_nospc_dir_hw_credits,
+		tx_nospc_hw_credits,
 		tx_nospc_inflight_max,
 		tx_nospc_new_event_limit,
 		tx_nospc_inflight_credits,
@@ -481,6 +501,7 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 		1, /* zero_polls */
 		1, /* tx_nospc_ldb_hw_credits */
 		1, /* tx_nospc_dir_hw_credits */
+		1, /* tx_nospc_hw_credits */
 		1, /* tx_nospc_inflight_max */
 		1, /* tx_nospc_new_event_limit */
 		1, /* tx_nospc_inflight_credits */
@@ -935,8 +956,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_PORT:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_PORTS(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_PORTS(dlb2->version); i++) {
 				if (dlb2_xstats_reset_port(dlb2, i,
 							   ids, nb_ids))
 					return -EINVAL;
@@ -949,8 +970,8 @@ dlb2_eventdev_xstats_reset(struct rte_eventdev *dev,
 		break;
 	case RTE_EVENT_DEV_XSTATS_QUEUE:
 		if (queue_port_id == -1) {
-			for (i = 0; i < DLB2_MAX_NUM_QUEUES(dlb2->version);
-					i++) {
+			for (i = 0;
+			     i < DLB2_MAX_NUM_QUEUES(dlb2->version); i++) {
 				if (dlb2_xstats_reset_queue(dlb2, i,
 							    ids, nb_ids))
 					return -EINVAL;
@@ -1048,6 +1069,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 	fprintf(f, "\tnum_dir_credits = %u\n",
 		dlb2->hw_rsrc_query_results.num_dir_credits);
 
+	fprintf(f, "\tnum_credits = %u\n",
+		dlb2->hw_rsrc_query_results.num_credits);
+
 	/* Port level information */
 
 	for (i = 0; i < dlb2->num_ports; i++) {
@@ -1102,6 +1126,12 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\tdir_credits = %u\n",
 			p->qm_port.dir_credits);
 
+		fprintf(f, "\tcached_credits = %u\n",
+			p->qm_port.cached_credits);
+
+		fprintf(f, "\tcredits = %u\n",
+			p->qm_port.credits);
+
 		fprintf(f, "\tgenbit=%d, cq_idx=%d, cq_depth=%d\n",
 			p->qm_port.gen_bit,
 			p->qm_port.cq_idx,
@@ -1139,6 +1169,9 @@ dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
 		fprintf(f, "\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_dir_hw_credits);
 
+		fprintf(f, "\t\ttx_nospc_hw_credits %" PRIu64 "\n",
+			p->stats.traffic.tx_nospc_hw_credits);
+
 		fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n",
 			p->stats.traffic.tx_nospc_inflight_max);
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 25/26] event/dlb2: move rte config defines to runtime devargs
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (23 preceding siblings ...)
  2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 24/26] event/dlb2: update xstats for v2.5 McDaniel, Timothy
@ 2021-05-01 19:04     ` McDaniel, Timothy
  2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 26/26] doc/dlb2: update documentation for v2.5 McDaniel, Timothy
  2021-05-04  8:28     ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 Jerin Jacob
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:04 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

The new devarg names and their default values
are listed below. The defaults have not changed, and
none of these parameters are accessed in the fast path.

poll_interval=1000
sw_credit_quanta=32
default_depth_thresh=256
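
[Editor's note: illustrative only, not part of the patch. With this change the
three values above become ordinary devargs appended to the device specifier;
the PCI address and the override values in the sketch are invented, only the
parameter names and their defaults come from the patch.]

#include <rte_common.h>
#include <rte_eal.h>

/* Editor's sketch: the former rte_config.h constants are now plain devargs
 * appended to the device specifier.  "0000:6d:00.0" is a placeholder PCI
 * address; the key names match the devargs introduced by this patch and the
 * values are arbitrary overrides of the 1000/32/256 defaults.
 */
static int
eal_init_with_dlb2_devargs(void)
{
	char arg0[] = "app";
	char arg1[] = "-a";
	char arg2[] = "0000:6d:00.0,poll_interval=500,"
		      "sw_credit_quanta=64,default_depth_thresh=1024";
	char *argv[] = { arg0, arg1, arg2 };

	return rte_eal_init((int)RTE_DIM(argv), argv);
}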

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 config/rte_config.h             |   4 --
 drivers/event/dlb2/dlb2.c       | 122 ++++++++++++++++++++++++++++----
 drivers/event/dlb2/dlb2_priv.h  |  14 ++++
 drivers/event/dlb2/pf/dlb2_pf.c |   5 +-
 4 files changed, 125 insertions(+), 20 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index b13c0884b..590903c07 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -140,10 +140,6 @@
 #define RTE_LIBRTE_QEDE_FW ""
 
 /* DLB2 defines */
-#define RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL 1000
-#define RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE  0
 #undef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
-#define RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA 32
-#define RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH 256
 
 #endif /* _RTE_CONFIG_H_ */
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index cc6495b76..818b1c367 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -315,6 +315,66 @@ set_cos(const char *key __rte_unused,
 	return 0;
 }
 
+static int
+set_poll_interval(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *poll_interval = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(poll_interval, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_sw_credit_quanta(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *sw_credit_quanta = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(sw_credit_quanta, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+set_default_depth_thresh(const char *key __rte_unused,
+	const char *value,
+	void *opaque)
+{
+	int *default_depth_thresh = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(default_depth_thresh, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -667,15 +727,8 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 	}
 
 	/* Does this platform support umonitor/umwait? */
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
-		if (RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 0 &&
-		    RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 1) {
-			DLB2_LOG_ERR("invalid value (%d) for RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE, must be 0 or 1.\n",
-				     RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE);
-			return -EINVAL;
-		}
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG))
 		dlb2->umwait_allowed = true;
-	}
 
 	rsrcs->num_dir_ports = config->nb_single_link_event_port_queues;
 	rsrcs->num_ldb_ports  = config->nb_event_ports - rsrcs->num_dir_ports;
@@ -930,8 +983,9 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
 	}
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
+		ev_queue->depth_threshold =
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -1623,7 +1677,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 		  RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 	ev_port->outstanding_releases = 0;
 	ev_port->inflight_credits = 0;
-	ev_port->credit_update_quanta = RTE_LIBRTE_PMD_DLB2_SW_CREDIT_QUANTA;
+	ev_port->credit_update_quanta = dlb2->sw_credit_quanta;
 	ev_port->dlb2 = dlb2; /* reverse link */
 
 	/* Tear down pre-existing port->queue links */
@@ -1718,8 +1772,9 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
 	cfg.port_id = qm_port_id;
 
 	if (ev_queue->depth_threshold == 0) {
-		cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
-		ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH;
+		cfg.depth_threshold = dlb2->default_depth_thresh;
+		ev_queue->depth_threshold =
+			dlb2->default_depth_thresh;
 	} else
 		cfg.depth_threshold = ev_queue->depth_threshold;
 
@@ -2747,7 +2802,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 	DLB2_INC_STAT(ev_port->stats.tx_op_cnt[ev->op], 1);
 	DLB2_INC_STAT(ev_port->stats.traffic.tx_ok, 1);
 
-#ifndef RTE_LIBRTE_PMD_DLB2_QUELL_STATS
+#ifndef RTE_LIBRTE_PMD_DLB_QUELL_STATS
 	if (ev->op != RTE_EVENT_OP_RELEASE) {
 		DLB2_INC_STAT(ev_port->stats.queue[ev->queue_id].enq_ok, 1);
 		DLB2_INC_STAT(ev_port->stats.tx_sched_cnt[*sched_type], 1);
@@ -3070,7 +3125,7 @@ dlb2_dequeue_wait(struct dlb2_eventdev *dlb2,
 
 		DLB2_INC_STAT(ev_port->stats.traffic.rx_umonitor_umwait, 1);
 	} else {
-		uint64_t poll_interval = RTE_LIBRTE_PMD_DLB2_POLL_INTERVAL;
+		uint64_t poll_interval = dlb2->poll_interval;
 		uint64_t curr_ticks = rte_get_timer_cycles();
 		uint64_t init_ticks = curr_ticks;
 
@@ -4025,6 +4080,9 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	dlb2->max_num_events_override = dlb2_args->max_num_events;
 	dlb2->num_dir_credits_override = dlb2_args->num_dir_credits_override;
 	dlb2->qm_instance.cos_id = dlb2_args->cos_id;
+	dlb2->poll_interval = dlb2_args->poll_interval;
+	dlb2->sw_credit_quanta = dlb2_args->sw_credit_quanta;
+	dlb2->default_depth_thresh = dlb2_args->default_depth_thresh;
 
 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
@@ -4125,6 +4183,9 @@ dlb2_parse_params(const char *params,
 					     DEV_ID_ARG,
 					     DLB2_QID_DEPTH_THRESH_ARG,
 					     DLB2_COS_ARG,
+					     DLB2_POLL_INTERVAL_ARG,
+					     DLB2_SW_CREDIT_QUANTA_ARG,
+					     DLB2_DEPTH_THRESH_ARG,
 					     NULL };
 
 	if (params != NULL && params[0] != '\0') {
@@ -4207,6 +4268,37 @@ dlb2_parse_params(const char *params,
 				return ret;
 			}
 
+			ret = rte_kvargs_process(kvlist, DLB2_POLL_INTERVAL_ARG,
+						 set_poll_interval,
+						 &dlb2_args->poll_interval);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing poll interval parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist,
+						 DLB2_SW_CREDIT_QUANTA_ARG,
+						 set_sw_credit_quanta,
+						 &dlb2_args->sw_credit_quanta);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing sw credit quanta parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
+			ret = rte_kvargs_process(kvlist, DLB2_DEPTH_THRESH_ARG,
+					set_default_depth_thresh,
+					&dlb2_args->default_depth_thresh);
+			if (ret != 0) {
+				DLB2_LOG_ERR("%s: Error parsing set depth thresh parameter",
+					     name);
+				rte_kvargs_free(kvlist);
+				return ret;
+			}
+
 			rte_kvargs_free(kvlist);
 		}
 	}
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index f3a9fe0aa..cf120c92d 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -22,6 +22,11 @@
 
 #define EVDEV_DLB2_NAME_PMD dlb2_event
 
+/* Default values for command line devargs */
+#define DLB2_POLL_INTERVAL_DEFAULT 1000
+#define DLB2_SW_CREDIT_QUANTA_DEFAULT 32
+#define DLB2_DEPTH_THRESH_DEFAULT 256
+
 /*  command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
 #define DLB2_MAX_NUM_EVENTS "max_num_events"
@@ -30,6 +35,9 @@
 #define DLB2_DEFER_SCHED_ARG "defer_sched"
 #define DLB2_QID_DEPTH_THRESH_ARG "qid_depth_thresh"
 #define DLB2_COS_ARG "cos"
+#define DLB2_POLL_INTERVAL_ARG "poll_interval"
+#define DLB2_SW_CREDIT_QUANTA_ARG "sw_credit_quanta"
+#define DLB2_DEPTH_THRESH_ARG "default_depth_thresh"
 
 /* Begin HW related defines and structs */
 
@@ -570,6 +578,9 @@ struct dlb2_eventdev {
 	bool global_dequeue_wait; /* Not using per dequeue wait if true */
 	bool defer_sched;
 	enum dlb2_cq_poll_modes poll_mode;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 	uint8_t revision;
 	uint8_t version;
 	bool configured;
@@ -603,6 +614,9 @@ struct dlb2_devargs {
 	int defer_sched;
 	struct dlb2_qid_depth_thresholds qid_depth_thresholds;
 	enum dlb2_cos cos_id;
+	int poll_interval;
+	int sw_credit_quanta;
+	int default_depth_thresh;
 };
 
 /* End Eventdev related defines and structs */
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index f57dc1584..e9da89d65 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -615,7 +615,10 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
 		.num_dir_credits_override = -1,
 		.qid_depth_thresholds = { {0} },
-		.cos_id = DLB2_COS_DEFAULT
+		.cos_id = DLB2_COS_DEFAULT,
+		.poll_interval = DLB2_POLL_INTERVAL_DEFAULT,
+		.sw_credit_quanta = DLB2_SW_CREDIT_QUANTA_DEFAULT,
+		.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT
 	};
 	struct dlb2_eventdev *dlb2;
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [dpdk-dev] [PATCH v5 26/26] doc/dlb2: update documentation for v2.5
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (24 preceding siblings ...)
  2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 25/26] event/dlb2: move rte config defines to runtime devargs McDaniel, Timothy
@ 2021-05-01 19:04     ` McDaniel, Timothy
  2021-05-04  8:28     ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 Jerin Jacob
  26 siblings, 0 replies; 174+ messages in thread
From: McDaniel, Timothy @ 2021-05-01 19:04 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, harry.van.haaren, jerinj, thomas, Timothy McDaniel

From: Timothy McDaniel <timothy.mcdaniel@intel.com>

Update the DLB documentation for v2.5. Notable differences include
the new combined credit scheme. Also cleaned up a couple of sections,
and removed a duplicate section.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/eventdevs/dlb2.rst | 153 +++++++++++++++-------------------
 1 file changed, 66 insertions(+), 87 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 94d2c77ff..0f1f25cc5 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -1,10 +1,11 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2020 Intel Corporation.
 
-Driver for the Intel® Dynamic Load Balancer (DLB2)
+Driver for the Intel® Dynamic Load Balancer (DLB)
 ==================================================
 
-The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.
+The DPDK DLB poll mode driver supports the Intel® Dynamic Load Balancer,
+hardware versions 2.0 and 2.5.
 
 Prerequisites
 -------------
@@ -15,34 +16,34 @@ the basic DPDK environment.
 Configuration
 -------------
 
-The DLB2 PF PMD is a user-space PMD that uses VFIO to gain direct
+The DLB PF PMD is a user-space PMD that uses VFIO to gain direct
 device access. To use this operation mode, the PCIe PF device must be bound
 to a DPDK-compatible VFIO driver, such as vfio-pci.
 
 Eventdev API Notes
 ------------------
 
-The DLB2 provides the functions of a DPDK event device; specifically, it
+The DLB PMD provides the functions of a DPDK event device; specifically, it
 supports atomic, ordered, and parallel scheduling events from queues to ports.
-However, the DLB2 hardware is not a perfect match to the eventdev API. Some DLB2
+However, the DLB hardware is not a perfect match to the eventdev API. Some DLB
 features are abstracted by the PMD such as directed ports.
 
-In general the dlb PMD is designed for ease-of-use and does not require a
+In general the DLB PMD is designed for ease-of-use and does not require a
 detailed understanding of the hardware, but these details are important when
 writing high-performance code. This section describes the places where the
-eventdev API and DLB2 misalign.
+eventdev API and DLB misalign.
 
 Scheduling Domain Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There are 32 scheduling domainis the DLB2.
+DLB supports 32 scheduling domains.
 When one is configured, it allocates load-balanced and
 directed queues, ports, credits, and other hardware resources. Some
 resource allocations are user-controlled -- the number of queues, for example
 -- and others, like credit pools (one directed and one load-balanced pool per
 scheduling domain), are not.
 
-The DLB2 is a closed system eventdev, and as such the ``nb_events_limit`` device
+The DLB is a closed system eventdev, and as such the ``nb_events_limit`` device
 setup argument and the per-port ``new_event_threshold`` argument apply as
 defined in the eventdev header file. The limit is applied to all enqueues,
 regardless of whether it will consume a directed or load-balanced credit.
@@ -67,7 +68,7 @@ If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
 dictates the queue's scheduling type.
 
 The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
+queue's reorder buffer size.  DLB has 2 groups of ordered queues, where each
 group is configured to contain either 1 queue with 1024 reorder entries, 2
 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
 
@@ -75,57 +76,22 @@ When a load-balanced queue is created, the PMD will configure a new sequence
 number group on-demand if num_sequence_numbers does not match a pre-existing
 group with available reorder buffer entries. If all sequence number groups are
 in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
+that when the PMD is used with a virtual DLB device, it cannot change the
 sequence number configuration.)
 
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
-load-balanced queues can use the full 16-bit flow ID range.
-
-Load-Balanced Queues
-~~~~~~~~~~~~~~~~~~~~
-
-A load-balanced queue can support atomic and ordered scheduling, or atomic and
-unordered scheduling, but not atomic and unordered and ordered scheduling. A
-queue's scheduling types are controlled by the event queue configuration.
-
-If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
-``nb_atomic_order_sequences`` determines the supported scheduling types.
-With non-zero ``nb_atomic_order_sequences``, the queue is configured for atomic
-and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL`` scheduling is
-supported by scheduling those events as ordered events.  Note that when the
-event is dequeued, its sched_type will be ``RTE_SCHED_TYPE_ORDERED``. Else if
-``nb_atomic_order_sequences`` is zero, the queue is configured for atomic and
-unordered scheduling. In this case, ``RTE_SCHED_TYPE_ORDERED`` is unsupported.
-
-If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, schedule_type
-dictates the queue's scheduling type.
-
-The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
-queue's reorder buffer size.  DLB2 has 4 groups of ordered queues, where each
-group is configured to contain either 1 queue with 1024 reorder entries, 2
-queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
-
-When a load-balanced queue is created, the PMD will configure a new sequence
-number group on-demand if num_sequence_numbers does not match a pre-existing
-group with available reorder buffer entries. If all sequence number groups are
-in use, no new group will be created and queue configuration will fail. (Note
-that when the PMD is used with a virtual DLB2 device, it cannot change the
-sequence number configuration.)
-
-The queue's ``nb_atomic_flows`` parameter is ignored by the DLB2 PMD, because
-the DLB2 does not limit the number of flows a queue can track. In the DLB2, all
+The queue's ``nb_atomic_flows`` parameter is ignored by the DLB PMD, because
+the DLB does not limit the number of flows a queue can track. In the DLB, all
 load-balanced queues can use the full 16-bit flow ID range.
 
 Load-balanced and Directed Ports
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DLB2 ports come in two flavors: load-balanced and directed. The eventdev API
+DLB ports come in two flavors: load-balanced and directed. The eventdev API
 does not have the same concept, but it has a similar one: ports and queues that
 are singly-linked (i.e. linked to a single queue or port, respectively).
 
 The ``rte_event_dev_info_get()`` function reports the number of available
-event ports and queues (among other things). For the DLB2 PMD, max_event_ports
+event ports and queues (among other things). For the DLB PMD, max_event_ports
 and max_event_queues report the number of available load-balanced ports and
 queues, and max_single_link_event_port_queue_pairs reports the number of
 available directed ports and queues.
@@ -151,31 +117,38 @@ only be linked to a single directed queue (and vice versa), and that link
 cannot change after the eventdev is started.
 
 The eventdev API does not have a directed scheduling type. To support directed
-traffic, the dlb PMD detects when an event is being sent to a directed queue
+traffic, the DLB PMD detects when an event is being sent to a directed queue
 and overrides its scheduling type. Note that the originally selected scheduling
 type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
 will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
 port.
 
+Finally, even though all 3 event types are supported on the same QID by
+converting unordered events to ordered, this usage should be avoided where
+possible, since mixing types on the same queue consumes valuable reorder
+resources and imposes ordering on events that do not require it.
+
 Flow ID
 ~~~~~~~
 
 The flow ID field is preserved in the event when it is scheduled in the
-DLB2.
+DLB.
 
 Hardware Credits
 ~~~~~~~~~~~~~~~~
 
-DLB2 uses a hardware credit scheme to prevent software from overflowing hardware
+DLB uses a hardware credit scheme to prevent software from overflowing hardware
 event storage, with each unit of storage represented by a credit. A port spends
 a credit to enqueue an event, and hardware refills the ports with credits as the
-events are scheduled to ports. Refills come from credit pools, and each port is
-a member of a load-balanced credit pool and a directed credit pool. The
-load-balanced credits are used to enqueue to load-balanced queues, and directed
-credits are used for directed queues.
+events are scheduled to ports. Refills come from credit pools.
 
-A DLB2 eventdev contains one load-balanced and one directed credit pool. These
-pools' sizes are controlled by the nb_events_limit field in struct
+For DLB v2.5, there is a single credit pool used for both load-balanced and
+directed traffic.
+
+For DLB v2.0, each port is a member of both a load-balanced credit pool and a
+directed credit pool. The load-balanced credits are used to enqueue to
+load-balanced queues, and directed credits are used for directed queues.
+These pools' sizes are controlled by the nb_events_limit field in struct
 rte_event_dev_config. The load-balanced pool is sized to contain
 nb_events_limit credits, and the directed pool is sized to contain
 nb_events_limit/4 credits. The directed pool size can be overridden with the
@@ -183,7 +156,7 @@ num_dir_credits vdev argument, like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,num_dir_credits=<value>
+       --vdev=dlb2_event,num_dir_credits=<value>
 
 This can be used if the default allocation is too low or too high for the
 specific application needs. The PMD also supports a vdev arg that limits the
@@ -191,17 +164,17 @@ max_num_events reported by rte_event_dev_info_get():
 
     .. code-block:: console
 
-       --vdev=dlb1_event,max_num_events=<value>
+       --vdev=dlb2_event,max_num_events=<value>
 
 By default, max_num_events is reported as the total available load-balanced
-credits. If multiple DLB2-based applications are being used, it may be desirable
+credits. If multiple DLB-based applications are being used, it may be desirable
 to control how many load-balanced credits each application uses, particularly
 when application(s) are written to configure nb_events_limit equal to the
 reported max_num_events.
 
 Each port is a member of both credit pools. A port's credit allocation is
 defined by its low watermark, high watermark, and refill quanta. These three
-parameters are calculated by the dlb PMD like so:
+parameters are calculated by the DLB PMD like so:
 
 - The load-balanced high watermark is set to the port's enqueue_depth.
   The directed high watermark is set to the minimum of the enqueue_depth and
@@ -220,16 +193,16 @@ order to reach the limit.
 
 If a port attempts to enqueue and has no credits available, the enqueue
 operation will fail and the application must retry the enqueue. Credits are
-replenished asynchronously by the DLB2 hardware.
+replenished asynchronously by the DLB hardware.
 
 Software Credits
 ~~~~~~~~~~~~~~~~
 
-The DLB2 is a "closed system" event dev, and the DLB2 PMD layers a software
+The DLB is a "closed system" event dev, and the DLB PMD layers a software
 credit scheme on top of the hardware credit scheme in order to comply with
 the per-port backpressure described in the eventdev API.
 
-The DLB2's hardware scheme is local to a queue/pipeline stage: a port spends a
+The DLB's hardware scheme is local to a queue/pipeline stage: a port spends a
 credit when it enqueues to a queue, and credits are later replenished after the
 events are dequeued and released.
 
@@ -249,8 +222,8 @@ credits are used to enqueue to a load-balanced queue, and directed credits are
 used to enqueue to a directed queue.
 
 The out-of-credit situations are typically transient, and an eventdev
-application using the DLB2 ought to retry its enqueues if they fail.
-If enqueue fails, DLB2 PMD sets rte_errno as follows:
+application using the DLB ought to retry its enqueues if they fail.
+If enqueue fails, DLB PMD sets rte_errno as follows:
 
 - -ENOSPC: Credit exhaustion (either hardware or software)
 - -EINVAL: Invalid argument, such as port ID, queue ID, or sched_type.
@@ -272,21 +245,27 @@ the port's dequeue_depth).
 Priority
 ~~~~~~~~
 
-The DLB2 supports event priority and per-port queue service priority, as
-described in the eventdev header file. The DLB2 does not support 'global' event
+The DLB supports event priority and per-port queue service priority, as
+described in the eventdev header file. The DLB does not support 'global' event
 queue priority established at queue creation time.
 
-DLB2 supports 8 event and queue service priority levels. For both priority
-types, the PMD uses the upper three bits of the priority field to determine the
-DLB2 priority, discarding the 5 least significant bits. The 5 least significant
-event priority bits are not preserved when an event is enqueued.
+DLB supports 4 event and queue service priority levels. For both priority types,
+the PMD uses the upper three bits of the priority field to determine the DLB
+priority, discarding the 5 least significant bits. The least significant of the
+3 remaining priority bits is effectively ignored when binning into 4 priority
+levels. The discarded 5 least significant event priority bits are not preserved
+when an event is enqueued.
+
+Note that event priority only applies within the same event type.
+When atomic and ordered or unordered events are enqueued to the same QID,
+priority across the types is always equal and both are served round-robin.
 
 Reconfiguration
 ~~~~~~~~~~~~~~~
 
 The Eventdev API allows one to reconfigure a device, its ports, and its queues
 by first stopping the device, calling the configuration function(s), then
-restarting the device. The DLB2 does not support configuring an individual queue
+restarting the device. The DLB does not support configuring an individual queue
 or port without first reconfiguring the entire device, however, so there are
 certain reconfiguration sequences that are valid in the eventdev API but not
 supported by the PMD.
@@ -317,9 +296,9 @@ before its ports or queues can be.
 Deferred Scheduling
 ~~~~~~~~~~~~~~~~~~~
 
-The DLB2 PMD's default behavior for managing a CQ is to "pop" the CQ once per
+The DLB PMD's default behavior for managing a CQ is to "pop" the CQ once per
 dequeued event before returning from rte_event_dequeue_burst(). This frees the
-corresponding entries in the CQ, which enables the DLB2 to schedule more events
+corresponding entries in the CQ, which enables the DLB to schedule more events
 to it.
 
 To support applications seeking finer-grained scheduling control -- for example
@@ -333,12 +312,12 @@ To enable deferred scheduling, use the defer_sched vdev argument like so:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,defer_sched=on
+       --vdev=dlb2_event,defer_sched=on
 
 Atomic Inflights Allocation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
+In the last stage prior to scheduling an atomic event to a CQ, DLB holds the
 inflight event in a temporary buffer that is divided among load-balanced
 queues. If a queue's atomic buffer storage fills up, this can result in
 head-of-line-blocking. For example:
@@ -361,12 +340,12 @@ increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
 
     .. code-block:: console
 
-       --vdev=dlb1_event,atm_inflights=64
+       --vdev=dlb2_event,atm_inflights=64
 
 QID Depth Threshold
 ~~~~~~~~~~~~~~~~~~~
 
-DLB2 supports setting and tracking queue depth thresholds. Hardware uses
+DLB supports setting and tracking queue depth thresholds. Hardware uses
 the thresholds to track how full a queue is compared to its threshold.
 Four buckets are used
 
@@ -375,7 +354,7 @@ Four buckets are used
 - Greater than 75%, but less than or equal to 100% of depth threshold
 - Greater than 100% of depth thresholds
 
-Per queue threshold metrics are tracked in the DLB2 xstats, and are also
+Per queue threshold metrics are tracked in the DLB xstats, and are also
 returned in the impl_opaque field of each received event.
 
 The per qid threshold can be specified as part of the device args, and
@@ -391,12 +370,12 @@ shown below.
 Class of service
 ~~~~~~~~~~~~~~~~
 
-DLB2 supports provisioning the DLB2 bandwidth into 4 classes of service.
+DLB supports provisioning the DLB bandwidth into 4 classes of service.
 
-- Class 4 corresponds to 40% of the DLB2 hardware bandwidth
-- Class 3 corresponds to 30% of the DLB2 hardware bandwidth
-- Class 2 corresponds to 20% of the DLB2 hardware bandwidth
-- Class 1 corresponds to 10% of the DLB2 hardware bandwidth
+- Class 4 corresponds to 40% of the DLB hardware bandwidth
+- Class 3 corresponds to 30% of the DLB hardware bandwidth
+- Class 2 corresponds to 20% of the DLB hardware bandwidth
+- Class 1 corresponds to 10% of the DLB hardware bandwidth
 - Class 0 corresponds to don't care
 
 The classes are applied globally to the set of ports contained in this
-- 
2.23.0
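
[Editor's note: the Software Credits section in the diff above states that
out-of-credit failures are transient, that the application should retry its
enqueues, and that the PMD sets rte_errno to -ENOSPC on credit exhaustion.
The sketch below is one possible retry loop based on that description;
dev_id, port_id, and the back-off choice are assumptions, not part of the
patch.]

#include <rte_errno.h>
#include <rte_eventdev.h>
#include <rte_pause.h>

/* Editor's sketch of the retry-on-credit-exhaustion pattern described in the
 * Software Credits section above.  Assumes dev_id/port_id identify a started
 * DLB eventdev port and ev[0..nb) are valid events.
 */
static inline uint16_t
enqueue_with_retry(uint8_t dev_id, uint8_t port_id,
		   struct rte_event *ev, uint16_t nb)
{
	uint16_t done = 0;

	while (done < nb) {
		uint16_t n = rte_event_enqueue_burst(dev_id, port_id,
						     &ev[done], nb - done);
		done += n;
		if (n == 0) {
			/* Per the documentation above, -ENOSPC indicates
			 * transient hardware or software credit exhaustion:
			 * back off briefly and retry.  Anything else is
			 * treated as a hard error here.
			 */
			if (rte_errno != -ENOSPC)
				break;
			rte_pause();
		}
	}
	return done;
}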


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5
  2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
                       ` (25 preceding siblings ...)
  2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 26/26] doc/dlb2: update documentation for v2.5 McDaniel, Timothy
@ 2021-05-04  8:28     ` Jerin Jacob
  26 siblings, 0 replies; 174+ messages in thread
From: Jerin Jacob @ 2021-05-04  8:28 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: dpdk-dev, Erik Gabriel Carrillo, Van Haaren, Harry, Jerin Jacob,
	Thomas Monjalon

On Sun, May 2, 2021 at 12:35 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
> From: Timothy McDaniel <timothy.mcdaniel@intel.com>
>
> This patch series adds support for DLB v2.5 to
> the current DLB V2.0 PMD. The resulting PMD supports
> both hardware versions.
>
> The main differences between the DLB v2.5 and v2.0 hardware
> are:
> - Number of queues/ports
> - DLB v2.5 uses a combined credit pool, whereas DLB v2.0
>   splits credits into 2 pools, a directed credit pool and a
>   load balanced credit pool.
> - Different register maps, with different bit names and offsets
>
> In order to support both hardware versions with the same PMD,
> and avoid code duplication, the file dlb2_resource.c required a
> complete rewrite. This required some creative staging of the changes
> in order to keep the individual patches relatively small, while
> also meeting the requirement that all individual patches in the set
> compile cleanly.
>
> To accomplish this, a few temporary files are used:
>
> dlb2_hw_types_new.h
> dlb2_resources_new.h
> dlb2_resources_new.c
>
> As dlb2_resources_new.c is populated with the new combined v2.0/v2.5
> low level logic, the corresponding old code is removed from
> dlb2_resource.c, thus allowing both the original and new code to
> continue to compile and link cleanly. Once all of the code has been
> migrated to the new model, the old versions of the files are removed,
> and the new versions are renamed, effectively replacing the old original
> files.
>
> As you review the code, you can ignore the code deletions from
> dlb2_resource.c, as that file continues to shrink as the new
> corresponding logic is added to dlb2_resource_new.c.
>
> Changes since V4:
> 1) restore original PMD name (dlb2)
> 2) resore original PMD source location (drivers/event/dlb2)
> 3) restore documentation, such that it references dlb2_event,
>    instead of dlb_event


Applied. There are some updates in the git comments.
Also updated the release notes as below:

diff --git a/doc/guides/rel_notes/release_21_05.rst
b/doc/guides/rel_notes/release_21_05.rst
index 428615e4f..58f796b7e 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -273,6 +273,10 @@ New Features
   * Added support for crypto adapter forward mode in octeontx2 event and crypto
     device driver.

+* **Updated Intel DLB2 driver.**
+
+  * Added support for v2.5 device.
+







>
> Changes since V3:
> 1) Moved minor cleanup to its own patch. This included
>         a) remove FPGA references
>         b) eliminate duplicate macros/defines in hw_types
>         c) don't include dlb2_mbox.h
>         d) delete unused defines.macros (SMON, INT, ...)
> 2) Changed DLB V2.x and V2.x to simply v2.x, where v is lower case
> 3) Updated 20.11 release notes to remove reference to dlb2 doc, since
>    it is now named dlb.rst
> 4) Updated commit message/header text, as requested
>
> Changes since V2:
> 1) fix commit headers
> 2) fix commit message repeated words
> 3) remove FPGA reference
> 4) split out new v2.5 register definitions into separate patch
> 5) fixed documentation to use DLB and dlb_event exclusively,
>    instead of the old names such as dlb1_event, dlb2_event,
>    DLB2, ... Final doc updates are done in patch that performs
>    device rename from DLB2 tosimply DLB
> 6) use component event/dlb at commit which changes device name and
>    all subsequent commits
> 7) Move all DLB constants out of config/rte_config.h except QUELL_STATS,
>    which is used in the fastpath. Exposed these as devarg command line
>    parameters
> 8) Removed "TEMPORARY" comment leftover in dlb2_osdep.h
> 9) squashed 20-21 and 22-23 since they were logically the same as 19-20,
>    which was requested to be squashed
> 10) delete old dlb2.rst - dlb.rst has been updated for v2.0 and v2.1
>
> Changes since V1:
> 1) Simplified subject text for all patches
> 2) correct typos/spelling
> 3) remove FPGA references
> 4) remove stale sysconf() references
> 5) fixed patches that had compilation issues
> 6) updated release notes
> 7) renamed dlb device from dlb2_event to dlb_event
> 8) moved dlb2 directory to dlb,to match name change
> 9) fixed other cases where "dlb2" was being used externally
>
> Timothy McDaniel (26):
>   event/dlb2: minor code cleanup
>   event/dlb2: add v2.5 probe
>   event/dlb2: add v2.5 HW register definitions
>   event/dlb2: add v2.5 HW init
>   event/dlb2: add v2.5 get resources
>   event/dlb2: add v2.5 create sched domain
>   event/dlb2: add v2.5 domain reset
>   event/dlb2: add v2.5 create ldb queue
>   event/dlb2: add v2.5 create ldb port
>   event/dlb2: add v2.5 create dir port
>   event/dlb2: add v2.5 create dir queue
>   event/dlb2: add v2.5 map qid
>   event/dlb2: add v2.5 unmap queue
>   event/dlb2: add v2.5 start domain
>   event/dlb2: add v2.5 credit scheme
>   event/dlb2: add v2.5 queue depth functions
>   event/dlb2: add v2.5 finish map/unmap
>   event/dlb2: add v2.5 sparse cq mode
>   event/dlb2: add v2.5 sequence number management
>   event/dlb2: use new implementation of resource header
>   event/dlb2: use new implementation of resource file
>   event/dlb2: use new implementation of HW types header
>   event/dlb2: use new combined register map
>   event/dlb2: update xstats for v2.5
>   event/dlb2: move rte config defines to runtime devargs
>   doc/dlb2: update documentation for v2.5
>
>  config/rte_config.h                        |    4 -
>  doc/guides/eventdevs/dlb2.rst              |  153 +-
>  drivers/event/dlb2/dlb2.c                  |  550 +-
>  drivers/event/dlb2/dlb2_priv.h             |  170 +-
>  drivers/event/dlb2/dlb2_user.h             |   27 +-
>  drivers/event/dlb2/dlb2_xstats.c           |   70 +-
>  drivers/event/dlb2/pf/base/dlb2_hw_types.h |  106 +-
>  drivers/event/dlb2/pf/base/dlb2_mbox.h     |  596 --
>  drivers/event/dlb2/pf/base/dlb2_osdep.h    |    2 +
>  drivers/event/dlb2/pf/base/dlb2_regs.h     | 5955 +++++++++++++-------
>  drivers/event/dlb2/pf/base/dlb2_resource.c | 3278 ++++++-----
>  drivers/event/dlb2/pf/base/dlb2_resource.h |   28 +-
>  drivers/event/dlb2/pf/dlb2_main.c          |   37 +-
>  drivers/event/dlb2/pf/dlb2_pf.c            |   67 +-
>  14 files changed, 6445 insertions(+), 4598 deletions(-)
>  delete mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
>
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 174+ messages in thread

end of thread, other threads:[~2021-05-04  8:28 UTC | newest]

Thread overview: 174+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-16 22:18 [dpdk-dev] [PATCH 00/25] Add Support for DLB v2.5 Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 01/25] event/dlb2: add dlb v2.5 probe Timothy McDaniel
2021-03-21  9:48   ` Jerin Jacob
2021-03-24 19:31     ` McDaniel, Timothy
2021-03-26 11:01       ` Jerin Jacob
2021-03-26 14:03         ` McDaniel, Timothy
2021-03-26 14:33           ` Jerin Jacob
2021-03-29 15:00             ` McDaniel, Timothy
2021-03-29 15:51               ` Jerin Jacob
2021-03-29 15:55                 ` McDaniel, Timothy
2021-03-30 19:35   ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 01/27] event/dlb2: add v2.5 probe Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 02/27] event/dlb2: add v2.5 HW init Timothy McDaniel
2021-04-03 10:18       ` Jerin Jacob
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 03/27] event/dlb2: add v2.5 get_resources Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 04/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 05/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 06/27] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
2021-04-14 19:20       ` Jerin Jacob
2021-04-14 19:41         ` McDaniel, Timothy
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 07/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 08/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 09/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
2021-04-03 10:26       ` Jerin Jacob
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 10/27] event/dlb2: add v2.5 map qid Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 11/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 12/27] event/dlb2: add v2.5 start domain Timothy McDaniel
2021-04-14 19:23       ` Jerin Jacob
2021-04-14 19:42         ` McDaniel, Timothy
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 13/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 14/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 15/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 16/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 17/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 18/27] event/dlb2: consolidate resource header files into one file Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 19/27] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 20/27] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
2021-04-03 10:29       ` Jerin Jacob
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 21/27] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 22/27] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 23/27] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 24/27] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 25/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 26/27] doc/dlb2: update documentation " Timothy McDaniel
2021-03-30 19:35     ` [dpdk-dev] [PATCH v2 27/27] event/dlb2: Change device name to dlb_event Timothy McDaniel
2021-04-03 10:39       ` Jerin Jacob
2021-04-03  9:51     ` [dpdk-dev] [PATCH v2 00/27] Add DLB V2.5 Jerin Jacob
2021-04-13 20:14   ` [dpdk-dev] [PATCH v3 00/26] " Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 01/26] event/dlb2: add v2.5 probe Timothy McDaniel
2021-04-14 19:16       ` Jerin Jacob
2021-04-14 19:41         ` McDaniel, Timothy
2021-04-14 19:47           ` Jerin Jacob
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 02/26] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 03/26] event/dlb2: add v2.5 HW init Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 04/26] event/dlb2: add v2.5 get resources Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 05/26] event/dlb2: add v2.5 create sched domain Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 06/26] event/dlb2: add v2.5 domain reset Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 07/26] event/dlb2: add V2.5 create ldb queue Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 08/26] event/dlb2: add v2.5 create ldb port Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 09/26] event/dlb2: add v2.5 create dir port Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 10/26] event/dlb2: add v2.5 create dir queue Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 11/26] event/dlb2: add v2.5 map qid Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 12/26] event/dlb2: add v2.5 unmap queue Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 13/26] event/dlb2: add v2.5 start domain Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 14/26] event/dlb2: add v2.5 credit scheme Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 15/26] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 16/26] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 17/26] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 18/26] event/dlb2: add v2.5 sequence number management Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 19/26] event/dlb2: use new implementation of resource header Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 20/26] event/dlb2: use new implementation of resource file Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 21/26] event/dlb2: use new implementation of HW types header Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 22/26] event/dlb2: use new combined register map Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 23/26] event/dlb2: update xstats for v2.5 Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 24/26] doc/dlb2: update documentation " Timothy McDaniel
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 25/26] event/dlb: remove version from device name Timothy McDaniel
2021-04-14 19:31       ` Jerin Jacob
2021-04-14 19:42         ` McDaniel, Timothy
2021-04-14 19:44       ` Jerin Jacob
2021-04-14 20:33         ` Thomas Monjalon
2021-04-15  3:22           ` McDaniel, Timothy
2021-04-15  5:47           ` Jerin Jacob
2021-04-15  7:48             ` Thomas Monjalon
2021-04-15  7:56               ` Jerin Jacob
2021-04-13 20:14     ` [dpdk-dev] [PATCH v3 26/26] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
2021-04-14 19:11       ` Jerin Jacob
2021-04-14 19:38         ` McDaniel, Timothy
2021-04-14 19:52           ` Jerin Jacob
2021-04-15  1:48   ` [dpdk-dev] [PATCH v4 00/27] Add DLB v2.5 Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 01/27] event/dlb2: minor code cleanup Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 02/27] event/dlb2: add v2.5 probe Timothy McDaniel
2021-04-29  7:09       ` Jerin Jacob
2021-04-29 13:46         ` McDaniel, Timothy
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 03/27] event/dlb2: add v2.5 HW register definitions Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 04/27] event/dlb2: add v2.5 HW init Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 05/27] event/dlb2: add v2.5 get resources Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 06/27] event/dlb2: add v2.5 create sched domain Timothy McDaniel
2021-04-15  1:48     ` [dpdk-dev] [PATCH v4 07/27] event/dlb2: add v2.5 domain reset Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 08/27] event/dlb2: add v2.5 create ldb queue Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 09/27] event/dlb2: add v2.5 create ldb port Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 10/27] event/dlb2: add v2.5 create dir port Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 11/27] event/dlb2: add v2.5 create dir queue Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 12/27] event/dlb2: add v2.5 map qid Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 13/27] event/dlb2: add v2.5 unmap queue Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 14/27] event/dlb2: add v2.5 start domain Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 15/27] event/dlb2: add v2.5 credit scheme Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 16/27] event/dlb2: add v2.5 queue depth functions Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 17/27] event/dlb2: add v2.5 finish map/unmap Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 18/27] event/dlb2: add v2.5 sparse cq mode Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 19/27] event/dlb2: add v2.5 sequence number management Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 20/27] event/dlb2: use new implementation of resource header Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 21/27] event/dlb2: use new implementation of resource file Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 22/27] event/dlb2: use new implementation of HW types header Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 23/27] event/dlb2: use new combined register map Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 24/27] event/dlb2: update xstats for v2.5 Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 25/27] doc/dlb2: update documentation " Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 26/27] event/dlb: rename dlb2 driver Timothy McDaniel
2021-04-15  1:49     ` [dpdk-dev] [PATCH v4 27/27] event/dlb: move rte config defines to runtime devargs Timothy McDaniel
2021-05-01 19:03   ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 01/26] event/dlb2: minor code cleanup McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 02/26] event/dlb2: add v2.5 probe McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 03/26] event/dlb2: add v2.5 HW register definitions McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 04/26] event/dlb2: add v2.5 HW init McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 05/26] event/dlb2: add v2.5 get resources McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 06/26] event/dlb2: add v2.5 create sched domain McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 07/26] event/dlb2: add v2.5 domain reset McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 08/26] event/dlb2: add v2.5 create ldb queue McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 09/26] event/dlb2: add v2.5 create ldb port McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 10/26] event/dlb2: add v2.5 create dir port McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 11/26] event/dlb2: add v2.5 create dir queue McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 12/26] event/dlb2: add v2.5 map qid McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 13/26] event/dlb2: add v2.5 unmap queue McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 14/26] event/dlb2: add v2.5 start domain McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 15/26] event/dlb2: add v2.5 credit scheme McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 16/26] event/dlb2: add v2.5 queue depth functions McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 17/26] event/dlb2: add v2.5 finish map/unmap McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 18/26] event/dlb2: add v2.5 sparse cq mode McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 19/26] event/dlb2: add v2.5 sequence number management McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 20/26] event/dlb2: use new implementation of resource header McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 21/26] event/dlb2: use new implementation of resource file McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 22/26] event/dlb2: use new implementation of HW types header McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 23/26] event/dlb2: use new combined register map McDaniel, Timothy
2021-05-01 19:03     ` [dpdk-dev] [PATCH v5 24/26] event/dlb2: update xstats for v2.5 McDaniel, Timothy
2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 25/26] event/dlb2: move rte config defines to runtime devargs McDaniel, Timothy
2021-05-01 19:04     ` [dpdk-dev] [PATCH v5 26/26] doc/dlb2: update documentation for v2.5 McDaniel, Timothy
2021-05-04  8:28     ` [dpdk-dev] [PATCH v5 00/26] Add DLB v2.5 Jerin Jacob
2021-03-16 22:18 ` [dpdk-dev] [PATCH 02/25] event/dlb2: add DLB v2.5 probe-time hardware init Timothy McDaniel
2021-03-21 10:30   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2021-03-26 16:37     ` McDaniel, Timothy
2021-03-16 22:18 ` [dpdk-dev] [PATCH 03/25] event/dlb2: add DLB v2.5 support to get_resources Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 04/25] event/dlb2: add DLB v2.5 support to create sched domain Timothy McDaniel
2021-04-03 10:22   ` Jerin Jacob
2021-03-16 22:18 ` [dpdk-dev] [PATCH 05/25] event/dlb2: add DLB v2.5 support to domain reset Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 06/25] event/dlb2: add DLB V2.5 support to create ldb queue Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 07/25] event/dlb2: add DLB v2.5 support to create ldb port Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 08/25] event/dlb2: add DLB v2.5 support to create dir port Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 09/25] event/dlb2: add DLB v2.5 support to create dir queue Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 10/25] event/dlb2: add DLB v2.5 support to map qid Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 11/25] event/dlb2: add DLB v2.5 support to unmap queue Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 12/25] event/dlb2: add DLB v2.5 support to start domain Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 13/25] event/dlb2: add DLB v2.5 credit scheme Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 14/25] event/dlb2: Add DLB v2.5 support to get queue depth functions Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 15/25] event/dlb2: add DLB v2.5 finish map/unmap interfaces Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 16/25] event/dlb2: add DLB v2.5 sparse cq mode Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 17/25] event/dlb2: add DLB v2.5 support to sequence number management Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 18/25] event/dlb2: consolidate dlb resource header files into one file Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 19/25] event/dlb2: delete old dlb2_resource.c file Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 20/25] event/dlb2: move dlb_resource_new.c to dlb_resource.c Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 21/25] event/dlb2: remove temporary file, dlb_hw_types.h Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 22/25] event/dlb2: move dlb2_hw_type_new.h to dlb2_hw_types.h Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 23/25] event/dlb2: delete old register map file, dlb2_regs.h Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 24/25] event/dlb2: rename dlb2_regs_new.h to dlb2_regs.h Timothy McDaniel
2021-03-16 22:18 ` [dpdk-dev] [PATCH 25/25] event/dlb2: update xstats for DLB v2.5 Timothy McDaniel
2021-03-21 10:50 ` [dpdk-dev] [PATCH 00/25] Add Support " Jerin Jacob
