DPDK patches and discussions
From: Ajit Khaparde <ajit.khaparde@broadcom.com>
To: dev@dpdk.org
Cc: Michael Wildt <michael.wildt@broadcom.com>,
	Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>,
	Randy Schacher <stuart.schacher@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design
Date: Wed,  1 Jul 2020 21:11:06 -0700
Message-ID: <20200702041134.43198-24-ajit.khaparde@broadcom.com>
In-Reply-To: <20200702041134.43198-1-ajit.khaparde@broadcom.com>

From: Michael Wildt <michael.wildt@broadcom.com>

- Move the bulk table get implementation to the new Tbl Module design
  (a usage sketch follows the sign-off block below).
- Update the messages used for bulk table get.
- Retrieve the specified table element using the bulk mechanism.
- Remove deprecated resource definitions.
- Update the device type configuration for P4.
- Update the RM DB HCAPI count check and fix the EM internal and host
  code such that EM DBs can be created correctly.
- Change error logging to info level on unbind in the different modules.
- Move RTE RSVD out of tf_resources.h.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
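
The sketch below only illustrates how the reworked bulk get is expected
to be driven; it follows the parameter mapping visible in tf_core.c in
this patch. The table type, the DMA buffer handling and the error
handling are placeholders, not code from this series.

	/* Illustrative caller of the new bulk table get path.  'type' must
	 * be an internal table type; TF_TBL_TYPE_EXT is rejected with
	 * -EOPNOTSUPP.  'dma_addr' is assumed to point at a DMA-able buffer
	 * of at least num_entries * entry_sz_in_bytes bytes.
	 */
	struct tf_bulk_get_tbl_entry_parms bparms;
	int rc;

	memset(&bparms, 0, sizeof(bparms));
	bparms.dir = TF_DIR_RX;
	bparms.type = type;                    /* internal table type (assumed) */
	bparms.starting_idx = 0;
	bparms.num_entries = 16;
	bparms.entry_sz_in_bytes = 8;
	bparms.physical_mem_addr = dma_addr;   /* assumed DMA buffer address */

	rc = tf_bulk_get_tbl_entry(tfp, &bparms);
	if (rc)
		TFP_DRV_LOG(ERR, "bulk get failed: %s\n", strerror(-rc));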
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |  250 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 ---
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++-----------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -----
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ---
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 ++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 ++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 --
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 26 files changed, 2337 insertions(+), 6242 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 000000000..c30e4f49c
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        The remainder is hand coded due to the lack of XML for
+ *        additional tables at this time (EEM Record and union fields)
+ *
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
+#endif /* _CFA_P40_TBL_H_ */
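
The layouts above are pure data: each entry pairs a field's bit position
with its width. A minimal sketch of how such a layout could be consumed,
for example to sum the total width of a key, is given below; the member
names bitpos/bitlen are assumptions about struct hcapi_cfa_field in
hcapi_cfa_defs.h and are not shown in this patch.

	/* Sketch only: total width of a layout.  Member names are assumed. */
	static uint16_t
	cfa_layout_total_bits(const struct hcapi_cfa_field *layout, size_t nfields)
	{
		uint16_t total = 0;
		size_t i;

		for (i = 0; i < nfields; i++)
			total += layout[i].bitlen;

		return total;
	}

	/* e.g. cfa_layout_total_bits(cfa_p40_act_veb_tcam_layout,
	 *                            RTE_DIM(cfa_p40_act_veb_tcam_layout));
	 */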
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index f60af4e56..7a67493bd 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -243,6 +243,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc8b..7f3ec6204 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index f186741e4..9ba60e1c2 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca835..b982203db 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to perform parameter checks */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c8a..8727900c4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
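
The core entry points above only dispatch through the per-device ops
table; a device that does not provide the hook is reported as
unsupported. A condensed sketch of that wiring, using a hypothetical
device ops table rather than the actual P4 table further down:

	/* Hypothetical device that implements plain get but not bulk get. */
	const struct tf_dev_ops my_dev_ops = {
		.tf_dev_get_tbl      = tf_tbl_get,
		.tf_dev_get_bulk_tbl = NULL,   /* bulk get not implemented */
		/* ... remaining hooks ... */
	};

	/* tf_bulk_get_tbl_entry() then fails early with -EOPNOTSUPP for
	 * this device instead of reaching firmware.
	 */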
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d1039..93f3627d4 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element using 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655a6..e3526672f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65a3..473e4eae5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c200..6bfcbd59e 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the next lower supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
 /*
  * Used to build GFID:
  *
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
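
The EEM page-size block added above stores the page size as a shift
count, so the derived macros are simple powers of two. Spelled out for
the default 2M setting:

	/* With BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M (= 21):
	 *   TF_EM_PAGE_SHIFT     = 21
	 *   TF_EM_PAGE_SIZE      = 1 << 21 = 2097152 bytes (2 MiB)
	 *   TF_EM_PAGE_ALIGNMENT = 1 << 21 = 2 MiB
	 * and the HWRM page-size enum selected for firmware is
	 * HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M.
	 */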
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7ac1..d0d80daeb 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request to support EEM; if so,
+		 * build an EM Ext DB holding Table Scopes.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad5d..1c514747d 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request for internal EM records;
+		 * if so, build an EM Int DB.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
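
Both EM bind paths now share the same shape: directions with a zero
allocation request are skipped, a DB-created flag is tracked, and the
module is only marked initialized when at least one DB exists; unbind
on an uninitialized module logs at info level and returns success. A
generic sketch of that guard, not the driver code verbatim:

	/* Generic sketch of the bind/unbind guard used by the EM modules. */
	static int module_init;

	static int module_bind(uint16_t alloc_cnt[TF_DIR_MAX])
	{
		int dir, created = 0;

		for (dir = 0; dir < TF_DIR_MAX; dir++) {
			if (alloc_cnt[dir] == 0)
				continue;        /* nothing requested for this direction */
			/* ... create the per-direction DB here ... */
			created = 1;
		}

		if (created)
			module_init = 1;         /* only bound once a DB exists */
		return 0;
	}

	static int module_unbind(void)
	{
		if (!module_init)
			return 0;                /* nothing created; not an error */
		/* ... free the DBs ... */
		module_init = 0;
		return 0;
	}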
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb271..211371081 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc84..02d8a4971 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
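
tf_msg_bulk_get_tbl_entry() now takes the direction, the HCAPI type and
the raw bulk parameters instead of the core-level parms struct, and it
looks up the session itself. A hedged sketch of a call site in the table
module (the tf_tbl_bulk_get() body is not part of this excerpt, so the
names around the call are assumptions):

	/* Sketch of a caller of the new signature; 'hcapi_type' would come
	 * from the RM DB lookup and is a placeholder here.
	 */
	rc = tf_msg_bulk_get_tbl_entry(tfp,
				       parms->dir,
				       hcapi_type,
				       parms->starting_idx,
				       parms->num_entries,
				       parms->entry_sz_in_bytes,
				       parms->physical_mem_addr);
	if (rc) {
		TFP_DRV_LOG(ERR,
			    "%s, Bulk get failed, rc:%s\n",
			    tf_dir_2_str(parms->dir),
			    strerror(-rc));
		return rc;
	}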
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4c0..7432873d7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b445102..4688514fc 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e64d..e0469b653 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the hw query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
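
/*
 * Illustrative sketch only (not part of this patch): a minimal,
 * self-contained model of the RM element/DB layout above. The ex_
 * names and types are hypothetical stand-ins; the driver's real types
 * (struct tf_rm_element_cfg, struct bitalloc pools) are richer.
 */
#include <stdint.h>
#include <stdlib.h>

enum ex_cfg_type { EX_CFG_NULL, EX_CFG_HCAPI, EX_CFG_PRIVATE };

struct ex_rm_element {
	enum ex_cfg_type cfg_type; /* NULL: not valid for this device */
	uint16_t hcapi_type;       /* ignored when cfg_type is PRIVATE */
	uint16_t start;            /* allocated range start */
	uint16_t stride;           /* allocated range size */
};

struct ex_rm_db {
	uint16_t num_entries;      /* number of elements in db[] */
	struct ex_rm_element *db;  /* array of RM elements */
};

/* Build a DB from parallel cfg/reservation arrays. */
static struct ex_rm_db *
ex_rm_db_create(uint16_t count, const enum ex_cfg_type *cfg,
		const uint16_t *reservations)
{
	struct ex_rm_db *rm_db;
	uint16_t i;

	rm_db = calloc(1, sizeof(*rm_db));
	if (rm_db == NULL)
		return NULL;
	rm_db->db = calloc(count, sizeof(*rm_db->db));
	if (rm_db->db == NULL) {
		free(rm_db);
		return NULL;
	}
	rm_db->num_entries = count;
	for (i = 0; i < count; i++) {
		rm_db->db[i].cfg_type = cfg[i];
		/* Only HCAPI controlled types get a usable range. */
		rm_db->db[i].stride =
			(cfg[i] == EX_CFG_HCAPI) ? reservations[i] : 0;
	}
	return rm_db;
}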
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Count the HCAPI resource reservations requested for a module.
  *
- * [in] dir
- *   Receive or transmit direction
+ * Walks the module configuration array and counts the element types
+ * that are HCAPI controlled and have a non-zero reservation. Types
+ * that are requested but not supported by the device are logged.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
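+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Module type, used for logging purposes
+ *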
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *     None, the count is returned through valid_count
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources has been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log a message if a type is requested for
+		 * reservation but not supported. The EM module is
+		 * skipped as it uses a split configuration array and
+		 * would otherwise fail this check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbld scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
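
/*
 * Illustrative sketch only (not part of this patch): a self-contained
 * model of the counting logic in tf_rm_count_hcapi_reservations()
 * above. The ex_ names are hypothetical; the real code also logs the
 * module/subtype strings and skips the EM module.
 */
#include <stdint.h>
#include <stdio.h>

enum ex_cfg_type { EX_CFG_NULL, EX_CFG_HCAPI, EX_CFG_PRIVATE };

static uint16_t
ex_count_hcapi_reservations(const enum ex_cfg_type *cfg,
			    const uint16_t *reservations, uint16_t count)
{
	uint16_t i, cnt = 0;

	for (i = 0; i < count; i++) {
		if (cfg[i] == EX_CFG_HCAPI && reservations[i] > 0)
			cnt++;
		else if (cfg[i] == EX_CFG_NULL && reservations[i] > 0)
			printf("type %u requested but not supported\n", i);
	}
	return cnt;
}

int main(void)
{
	enum ex_cfg_type cfg[3] = { EX_CFG_HCAPI, EX_CFG_NULL, EX_CFG_HCAPI };
	uint16_t rsv[3] = { 8, 4, 0 };

	/* Only cfg[0] is HCAPI controlled with a non-zero request. */
	printf("valid entries: %u\n",
	       ex_count_hcapi_reservations(cfg, rsv, 3));
	return 0;
}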
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base index adjustment action types.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0-based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
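The adjust helper above is pure base arithmetic. The following standalone sketch (not driver code; the base value is purely hypothetical) shows the round trip between a 0-based pool index and the device index handed out to callers:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t base = 8192;   /* hypothetical start value from the RM DB */
	uint32_t pool_idx = 5;  /* 0-based index from the bit allocator */
	uint32_t dev_idx;

	/* TF_RM_ADJUST_ADD_BASE: hand a device index out to the caller */
	dev_idx = pool_idx + base;

	/* TF_RM_ADJUST_RM_BASE: convert the caller index back to pool space */
	pool_idx = dev_idx - base;

	printf("dev:%" PRIu32 " pool:%" PRIu32 "\n", dev_idx, pool_idx);
	return 0;
}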
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds
+ *   the residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log to the console the types
+	 * that were not cleaned up.
+	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed-in DB for any lingering elements. If
+ * a resource type is found not to have been cleaned up by the caller,
+ * its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation array
+ *   is allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. The array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
+			goto cleanup_residuals;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
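The residual check above is a two-pass scan: first count the DB entries that still hold resources, then build a smaller array containing only those entries. Below is a standalone sketch of the same compaction pattern, with hypothetical counts and plain libc allocation instead of the driver's tfp helpers:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	/* Hypothetical per-entry in-use counts left after session cleanup */
	uint16_t residuals[] = { 0, 3, 0, 0, 7, 0 };
	size_t n = sizeof(residuals) / sizeof(residuals[0]);
	size_t i, found = 0;

	/* Pass 1: count the entries that still hold resources */
	for (i = 0; i < n; i++)
		if (residuals[i])
			found++;

	/* Pass 2: build the reduced array handed on for cleanup */
	uint16_t *resv = calloc(found, sizeof(*resv));
	if (resv == NULL)
		return 1;
	for (i = 0, found = 0; i < n; i++)
		if (residuals[i])
			resv[found++] = residuals[i];

	for (i = 0; i < found; i++)
		printf("residual[%zu]=%u\n", i, (unsigned int)resv[i]);
	free(resv);
	return 0;
}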
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against DB requirements. However, as a
+	 * DB can hold elements that are not HCAPI, we can reduce the
+	 * request message content by removing those from the request,
+	 * while the DB still holds them all so as to give a fast
+	 * lookup. We can also remove entries for which no elements
+	 * were requested.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request ends up being
+	 * empty. An unsupported, if not rare, case where no resources
+	 * are necessary for a 'direction'.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non-HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non-HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation, there is
+		 * no need to create a pool nor to verify the reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element requested an allocation and that
+		 * allocation was a success (full amount), then
+		 * create the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
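DB creation forwards only the HCAPI-controlled, non-zero requests to the firmware and refuses any single request that exceeds what QCAPS reported as available. A standalone sketch of that all-or-nothing filter (counts are hypothetical, not taken from any device):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Hypothetical requested counts and QCAPS maxima per element */
	uint16_t alloc_cnt[] = { 0, 16, 64, 8 };
	uint16_t qcaps_max[] = { 0, 32, 64, 4 };
	size_t n = sizeof(alloc_cnt) / sizeof(alloc_cnt[0]);
	size_t i, req = 0;

	for (i = 0; i < n; i++) {
		if (alloc_cnt[i] == 0)
			continue;               /* nothing requested, skip */
		if (alloc_cnt[i] > qcaps_max[i]) {
			printf("element %zu: req %d > avail %d, fail\n",
			       i, alloc_cnt[i], qcaps_max[i]);
			return 1;               /* all-or-nothing policy */
		}
		req++;                          /* element joins the request */
	}

	printf("%zu elements forwarded to the firmware request\n", req);
	return 0;
}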
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. Identifier, which is how we end up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in this 'cleanup checking' the DB is scanned for any
+	 * remaining elements and any found are logged.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previously allocated elements to the RM and then close the
+	 * HCAPI RM registration. That saves several 'free' msgs
+	 * from being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so we can only
+		 * log that the FW flush failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from top of the tcam i.e. high
+	 * priority !0: allocate index from bottom i.e. lowest
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non-zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
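The priority flag selects which end of the pool the allocator searches from. The standalone sketch below illustrates the two search directions over a plain in-use map; it is not the driver's bitalloc implementation, and the priority-to-direction mapping here is for illustration only:

#include <stdio.h>

#define POOL_SZ 16

/* Return the first free index scanning from the low end (prio == 0) or
 * from the high end (prio != 0) of a simple in-use map; -1 when full.
 */
static int
alloc_idx(unsigned char *used, int prio)
{
	int i;

	if (prio == 0) {
		for (i = 0; i < POOL_SZ; i++)
			if (!used[i]) {
				used[i] = 1;
				return i;
			}
	} else {
		for (i = POOL_SZ - 1; i >= 0; i--)
			if (!used[i]) {
				used[i] = 1;
				return i;
			}
	}
	return -1;
}

int
main(void)
{
	unsigned char used[POOL_SZ] = { 0 };

	printf("prio 0 -> %d\n", alloc_idx(used, 0));    /* lowest free index */
	printf("prio 1 -> %d\n", alloc_idx(used, 1));    /* highest free index */
	return 0;
}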
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non-zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
 
 	return rc;
 }
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non-zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid, no
+	 * elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13a7..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exists within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types there for
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB works on its initially allocated sizes, so dynamically
+ * growing a particular resource is not possible. If this capability
+ * later becomes a requirement then the MAX pool size of the Chip
+ * needs to be added to the tf_rm_elem_info structure and several new
+ * APIs would need to be added to allow for growth of a single TF
+ * resource type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements that the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to specify
+ * how an individual TF type is handled in regard to the HCAPI RM of
+ * the same type.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to flag that indicates the state of the query
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for a single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exists
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
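
As a brief illustration (the rm_db handle and index are assumed to come from
tf_rm_create_db() and tf_rm_allocate() and are not part of this patch),
checking whether a previously allocated index is still in use could look like:

/* Sketch only: query allocation state of 'index' in DB entry 0. */
static int
example_is_allocated(void *rm_db, uint32_t index, int *allocated)
{
	struct tf_rm_is_allocated_parms qparms = { 0 };

	qparms.rm_db = rm_db;
	qparms.db_index = 0;
	qparms.index = index;
	qparms.allocated = allocated;
	return tf_rm_is_allocated(&qparms);
}
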
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
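
To make the intended call sequence of the API declared above concrete, here is
a minimal sketch of a module creating an RM DB, allocating and freeing one
element, and tearing the DB down again. The HCAPI type value, the allocation
count and the use of TF_DEVICE_MODULE_TYPE_TABLE/TF_DIR_RX are placeholders
chosen for illustration only:

#include "tf_rm.h"

/* Sketch only: one-element DB lifecycle with placeholder values. */
static int
example_rm_db_lifecycle(struct tf *tfp)
{
	struct tf_rm_element_cfg cfg[1] = {
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI, .hcapi_type = 0 },
	};
	uint16_t alloc_cnt[1] = { 8 };	/* request 8 elements */
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	struct tf_rm_free_db_parms dparms = { 0 };
	void *rm_db = NULL;
	uint32_t index = 0;
	int rc;

	cparms.type = TF_DEVICE_MODULE_TYPE_TABLE;
	cparms.dir = TF_DIR_RX;
	cparms.num_elements = 1;
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;
	cparms.rm_db = &rm_db;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	aparms.rm_db = rm_db;
	aparms.db_index = 0;
	aparms.index = &index;
	aparms.priority = 0;	/* allocate from the low end */
	rc = tf_rm_allocate(&aparms);
	if (rc)
		goto done;

	fparms.rm_db = rm_db;
	fparms.db_index = 0;
	fparms.index = (uint16_t)index;
	rc = tf_rm_free(&fparms);

done:
	dparms.dir = TF_DIR_RX;
	dparms.rm_db = rm_db;
	tf_rm_free_db(tfp, &dparms);

	return rc;
}
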
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be654a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log the types that wasn't
-	 * cleaned up to the console.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed in DB for any lingering elements. If
- * a resource type was found to not have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocate reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer Pointer to a reservation array. The reservation array is
- *   allocated after the residual scan and holds any found residual
- *   entries. Thus it can be smaller than the DB that the check was
- *   performed on. Array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against DB requirements. However, as a
-	 * DB can hold elements that are not HCAPI we can reduce the
-	 * req msg content by removing those out of the request yet
-	 * the DB holds them all as to give a fast lookup. We can also
-	 * remove entries where there are no request for elements.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * has been requested
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation no need
-		 * to create a pool nor verify if we got a reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will cleanup each of
-	 * its support modules, i.e. Identifier, thus we're ending up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in the 'cleanup checking' the DB is checked for any
-	 * remaining elements and logged if found to be the case.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previous allocated elements to the RM and then close the
-	 * HCAPI RM registration. That then saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging direction matters and that is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging), if the pool is not valid there
-	 * was no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb68892a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types there for
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions are providing access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB will work on its initial allocated sizes so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip œneeds to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions does not check for NULL pointers as it's a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists off, are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that is not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the prority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t index;
-	/**
-	 * [in] Pointer to flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Free's a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an elements allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb0955..e4472ed7f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool*/
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
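For context on the removal above: the BITALLOC_INST() members were fixed bit-allocator pools embedded directly in the session and sized by the TF_NUM_*/TF_RSVD_* constants this series retires. Under the new design each module instead owns one opaque RM DB handle per direction, built at bind time from the session resource counts. A compressed sketch of that pattern, using only names that appear in the tf_tbl.c hunk below (illustrative only, not part of the patch itself):

	/* Per-direction table DBs replace the per-session pools */
	static void *tbl_db[TF_DIR_MAX];

	static int example_tbl_db_setup(struct tf *tfp,
					struct tf_tbl_cfg_parms *parms)
	{
		struct tf_rm_create_db_parms db_cfg = { 0 };
		int i, rc;

		db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
		db_cfg.num_elements = parms->num_elements;
		db_cfg.cfg = parms->cfg;
		for (i = 0; i < TF_DIR_MAX; i++) {
			db_cfg.dir = i;
			db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
			db_cfg.rm_db = &tbl_db[i];
			rc = tf_rm_create_db(tfp, &db_cfg);
			if (rc)
				return rc;
		}
		return 0;
	}

tf_tbl_bind() in the hunk below is the real version of this, including the init flag handling.
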
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d7f5de4c4..05e866dc6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -5,175 +5,413 @@
 
 /* Truflow Table APIs and supporting code */
 
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremente and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremente and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-	return -EOPNOTSUPP;
-}
-#endif /* TF_SHADOW */
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
 
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Look up the HCAPI type of the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
 
- /* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+		return -EINVAL;
+	}
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
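
Since TF_TBL_TYPE_EXT is still rejected in tf_tbl_bulk_get() above, bulk reads only cover internal table types, and the caller supplies a DMA-able buffer whose physical address the firmware fills. A minimal caller-side sketch, following the tfp_calloc()/mem_pa guidance in the tf_tbl.h comment further below; the table type, entry size and index range are illustrative assumptions, and the tfp_calloc_parms fields other than mem_pa are assumed from the tfp API rather than defined by this patch:

	static int example_bulk_counter_read(struct tf *tfp,
					     uint32_t stats_base_idx)
	{
		struct tf_tbl_get_bulk_parms bparms = { 0 };
		struct tfp_calloc_parms mem = { 0 };
		int rc;

		/* DMA-able buffer; firmware copies the entries to mem.mem_pa */
		mem.nitems = 10;             /* number of entries (assumed) */
		mem.size = sizeof(uint64_t); /* per-entry size (assumed) */
		mem.alignment = 0;
		rc = tfp_calloc(&mem);
		if (rc)
			return rc;

		bparms.dir = TF_DIR_RX;
		bparms.type = TF_TBL_TYPE_ACT_STATS_64; /* illustrative type */
		bparms.starting_idx = stats_base_idx;   /* previously allocated range */
		bparms.num_entries = 10;
		bparms.entry_sz_in_bytes = sizeof(uint64_t);
		bparms.physical_mem_addr = mem.mem_pa;

		return tf_tbl_bulk_get(tfp, &bparms);
	}

Note that tf_tbl_bulk_get() itself walks tf_rm_is_allocated() over the whole requested range before issuing the single firmware request.
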
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b17557345..eb560ffa7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount reaches 0 the element is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves a bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H */
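
Taken together, the prototypes above define the full per-entry flow for internal table types. A rough lifetime sketch for one entry, assuming tf_tbl_bind() has already been invoked by the device layer; the direction, table type and payload size are illustrative assumptions, not values fixed by this patch:

	static int example_tbl_entry_lifetime(struct tf *tfp)
	{
		struct tf_tbl_alloc_parms ap = { 0 };
		struct tf_tbl_set_parms sp = { 0 };
		struct tf_tbl_get_parms gp = { 0 };
		struct tf_tbl_free_parms fp = { 0 };
		uint8_t data[16] = { 0 };  /* entry payload (size assumed) */
		uint32_t idx;
		int rc;

		/* Reserve an index from the per-direction RM DB */
		ap.dir = TF_DIR_TX;
		ap.type = TF_TBL_TYPE_FULL_ACT_RECORD; /* illustrative type */
		ap.idx = &idx;
		rc = tf_tbl_alloc(tfp, &ap);
		if (rc)
			return rc;

		/* Program the entry in firmware */
		sp.dir = TF_DIR_TX;
		sp.type = ap.type;
		sp.data = data;
		sp.data_sz_in_bytes = sizeof(data);
		sp.idx = idx;
		rc = tf_tbl_set(tfp, &sp);
		if (rc)
			return rc;

		/* Read it back */
		gp.dir = TF_DIR_TX;
		gp.type = ap.type;
		gp.data = data;
		gp.data_sz_in_bytes = sizeof(data);
		gp.idx = idx;
		rc = tf_tbl_get(tfp, &gp);
		if (rc)
			return rc;

		/* Return the index to the RM DB */
		fp.dir = TF_DIR_TX;
		fp.type = ap.type;
		fp.idx = idx;
		return tf_tbl_free(tfp, &fp);
	}
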
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af6060..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489a6..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad56..fc047f8f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.21.1 (Apple Git-122.3)


  parent reply	other threads:[~2020-07-02  4:17 UTC|newest]

Thread overview: 271+ messages
2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 02/50] net/bnxt: Infrastructure support for VF-reps data path Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 03/50] net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep Endpoint Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 04/50] net/bnxt: initialize parent PF information Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 05/50] net/bnxt: modify ulp_port_db_dev_port_intf_update prototype Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 06/50] net/bnxt: get port & function related information Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 07/50] net/bnxt: add support for bnxt_hwrm_port_phy_qcaps Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 08/50] net/bnxt: modify port_db to store & retrieve more info Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 09/50] net/bnxt: add support for Exact Match Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 10/50] net/bnxt: modify EM insert and delete to use HWRM direct Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 11/50] net/bnxt: add multi device support Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 12/50] net/bnxt: support bulk table get and mirror Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 13/50] net/bnxt: update multi device design support Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 14/50] net/bnxt: support two-level priority for TCAMs Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 15/50] net/bnxt: add HCAPI interface support Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 16/50] net/bnxt: add core changes for EM and EEM lookups Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 17/50] net/bnxt: implement support for TCAM access Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 18/50] net/bnxt: multiple device implementation Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 19/50] net/bnxt: update identifier with remap support Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 20/50] net/bnxt: update RM with residual checker Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 21/50] net/bnxt: support two level priority for TCAMs Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 22/50] net/bnxt: support EM and TCAM lookup with table scope Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new design Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 24/50] net/bnxt: update RM to support HCAPI only Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 25/50] net/bnxt: remove table scope from session Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 26/50] net/bnxt: add external action alloc and free Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 27/50] net/bnxt: align CFA resources with RM Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 28/50] net/bnxt: implement IF tables set and get Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 29/50] net/bnxt: add TF register and unregister Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 30/50] net/bnxt: add global config set and get APIs Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 31/50] net/bnxt: add support for EEM System memory Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 32/50] net/bnxt: integrate with the latest tf_core library Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 33/50] net/bnxt: add support for internal encap records Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 34/50] net/bnxt: add support for if table processing Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 35/50] net/bnxt: disable vector mode in tx direction when truflow is enabled Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 36/50] net/bnxt: add index opcode and index operand mapper table Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 37/50] net/bnxt: add support for global resource templates Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 38/50] net/bnxt: add support for internal exact match entries Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 39/50] net/bnxt: add support for conditional execution of mapper tables Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 40/50] net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 41/50] net/bnxt: enhancements for port db Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 42/50] net/bnxt: fix for VF to VFR conduit Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 43/50] net/bnxt: fix to parse representor along with other dev-args Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 44/50] net/bnxt: fill mapper parameters with default rules info Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 45/50] net/bnxt: add support for vf rep and stat templates Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 46/50] net/bnxt: create default flow rules for the VF-rep conduit Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 47/50] net/bnxt: add ingress & egress port default rules Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 48/50] net/bnxt: fill cfa_action in the tx buffer descriptor properly Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 49/50] net/bnxt: support for ULP Flow counter Manager Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 50/50] net/bnxt: Add support for flow query with action_type COUNT Somnath Kotur
2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 43/51] net/bnxt: parse representor along with other dev-args Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 44/51] net/bnxt: fill mapper parameters with default rules info Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 46/51] net/bnxt: create default flow rules for the VF-rep conduit Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 51/51] doc: update release notes Ajit Khaparde
2020-07-01 14:26   ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
2020-07-01 21:31     ` Ferruh Yigit
2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-02  4:11         ` Ajit Khaparde [this message]
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 40/51] net/bnxt: enable port MAC qcfg for trusted VF Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 51/51] doc: update release notes Ajit Khaparde
2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 51/51] doc: update release notes Ajit Khaparde
2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-06 10:07           ` Ferruh Yigit
2020-07-06 14:04             ` Somnath Kotur
2020-07-06 14:14               ` Ajit Khaparde
2020-07-06 18:35                 ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
2020-07-06 18:47           ` Ferruh Yigit
2020-07-06 19:11           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-07  8:03           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-07  8:08           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 22/51] net/bnxt: use table scope for EM and TCAM lookup Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 35/51] net/bnxt: disable Tx vector mode if truflow is set Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 38/51] net/bnxt: add support for internal exact match Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 40/51] net/bnxt: allow port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 51/51] doc: update release notes Ajit Khaparde
2020-07-06  1:47         ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
2020-07-06 10:10         ` Ferruh Yigit

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.
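
  For example, with a console client such as mutt (assuming it is
  installed), the saved file can be opened directly and answered with the
  group-reply command ("g" in mutt's default key bindings):

    # open the saved mbox and reply to all recipients of this message
    mutt -f /path/to/saved-thread.mbox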

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20200702041134.43198-24-ajit.khaparde@broadcom.com \
    --to=ajit.khaparde@broadcom.com \
    --cc=dev@dpdk.org \
    --cc=michael.wildt@broadcom.com \
    --cc=stuart.schacher@broadcom.com \
    --cc=venkatkumar.duvvuru@broadcom.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
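
  As a sketch only (the quoted line and comment are illustrative), the file
  passed as /path/to/YOUR_REPLY can be as small as a Subject: header, a
  blank line, and an interleaved reply body:

    Subject: Re: [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design

    > a quoted line from the original patch mail
    Your comment on the quoted line goes here.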

* If your mail client supports setting the In-Reply-To header
  via mailto: links, use the mailto: link provided on this page.
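
  As a rough sketch (In-Reply-To support via mailto: varies by client, and
  the header value must be percent-encoded), such a link carries roughly:

    mailto:ajit.khaparde@broadcom.com?In-Reply-To=%3C20200702041134.43198-24-ajit.khaparde@broadcom.com%3E&Cc=dev@dpdk.org
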
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see the mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).